Abstract Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free, open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm while providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications.
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer from existing QIN teams, and we elaborate on future directions that can further facilitate the development and validation of imaging biomarkers using 3D Slicer.
Large‐scale digitization projects such as #ScanAllFishes and oVert are generating high‐resolution microCT scans of vertebrates by the thousands. Data from these projects are shared with the community using aggregate 3D specimen repositories like MorphoSource through various open licenses. We anticipate an explosion of quantitative research in organismal biology with the convergence of available data and the methodologies to analyse them.
Though the data are available, the road from a series of images to analysis is fraught with challenges for most biologists. It involves the tedious tasks of converting data formats, accurately preserving the spatial scale of the data, 3D visualization and segmentation, and acquiring measurements and annotations. When scientists use commercial software with proprietary formats, they erect a roadblock to data exchange, collaboration and reproducibility that hurts the scientific community's efforts to broaden participation in research.
We developed SlicerMorph as an extension of 3D Slicer, a biomedical visualization and analysis ecosystem with extensive visualization and segmentation capabilities built on proven Python‐scriptable open‐source libraries such as the Visualization Toolkit and the Insight Toolkit. In addition to the core functionalities of Slicer, SlicerMorph provides users with modules to conveniently retrieve open‐access 3D models or import users' own 3D volumes, to annotate 3D curve‐ and patch‐based landmarks, to generate landmark templates, to conduct geometric morphometric analyses of 3D organismal form using both landmark‐driven and landmark‐free approaches, and to create 3D animations from their results. We highlight how these individual modules can be tied together to establish complete workflow(s) from image sequence to morphospace. Our software development efforts were supplemented with short courses and workshops that cover the fundamentals of 3D imaging and morphometric analyses as they apply to the study of organismal form and shape in evolutionary biology.
Our goal is to establish a community of organismal biologists centred around Slicer and SlicerMorph to facilitate easy exchange of data and results and collaborations using 3D specimens. Our proposition to our colleagues is that using a common open platform supported by a large user and developer community ensures the longevity and sustainability of the tools beyond the initial development effort.
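Landmark-driven geometric morphometrics of the kind SlicerMorph supports begins with Procrustes superimposition, which removes translation, scale and rotation so that only shape differences between landmark configurations remain. The following is a minimal NumPy sketch of that step for two configurations; it illustrates the general technique under our own naming and is not SlicerMorph's implementation.

```python
import numpy as np

def procrustes_align(ref, target):
    """Superimpose `target` landmarks onto `ref` (both (n_landmarks, dim)
    arrays) by removing translation, scale and rotation."""
    # Center both configurations at the origin.
    ref_c = ref - ref.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    # Scale each configuration to unit centroid size.
    ref_c = ref_c / np.linalg.norm(ref_c)
    tgt_c = tgt_c / np.linalg.norm(tgt_c)
    # Optimal rotation via SVD of the cross-covariance matrix
    # (orthogonal Procrustes / Kabsch solution).
    u, _, vt = np.linalg.svd(tgt_c.T @ ref_c)
    rot = u @ vt
    # Guard against an improper rotation (reflection).
    if np.linalg.det(rot) < 0:
        u[:, -1] *= -1
        rot = u @ vt
    return tgt_c @ rot
```

Generalized Procrustes analysis, as used for samples of many specimens, iterates this pairwise alignment against a running mean shape; the sketch above is the core of one such iteration.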
Diffusion MRI (dMRI) is the only noninvasive method for mapping white matter connections in the brain. We describe SlicerDMRI, a software suite that enables visualization and analysis of dMRI for neuroscientific studies and patient-specific anatomic assessment. SlicerDMRI has been successfully applied in multiple studies of the human brain in health and disease, and here, we especially focus on its cancer research applications. As an extension module of the 3D Slicer medical image computing platform, the SlicerDMRI suite enables dMRI analysis in a clinically relevant multimodal imaging workflow. Core SlicerDMRI functionality includes diffusion tensor estimation, white matter tractography with single and multi-fiber models, and dMRI quantification. SlicerDMRI supports clinical DICOM and research file formats, is open-source and cross-platform, and can be installed as an extension to 3D Slicer (www.slicer.org). More information, videos, tutorials, and sample data are available at dmri.slicer.org.
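A core dMRI quantity such as fractional anisotropy (FA) is derived from the eigenvalues of the estimated diffusion tensor. Below is a minimal NumPy sketch of the standard FA formula for a single 3x3 tensor; it illustrates the kind of quantification SlicerDMRI performs but is not taken from the SlicerDMRI code base.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional anisotropy of a symmetric 3x3 diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    where lambda are the tensor's eigenvalues. FA is 0 for isotropic
    diffusion and approaches 1 for diffusion along a single axis.
    """
    lam = np.linalg.eigvalsh(tensor)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0
```

In practice the tensor itself is first fit per voxel from the diffusion-weighted signal; the eigen-decomposition step above then yields FA and related scalar maps.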
Neurosurgery makes use of preoperative imaging to visualize pathology, inform surgical planning, and evaluate the safety of selected approaches. The utility of preoperative imaging for neuronavigation, however, is diminished by the well-characterized phenomenon of brain shift, in which the brain deforms intraoperatively as a result of craniotomy, swelling, gravity, tumor resection, cerebrospinal fluid (CSF) drainage, and many other factors. As such, there is a need for updated intraoperative information that accurately reflects intraoperative conditions. Since 1982, intraoperative ultrasound has allowed neurosurgeons to craft and update operative plans without ionizing radiation exposure or major workflow interruption. Continued evolution of ultrasound technology since its introduction has resulted in superior imaging quality, smaller probes, and more seamless integration with neuronavigation systems. Furthermore, the introduction of related imaging modalities, such as 3-dimensional ultrasound, contrast-enhanced ultrasound, high-frequency ultrasound, and ultrasound elastography, has dramatically expanded the options available to the neurosurgeon intraoperatively. In the context of these advances, we review the current state, potential, and challenges of intraoperative ultrasound for brain tumor resection. We begin by evaluating these ultrasound technologies and their relative advantages and disadvantages. We then review three specific applications of these ultrasound technologies to brain tumor resection: (1) intraoperative navigation, (2) assessment of extent of resection, and (3) brain shift monitoring and compensation. We conclude by identifying opportunities for future directions in the development of ultrasound technologies.
Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. 3D Slicer, a free platform for biomedical research, provides an alternative to this manual slice-by-slice segmentation process that is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once purely by drawing boundaries completely manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians for 12 GBMs. The time required for GrowCut segmentation was on average 61% of the time required for a pure manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43 ± 5.23% and a Hausdorff Distance of 2.32 ± 5.23 mm.
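The two agreement metrics reported above can be computed directly from binary segmentation masks. The sketch below, in plain NumPy, shows the standard definitions of the Dice Similarity Coefficient and the symmetric Hausdorff distance (in voxel units); it is illustrative only, not the evaluation code used in the study.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two boolean masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty boolean masks,
    measured between foreground voxel coordinates."""
    pa = np.argwhere(a)  # (n, dim) foreground coordinates of A
    pb = np.argwhere(b)
    # Pairwise distances; brute force, O(n*m) memory.
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The brute-force Hausdorff computation is only practical for small masks; production tools typically use distance transforms instead, and physical distances require scaling voxel coordinates by the image spacing.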
The segmentation of medical and dental images is a fundamental step in automated clinical decision support systems. It supports the entire clinical workflow, from diagnosis and therapy planning to intervention and follow-up. In this paper, we propose a novel tool that accurately processes a full-face segmentation in about 5 minutes, a task that would otherwise require an average of 7 h of manual work by experienced clinicians. This work focuses on the integration of the state-of-the-art UNEt TRansformers (UNETR) architecture of the Medical Open Network for Artificial Intelligence (MONAI) framework. We trained and tested our models using 618 de-identified Cone-Beam Computed Tomography (CBCT) volumetric images of the head, acquired with several parameters from different centers for a generalized clinical application. Our results on a 5-fold cross-validation showed high accuracy and robustness, with a Dice score up to 0.962 ± 0.02. Our code is available on our public GitHub repository.
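The 5-fold cross-validation protocol used to evaluate the model can be sketched as a simple index partition in which each sample serves as test data exactly once. The helper below is a generic NumPy illustration under our own naming, not code from the paper's repository.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Samples are shuffled once, split into k near-equal folds, and each
    fold is held out as the test set in turn.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

With 618 volumes and k=5, each held-out fold contains roughly 123-124 scans; in a multi-center setting one would typically also stratify folds by acquisition site, which this sketch omits.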
Neuronavigation greatly improves the surgeon's ability to approach, assess and operate on brain tumors, but it tends to lose accuracy as the surgery progresses and substantial brain shift and deformation occur. Intraoperative MRI (iMRI) can partially address this problem but is resource intensive and workflow disruptive. Intraoperative ultrasound (iUS) provides real-time information that can be used to update neuronavigation and to monitor resection progress. We describe the intraoperative use of 3D iUS in relation to iMRI, and discuss the challenges and opportunities of its use in neurosurgical practice.
We performed a retrospective evaluation of patients who underwent image-guided brain tumor resection in which both 3D iUS and iMRI were used. The study was conducted between June 2020 and December 2020, when an extension of a commercially available navigation software that enables 3D iUS volumes to be reconstructed from tracked 2D iUS images was introduced in our practice. For each patient, three or more 3D iUS images were acquired during the procedure, and one iMRI was acquired towards the end. The iUS images included an extradural ultrasound sweep acquired before dural incision (iUS-1), a post-dural-opening iUS (iUS-2), and a third iUS acquired immediately before the iMRI acquisition (iUS-3). iUS-1 and the preoperative MRI were compared to evaluate the ability of iUS to visualize tumor boundaries and critical anatomic landmarks; iUS-3 and iMRI were compared to evaluate the ability of iUS to predict residual tumor.
Twenty-three patients were included in this study. Fifteen patients had tumors located in eloquent or near-eloquent brain regions; the majority of patients had low-grade gliomas (11); gross total resection was achieved in 12 patients; postoperative temporary deficits were observed in five patients. In twenty-two patients, iUS was able to define tumor location and tumor margins and to indicate relevant landmarks for orientation and guidance. In sixteen cases, white matter fiber tracts computed from preoperative dMRI were overlaid on the iUS images. In nineteen patients, the extent of resection (gross total or subtotal) was predicted by iUS and confirmed by iMRI. The remaining four patients, in whom iUS was not able to evaluate the presence or absence of residual tumor, were recurrent cases with a previous surgical cavity that hindered good contact between the US probe and the brain surface.
This recent experience at our institution illustrates the practical benefits, challenges, and opportunities of 3D iUS in relation to iMRI.
Zero-footprint Web architecture enables imaging applications to be deployed on premise or in the cloud without requiring installation of custom software on the user's computer. Benefits include decreased costs and information technology support requirements, as well as improved accessibility across sites. The Open Health Imaging Foundation (OHIF) Viewer is an extensible platform developed to leverage these benefits and address the demand for open-source Web-based imaging applications. The platform can be modified to support site-specific workflows and accommodate evolving research requirements.
The OHIF Viewer provides basic image review functionality (eg, image manipulation and measurement) as well as advanced visualization (eg, multiplanar reformatting). It is written as a client-only, single-page Web application that can easily be embedded into third-party applications or hosted as a standalone Web site. The platform provides extension points for software developers to include custom tools and adapt the system for their workflows. It is standards compliant and relies on DICOMweb for data exchange and OpenID Connect for authentication, but it can be configured to use any data source or authentication flow. Additionally, the user interface components are provided in a standalone component library so that developers can create custom extensions.
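Because the platform relies on DICOMweb for data exchange, a client searches for studies through QIDO-RS by appending DICOM attribute filters to a `/studies` endpoint. The sketch below only constructs such a query URL; the server address is a hypothetical placeholder and the function name is our own, not part of the OHIF API.

```python
from urllib.parse import urlencode

def qido_studies_url(base_url, **filters):
    """Build a QIDO-RS study-search URL for a DICOMweb server.

    `filters` are DICOM attribute keywords (e.g. PatientID, StudyDate)
    passed as query parameters, per the QIDO-RS search convention.
    """
    root = base_url.rstrip("/")
    query = urlencode(filters)
    return f"{root}/studies?{query}" if query else f"{root}/studies"

# Hypothetical server used purely for illustration.
url = qido_studies_url("https://server.example/dicomweb",
                       PatientID="12345", ModalitiesInStudy="MR")
```

A real client would issue a GET against this URL with an `Accept: application/dicom+json` header and, in an OpenID Connect deployment, a bearer token; the viewer performs these requests internally.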
The OHIF Viewer and its underlying components have been widely adopted and integrated into multiple clinical research platforms (eg, Precision Imaging Metrics, XNAT, LabCAS, ISB-CGC) and commercial applications (eg, OsiriX). It has also been used to build custom imaging applications (eg, ProstateCancer.ai and Crowds Cure Cancer, the latter presented as a case study).
The OHIF Viewer provides a flexible framework for building applications to support imaging research. Its adoption could reduce redundancies in software development for National Cancer Institute-funded projects, including Informatics Technology for Cancer Research and the Quantitative Imaging Network.
Advances in imaging techniques and high-throughput technologies are providing scientists with unprecedented possibilities to visualize internal structures of cells, organs and organisms and to collect systematic image data characterizing genes and proteins on a large scale. To make the best use of these increasingly complex and large image data resources, the scientific community must be provided with methods to query, analyze and crosslink these resources to give an intuitive visual representation of the data. This review gives an overview of existing methods and tools for this purpose and highlights some of their limitations and challenges.
Accurate segmentation of lung nodules is crucial in the development of imaging biomarkers for predicting malignancy of the nodules. Manual segmentation is time-consuming and affected by inter-observer variability. We evaluated the robustness and accuracy of a publicly available semiautomatic segmentation algorithm that is implemented in the 3D Slicer Chest Imaging Platform (CIP) and compared it with the performance of manual segmentation.
CT images of 354 manually segmented nodules were downloaded from the LIDC database. Four radiologists performed the manual segmentation and assessed various nodule characteristics. The semiautomatic CIP segmentation was initialized using the centroid of the manual segmentations, thereby generating four contours for each nodule. The robustness of both segmentation methods was assessed using the region of uncertainty (δ) and the Dice similarity index (DSI), and the two methods were compared using the Wilcoxon signed-rank test (pWilcoxon < 0.05). The Dice similarity index between the manual and CIP segmentations (DSIAgree) was computed to estimate the accuracy of the semiautomatic contours.
The median computational time of the CIP segmentation was 10 s. The median CIP and manually segmented volumes were 477 ml and 309 ml, respectively. CIP segmentations were significantly more robust than manual segmentations (median δCIP = 14 ml, median DSICIP = 99% vs. median δmanual = 222 ml, median DSImanual = 82%), with pWilcoxon ≈ 10^-16. The agreement between CIP and manual segmentations had a median DSIAgree of 60%. While 13% (47/354) of the nodules did not require any manual adjustment, minor to substantial manual adjustments were needed for 87% (305/354) of the nodules. CIP segmentations were observed to perform poorly (median DSIAgree ≈ 50%) for non- and sub-solid nodules with subtle appearances and poorly defined boundaries.
Semiautomatic CIP segmentation can potentially reduce the physician workload for 13% of nodules owing to its computational efficiency and superior stability compared to manual segmentation. Although manual adjustment is needed for many cases, CIP segmentation provides physicians with a preliminary contour as a starting point.