Despite great advances in brain tumor segmentation and clear clinical need, translation of state-of-the-art computational methods into clinical routine and scientific practice remains a major challenge. Several factors impede successful implementation, including data standardization and preprocessing. However, these steps are pivotal for the deployment of state-of-the-art image segmentation algorithms. To overcome these issues, we present BraTS Toolkit. BraTS Toolkit is a holistic approach to brain tumor segmentation and consists of three components: First, the BraTS Preprocessor facilitates data standardization and preprocessing for researchers and clinicians alike. It covers the entire image analysis workflow prior to tumor segmentation, from image conversion and registration to brain extraction. Second, BraTS Segmentor enables orchestration of BraTS brain tumor segmentation algorithms for generation of fully automated segmentations. Finally, BraTS Fusionator can combine the resulting candidate segmentations into consensus segmentations using fusion methods such as majority voting and iterative SIMPLE fusion. The capabilities of our tools are illustrated with a practical example to enable easy translation to clinical and scientific practice.
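The majority voting fusion mentioned above can be sketched as a voxel-wise vote over candidate label maps. This is an illustrative implementation, not BraTS Toolkit's actual API; the function name and tie-breaking rule are assumptions.

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse candidate label maps by voxel-wise majority vote.

    segmentations: list of integer label arrays of identical shape.
    Returns the label receiving the most votes at each voxel
    (ties broken by the lowest label value).
    """
    stacked = np.stack(segmentations)            # shape: (n_models, ...)
    labels = np.unique(stacked)
    # Count votes per label at every voxel, then take the argmax.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]

# Three candidate segmentations of a tiny 1D "volume":
a = np.array([0, 1, 1, 2])
b = np.array([0, 1, 2, 2])
c = np.array([1, 1, 1, 2])
fused = majority_vote([a, b, c])  # → [0, 1, 1, 2]
```

Iterative SIMPLE fusion extends this idea by weighting each candidate according to its agreement with the evolving consensus, rather than counting all votes equally.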
Glioma segmentation from Magnetic Resonance Imaging (MRI) is known to be a tedious task because of the variability in tumor morphology, extent, and localization. The commonly used deep learning loss functions need advancement to segment the extremely small and multiple target regions present in a single MRI. Dice loss is well known for segmenting imbalanced classes, but it discards the background, which carries substantial detail. Likewise, binary cross-entropy (BCE) weights all segmentation classes equally, suppressing the features of smaller regions. This work therefore proposes a new compound loss function that incorporates background details and enhances segmentation accuracy by predicting even small and multiple tumor regions. The loss function adds a negative logarithm of the Dice coefficient, which can be regarded as implicit regularization and optimizes the training process. In addition, a dual-modal system with the proposed loss function is used to exploit the modality correlation between Fluid-attenuated inversion recovery (FLAIR) and T2. It achieves the highest values of the evaluation metrics on the test set, indicating better generalization capability of the proposed loss function. The accuracy of the proposed segmentation approach has been validated on the Multimodal brain tumor segmentation (BraTS) challenge 2018 and BraTS 2019 datasets. The experiments show that the proposed approach outperforms other state-of-the-art segmentation algorithms, achieving a mean Dice coefficient and mean Hausdorff distance of 0.960 and 2.30, respectively, on BraTS 2018, and 0.962 and 2.29 on BraTS 2019.
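A minimal sketch of such a compound loss, combining BCE with the negative logarithm of the Dice coefficient, is shown below. The equal weighting of the two terms and the smoothing constant are assumptions; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def compound_loss(pred, target, eps=1e-7):
    """Illustrative compound loss: binary cross-entropy plus the
    negative log of the Dice coefficient. `pred` holds probabilities
    in (0, 1); `target` holds binary ground-truth labels."""
    pred = np.clip(pred, eps, 1 - eps)
    # BCE keeps background voxels in the objective.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Soft Dice coefficient over the foreground.
    dice = (2 * np.sum(pred * target) + eps) / (np.sum(pred) + np.sum(target) + eps)
    # -log(dice) penalizes low overlap more sharply than (1 - dice).
    return bce - np.log(dice)

target = np.array([0., 0., 1., 1.])
good = compound_loss(np.array([0.05, 0.05, 0.95, 0.95]), target)
bad = compound_loss(np.array([0.95, 0.95, 0.05, 0.05]), target)
# A near-perfect prediction yields a much smaller loss than a poor one.
```

Because -log(dice) → 0 as the Dice coefficient approaches 1, the term acts as the implicit regularizer described above while leaving well-fit predictions nearly unpenalized.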
Segmenting and quantifying gliomas from MRI is an important task for diagnosis, planning intervention, and tracking tumor changes over time. However, this task is complicated by the lack of prior knowledge concerning tumor location, spatial extent, shape, possible displacement of normal tissue, and intensity signature. To accommodate such complications, we introduce a framework for supervised segmentation based on multiple-modality intensity, geometry, and asymmetry feature sets. These features drive a supervised whole-brain and tumor segmentation approach based on random forest-derived probabilities. The asymmetry-related features (based on optimal symmetric multimodal templates) demonstrate excellent discriminative properties within this framework. We also gain performance by generating probability maps from random forest models and using these maps for a refining, Markov random field-regularized probabilistic segmentation. This strategy allows us to interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package, a comprehensive statistical and visualization interface between the popular Advanced Normalization Tools (ANTs) and the R statistical project. The reported algorithmic framework was the top-performing entry in the MICCAI 2013 Multimodal Brain Tumor Segmentation challenge. The challenge data were widely varying, consisting of both high-grade and low-grade glioma four-modality MRI from five different institutions. Average Dice overlap measures for the final algorithmic assessment were 0.87, 0.78, and 0.74 for “complete”, “core”, and “enhanced” tumor components, respectively.
The accessibility and potential of deep learning techniques have increased considerably over the past years. Image segmentation is one of many fields that have seen novel implementations developed to solve problems in the domain. U-Net is a popular deep learning model designed specifically for biomedical image segmentation, initially proposed for cell segmentation. We propose a variation of the U-Net++ model, itself an adaptation of U-Net, and evaluate its brain tumor segmentation capabilities. The proposed approach obtained Dice coefficient scores of 0.7192, 0.8712, and 0.7817 for the Enhancing Tumor, Whole Tumor, and Tumor Core classes of the BraTS 2019 challenge validation dataset. The proposed approach differs from the standard U-Net++ model in a number of ways, including the loss function, the number of convolutional blocks, and the method of employing deep supervision. Data augmentation and post-processing techniques were also implemented and observed to substantially improve the model predictions. Thus, this article presents a novel adaptation of the U-Net++ architecture that is both lightweight and performs comparably with peer-reviewed work evaluated on the same data.
We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 dataset were used. We designed 3 separate 3D Dense-UNets to simplify the complex multiclass segmentation problem into individual binary segmentation problems for each subcomponent. We implemented 3-fold cross-validation to assess the network's generalization performance. The mean cross-validation Dice scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary segmentation networks using 265 of the 285 cases, with 20 cases held out for testing. We also tested the network on 46 cases from the BraTS2017 validation dataset, 66 cases from the BraTS2018 validation dataset, and 52 cases from an independent clinical dataset. The average Dice scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice scores for WT, TC, and ET on the BraTS2017 validation dataset, the BraTS2018 validation dataset, and the clinical dataset were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, achieving high prediction accuracy on the BraTS datasets and on the independent clinical dataset. This method is promising for implementation into a clinical workflow.
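The Dice score used throughout these evaluations measures the overlap between a predicted and a ground-truth binary mask. A minimal reference implementation (the empty-vs-empty convention of 1.0 is an assumption):

```python
def dice_score(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks,
    as used to evaluate WT/TC/ET segmentations. Masks are
    iterables of 0/1 values of equal length."""
    pred = list(pred_mask)
    true = list(true_mask)
    intersection = sum(p and t for p, t in zip(pred, true))
    denom = sum(pred) + sum(true)
    # Convention assumed here: two empty masks score a perfect 1.0.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Example: 3 voxels overlap between a 4-voxel prediction
# and a 4-voxel ground truth, giving 2*3 / (4+4) = 0.75.
print(dice_score([1, 1, 1, 1, 0], [0, 1, 1, 1, 1]))  # → 0.75
```

Decomposing the multiclass problem into binary WT, TC, and ET tasks, as above, lets each network be scored with this single per-region metric.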
A brain tumor is a deformity in tissue where cells divide rapidly and uncontrollably; as a consequence, the tumor expands. It is hypothesized that a neural network can successfully identify and predict brain tumors, two of the most challenging medical problems now facing doctors. The abundance of information in magnetic resonance imaging (MRI), which provides the anatomical features of brain tumors, enhances its diagnostic potential. To improve the efficiency of the semantic segmentation architecture, we introduce a novel transformer-based attention U-shaped network called TransAttU-Net, in which multilevel guided attention and multiscale skip connections operate simultaneously and which is used to extract the pixels of the tumor area. Initially, the input image data are transformed and further processed using various preprocessing techniques, including resizing and rescaling, data augmentation, flipping, and altering the orientation of the data. These procedures are required before sending data to the TransAttU-Net deep learning (DL) model. The algorithm attained high accuracy on the BraTS 2019 dataset, provided in the multimodal brain tumor image segmentation challenge, and on the BraTS 2020 dataset, with particularly strong performance on BraTS 2020. The performance metrics of the models are evaluated and the results are discussed in this article.
Purpose
Personalized interpretation of medical images is critical for optimum patient care, but the tools currently available to physicians for real-time quantitative analysis of a patient's medical images are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images and thus development of large expert-annotated datasets, in parallel with the radiologist performing the reading, that are critically needed for development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction.
Materials and methods
An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional BraTS 2021 glioma dataset. The algorithm was validated using an internal dataset from Yale New Haven Health (YNHH) and compared (by Dice similarity coefficient, DSC) to radiologist manual segmentation. A UNETR deep learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically segmented brain tumor remained editable for manual modification. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations.
Results
UNETR brain tumor segmentation took 4 s on average and the median DSC was 86%, which is similar to the published literature but lower than results from the RSNA-ASNR-MICCAI BraTS 2021 challenge. Extraction of 106 radiomic features within PACS took on average 5.8 ± 0.01 s. The extracted radiomic features did not vary with the time of extraction or with whether they were extracted within PACS or outside of PACS. The workflow makes segmentation and feature extraction available before the radiologist opens the study; opening the study in PACS then allows the radiologist to verify the segmentation and thus annotate the study.
Conclusion
Integration of image processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate translation of research into development of personalized medicine applications in the clinic. The ability to revise the AI segmentations with familiar clinical tools, together with the segmentation and radiomic feature extraction tools natively embedded on the diagnostic workstation, accelerates the generation of ground-truth data.
Manual identification of brain tumors in Magnetic Resonance (MR) images is laborious, time-consuming, and prone to human error. Automatic segmentation of brain tumors from MR images aims to bridge this gap. U-Net, a deep learning model, has delivered promising results in generating brain tumor segmentations. However, the model tends to over-segment the tumor volume beyond what is required, which significantly impacts its deployment for practical use. In this work, the baseline U-Net model has been studied with the addition of residual, multi-resolution, dual attention, and deep supervision blocks. The goal of the residual blocks is to efficiently extract features and reduce the semantic gap between low-level features from the decoder and high-level features from the skip connections. The multi-resolution blocks have been added to extract features and analyze tumors of varying scales. The dual attention mechanism has been incorporated to highlight tumor representations and reduce over-segmentation. Finally, the deep supervision blocks have been added to utilize features from various decoder layers to obtain the target segmentation. The design of the proposed model has been justified with several experiments and ablation studies. The proposed model has been trained and evaluated on the BraTS2020 training and validation datasets. On the validation data, the proposed model achieved Dice scores of 0.60, 0.75, and 0.62 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively, and Hausdorff 95 scores of 46.84, 11.05, and 22.5, respectively. Compared to the baseline U-Net, the proposed model performed better on the WT and TC volumes in the Hausdorff 95 distance metric, though not on the ET volume.
Brain tumors are among the most fatal cancers. Magnetic Resonance Imaging (MRI) is a non-invasive method that provides multi-modal images containing important information regarding the tumor. Many contemporary techniques employ four modalities: T1-weighted (T1), T1-weighted with contrast (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR), each of which provides unique and important characteristics for localizing each tumor. Although several modern procedures provide decent segmentation results on the multimodal brain tumor image segmentation benchmark (BraTS) dataset, their performance drops when evaluated simultaneously on all regions of the MRI images. Furthermore, there is still room for improvement owing to parameter limitations and computational complexity. Therefore, in this work, a novel encoder-decoder-based architecture is proposed for the effective segmentation of brain tumor regions. Data pre-processing is performed by applying N4 bias field correction, z-score normalization, and 0-to-1 resampling to facilitate model training. To minimize the loss of location information in different modules, a residual spatial pyramid pooling (RASPP) module is proposed. RASPP is a set of parallel layers using dilated convolution. In addition, an attention gate (AG) module is used to efficiently emphasize and restore the segmented output from the extracted feature maps. The proposed modules attempt to acquire rich feature representations by combining knowledge from diverse feature maps and retaining their local information. The performance of the proposed deep network based on RASPP, AG, and recursive residual (R2) blocks, termed RAAGR2-Net, is evaluated on the BraTS benchmarks. The experimental results show that the proposed network outperforms existing networks, demonstrating the usefulness of the proposed modules for “fine” segmentation. The code for this work is available online at: https://github.com/Rehman1995/RAAGR2-Net.
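The intensity preprocessing steps named above (z-score normalization followed by 0-to-1 rescaling) can be sketched as follows. N4 bias field correction, typically done with tools such as SimpleITK or ANTs, is omitted here, and the function name is an assumption.

```python
import numpy as np

def preprocess_intensities(volume, eps=1e-8):
    """Sketch of the intensity pipeline: z-score normalization
    followed by rescaling to [0, 1]. `volume` is any numeric array."""
    v = volume.astype(np.float64)
    z = (v - v.mean()) / (v.std() + eps)          # z-score normalization
    z_min, z_max = z.min(), z.max()
    return (z - z_min) / (z_max - z_min + eps)    # rescale to [0, 1]

vol = np.array([[10.0, 20.0], [30.0, 100.0]])
out = preprocess_intensities(vol)
# All values now lie in [0, 1], with relative intensity ordering preserved.
```

Normalizing each modality this way puts T1, T1c, T2, and FLAIR on a comparable intensity scale before they are fed to the network.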
•A novel architecture is proposed for the segmentation of brain tumor regions.
•The proposed pre-processing module allows the model to learn in-depth features of the tumor region.
•An encoder-decoder based segmentation model with optimized modules is introduced.
•The modules are designed to minimize data loss during deep feature extraction.
•Depthwise convolution for brain tumor segmentation is explored in this work.
•The proposed model helps preserve the location of the tumor region during the decoding process.