Purpose
According to the World Health Organization classification of tumors of the central nervous system, the mutation status of the isocitrate dehydrogenase (IDH) genes has become a major diagnostic discriminator for gliomas. Therefore, imaging-based prediction of IDH mutation status is of high interest for individual patient management. We compared and evaluated the diagnostic value of radiomics derived from dual positron emission tomography (PET) and magnetic resonance imaging (MRI) data to predict the IDH mutation status non-invasively.
Methods
Eighty-seven glioma patients at initial diagnosis who underwent PET targeting the translocator protein (TSPO) using [18F]GE-180, dynamic amino acid PET using [18F]FET, and T1-/T2-weighted MRI scans were examined. In addition to calculating tumor-to-background ratio (TBR) images for all modalities, parametric images quantifying dynamic [18F]FET PET information were generated. Radiomic features were extracted from TBR and parametric images. The area under the receiver operating characteristic curve (AUC) was employed to assess the performance of logistic regression (LR) classifiers. To report robust estimates, nested cross-validation with five folds and 50 repeats was applied.
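The evaluation scheme described above (logistic regression scored by ROC AUC inside repeated nested cross-validation) can be sketched as follows. This is an illustrative sketch only: the synthetic features stand in for the study's radiomic features, and the hyperparameter grid is an assumption, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     cross_val_score)

# Synthetic stand-in for 87 patients' radiomic feature vectors and IDH labels
X, y = make_classification(n_samples=87, n_features=20, random_state=0)

# Inner loop: tune the regularization strength C (grid is an assumption)
clf = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="roc_auc",
)

# Outer loop: unbiased performance estimate (5 folds; the study used 50
# repeats, reduced to 2 here to keep the sketch fast)
outer_cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=1)
scores = cross_val_score(clf, X, y, cv=outer_cv, scoring="roc_auc")
print(f"nested-CV AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The key point of the nesting is that hyperparameter selection happens entirely inside each outer training fold, so the reported AUC is never computed on data the tuning process has seen.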
Results
TBR_GE-180 features extracted from TSPO-positive volumes had the highest predictive power among TBR images (AUC 0.88; 0.94 with age as co-factor). Dynamic [18F]FET PET reached a similarly high performance (0.94; 0.96 with age). The highest LR coefficients in multimodal analyses included TBR_GE-180 features, parameters from kinetic and early static [18F]FET PET images, age, and features from TBR_T2 images such as the kurtosis (0.97).
Conclusion
The findings suggest that incorporating TBR_GE-180 features along with kinetic information from dynamic [18F]FET PET, kurtosis from TBR_T2, and age can yield very high predictability of IDH mutation status, thus potentially improving early patient management.
Automated segmentation of brain tumours from multimodal MR images is pivotal for the analysis and monitoring of disease progression. As gliomas are malignant and heterogeneous, efficient and accurate segmentation techniques are needed for the successful delineation of tumours into intra-tumoural classes. Deep learning algorithms outperform the more conventional, context-based computer vision approaches on tasks of semantic segmentation. Extensively used for biomedical image segmentation, convolutional neural networks have significantly improved the state-of-the-art accuracy on the task of brain tumour segmentation. In this paper, we propose an ensemble of two segmentation networks, a 3D CNN and a U-Net, in a straightforward yet effective combinative technique that results in more accurate predictions. Both models were trained separately on the BraTS-19 challenge dataset and evaluated to yield segmentation maps that differed considerably from each other in terms of segmented tumour sub-regions, and these maps were ensembled variably to achieve the final prediction. The suggested ensemble achieved Dice scores of 0.750, 0.906, and 0.846 for enhancing tumour, whole tumour, and tumour core, respectively, on the validation set, comparing favourably with the state-of-the-art architectures currently available.
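The ensembling idea above can be sketched with toy binary masks. The union rule below is one simple combination choice (it recovers voxels missed by either model), not necessarily the exact variable ensembling scheme the authors used, and the masks are synthetic:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.7                             # synthetic ground truth
mask_cnn = np.logical_and(truth, rng.random((64, 64)) > 0.2)   # 3D-CNN-like output, misses some voxels
mask_unet = np.logical_and(truth, rng.random((64, 64)) > 0.2)  # U-Net-like output, misses others

# Union: keep a voxel if either model predicts it
ensemble = np.logical_or(mask_cnn, mask_unet)
print(f"CNN {dice(mask_cnn, truth):.3f}  "
      f"U-Net {dice(mask_unet, truth):.3f}  "
      f"ensemble {dice(ensemble, truth):.3f}")
```

Because the two toy masks make different omissions, their union scores at least as high as either model alone; in practice a union rule trades some precision for sensitivity, which is why per-sub-region combination rules are often chosen separately.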
Background
Research suggests that treatment of multiple brain metastases (BMs) with stereotactic radiosurgery shows improvement when metastases are detected early, providing a case for BM detection capabilities on small lesions.
Purpose
To demonstrate automatic detection of BM on three MRI datasets using a deep learning‐based approach. To improve performance, the network is iteratively co‐trained with datasets from different domains. A systematic approach is proposed to prevent catastrophic forgetting during co‐training.
Study Type
Retrospective.
Population
A total of 156 patients (105 ground truth and 51 pseudo labels) with 1502 BM (BrainMetShare); 121 patients with 722 BM (local); 400 patients with 447 primary gliomas (BraTS). Training/pseudo labels/validation data were distributed 84/51/21 (BrainMetShare). Training/validation data were split 121/23 (local) and 375/25 (BraTS).
Field Strength/Sequence
1.5 T and 3 T/T1 spin‐echo postcontrast (T1‐gradient echo) (BrainMetShare), 3 T/T1 magnetization prepared rapid acquisition gradient echo postcontrast (T1‐MPRAGE) (local), 0.5 T, 1 T, and 1.16 T/T1‐weighted fluid‐attenuated inversion recovery (T1‐FLAIR) (BraTS).
Assessment
The ground truth was manually segmented by two (BrainMetShare) and four (BraTS) radiologists, and manually annotated by one (local) radiologist. A confidence- and volume-based domain adaptation (CAVEAT) method of co‐training the three datasets on a 3D nonlocal convolutional neural network (CNN) architecture was implemented to detect BM.
Statistical Tests
The performance was evaluated using sensitivity and false positive rates per patient (FP/patient) and free receiver operating characteristic (FROC) analysis at seven predefined (1/8, 1/4, 1/2, 1, 2, 4, and 8) FPs per scan.
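The FROC evaluation described above can be sketched as follows: sweep a confidence threshold over candidate detections and read off lesion sensitivity at each predefined false-positives-per-scan operating point. The candidates below are synthetic, not the study's detections, and the simple prefix-scan logic is an illustrative implementation rather than the authors' exact analysis code:

```python
import numpy as np

def froc_points(scores, is_lesion, n_scans, fp_targets):
    """Lesion sensitivity at the most permissive threshold giving <= target FPs/scan."""
    order = np.argsort(scores)[::-1]          # candidates by descending confidence
    tp = np.cumsum(is_lesion[order])          # true detections accepted so far
    fp = np.cumsum(~is_lesion[order])         # false positives accepted so far
    sensitivity = tp / is_lesion.sum()
    fp_per_scan = fp / n_scans
    return [float(sensitivity[fp_per_scan <= t].max()) if (fp_per_scan <= t).any() else 0.0
            for t in fp_targets]

rng = np.random.default_rng(1)
is_lesion = rng.random(200) > 0.5             # synthetic candidate labels
scores = rng.random(200) + 0.8 * is_lesion    # true lesions tend to score higher
sens_at_fp = froc_points(scores, is_lesion, n_scans=20,
                         fp_targets=[1/8, 1/4, 1/2, 1, 2, 4, 8])
print(sens_at_fp)
```

Plotting sensitivity against the seven FP/scan targets gives the FROC curve; unlike ROC analysis, the x-axis is an absolute false-positive count per scan rather than a false-positive rate.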
Results
The sensitivity and FP/patient from a held‐out set registered 0.811 at 2.952 FP/patient (BrainMetShare), 0.74 at 3.130 (local), and 0.723 at 2.240 (BraTS) using the CAVEAT approach, with lesions as small as 1 mm being detected.
Data Conclusion
Improved sensitivities at lower FP can be achieved by co‐training datasets via the CAVEAT paradigm to address the problem of data sparsity.
Level of Evidence
3
Technical Efficacy Stage
2
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results on computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, determining tumor location, size, and shape with automated methods. Many researchers have worked on various machine and deep learning approaches to determine the most optimal solution using convolutional methodologies. In this review paper, we discuss the most effective segmentation techniques based on datasets that are widely used and publicly available. We also propose a survey of federated learning methodologies to enhance global segmentation performance while ensuring privacy. A comprehensive literature review of more than 100 papers generalizes the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and a client-based federated model training strategy. Based on this review, future researchers will understand the optimal solution path to solve these issues.
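The client-based federated training strategy mentioned above can be sketched with a minimal FedAvg-style aggregation step: each client trains locally, and only model parameters, weighted by client data size, are averaged centrally, so raw images never leave the client. The clients and parameter vectors below are toy numpy arrays, not an actual segmentation network:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Data-size-weighted average of per-client parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    w = np.stack(client_weights)                      # (n_clients, n_params)
    return (w * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical hospitals with different amounts of local data
local_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_model = fedavg(local_models, client_sizes=[100, 100, 200])
print(global_model)   # -> [3.5 4.5]
```

In a real federated round, the averaged `global_model` would be broadcast back to the clients as the starting point for the next round of local training.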
Medical workers can assess disease progression and create expedient treatment plans with the help of automated and accurate 3D segmentation of medical images. Deep convolutional neural networks (DCNNs) have been widely used in this work, but their accuracy still needs to be increased, mostly due to their insufficient understanding of 3D context. This study proposes a three-dimensional residual network, ResUNet++, for precise segmentation of three-dimensional medical images, consisting of an encoder, a segmentation decoder, and a context residual decoder. The two decoders are connected at each scale via context attention maps and context residuals: the former explicitly learns inter-slice context data, and the latter uses context as attention to increase segmentation accuracy. The model was assessed on the MICCAI 2018 BraTS dataset and the Pancreas-CT dataset. On BraTS, results were compared in terms of enhancing tumor (ET), whole tumor (WT), and tumor core (TC). Moreover, the proposed model was compared with and without boundary loss, and by validation Dice score. The outcomes not only show how effective the suggested 3D residual learning approach is, but also that the suggested ResUNet++ offers better accuracy than six of the top-ranking techniques for segmenting brain tumors.
This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a SAGAN optimized with a Color Harmony Algorithm. Brain cancer, with its high fatality rate worldwide, especially in the case of brain tumors, necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) for extracting radiomic features. These features are fed into the SAGAN, which is optimized with a Color Harmony Algorithm to categorize the brain images into different tumor types, including glioma, meningioma, and pituitary tumors. This approach shows promise in enhancing the precision and efficiency of brain tumor classification, with potential for improved diagnostic outcomes in medical imaging. The accuracy achieved for brain tumor identification with the proposed method is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09%, and 7.34% higher accuracy, and 67.92%, 54.04%, and 59.08% less computation time, compared with existing models: brain tumor diagnosis utilizing a deep learning convolutional neural network with a transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet, multi-model brain tumor categorization under metaheuristic deep neural network feature optimization (BTC-CNN-DEMFOA-MRI); and an efficient method based on a hierarchical deep learning neural network classifier for brain tumour categorization (BTC-Hie DNN-MRI), respectively.
The quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has seen a significant rise in the clinical use of computational tools. The technology underlying the vast majority of these tools is machine learning and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It offers an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. We further offer an overview of the state-of-the-art methods in glioma image segmentation, again with an emphasis on publicly available tools and deep learning algorithms that emerged in the context of the BraTS challenge.
Glioma represents one of the most aggressive cancers that can develop in the brain. Automatic segmentation of the tumor and its sub‐regions is a challenging task owing to their considerable structural variation. Tumors can appear in different ways and with several shapes, which makes tissue identification a crucial task. Reliable and accurate segmentation is an important component of tumor treatment and diagnosis planning. To address these challenges, various Deep Learning (DL) schemes have been proposed to aid doctors. In this paper, a novel Convolutional Neural Network (CNN) scheme for glioma segmentation is proposed. Our technique consists of three phases. First, we use intensity normalization to improve image quality as a preprocessing step. Second, an automatic segmentation technique based on a multi-layer CNN is proposed. Finally, to refine the segmentation results, we employ a post‐processing approach. We test the suggested framework on the public benchmark BraTS datasets from 2018 and 2020, with low‐grade and high‐grade glioma tumors from about 285 and 369 patients, respectively. All four modalities are exploited; each patient has about 155 2D images per modality, all of the same size (240 × 240 pixels). Our technique performs well compared to recent methods, with Dice scores of 0.88 for the Whole Tumor, 0.84 for Tumor Core, and 0.71 for Enhancing Tumor on the first dataset. On the second dataset, the three regions averaged 0.88, 0.9, and 0.75, respectively. The Jaccard indexes for the first dataset are 0.8, 0.73, and 0.56 for the three regions, respectively; the second dataset attains 0.8, 0.82, and 0.6. The results show that the suggested framework segments well compared to other methods.
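The Dice and Jaccard scores reported above are tightly coupled metrics: for binary masks, Jaccard J and Dice D satisfy J = D / (2 - D). A small sketch with toy masks (not the study's segmentations) makes the relation concrete:

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Overlap metrics between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    d = 2.0 * inter / (pred.sum() + truth.sum())   # Dice
    j = inter / union                              # Jaccard (IoU)
    return d, j

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
d, j = dice_and_jaccard(pred, truth)
print(f"Dice={d:.3f}  Jaccard={j:.3f}  D/(2-D)={d / (2 - d):.3f}")
```

The identity explains why Jaccard values in the abstract are consistently lower than the corresponding Dice values: for 0 < D < 1, D/(2 - D) is always strictly smaller than D.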
Accurate detection and pixel‐wise classification of brain tumors in Magnetic Resonance Imaging (MRI) scans are vital for diagnosis, prognosis study, and treatment planning. Manual segmentation of tumors from MRI is highly subjective and tedious. With recent advances in deep learning, automatic brain tumor segmentation is an emerging research direction in the medical imaging domain. We present a study to improve the automatic segmentation process by introducing size variability in the Convolutional Neural Network (CNN). For pixel‐wise classification of tumorous slices, a convolutional neural network‐based encoder‐decoder UNET model is used. A multi‐inception‐UNET model is proposed to improve the scalability of the UNET model. Extensive experiments have been performed using the Brain Tumor Segmentation Challenge (BRATS) datasets to establish the validity of our proposed model. Experimental results show that our proposed method achieved the best results on the BraTS 2015, 2017, and 2019 datasets for complete tumor, core tumor, and enhancing tumor regions, respectively.
Convolutional neural network (CNN) models obtain state-of-the-art performance on image classification, localization, and segmentation tasks. Limitations in computer hardware, most notably memory size in deep learning accelerator cards, prevent relatively large images, such as those from medical and satellite imaging, from being processed as a whole in their original resolution. A fully convolutional topology, such as U-Net, is typically trained on down-sampled images and inferred on images of their original size and resolution by dividing the larger image into smaller (typically overlapping) tiles, making predictions on these tiles, and stitching them back together as the prediction for the whole image. In this study, we show that this tiling technique, combined with the translationally-invariant nature of CNNs, causes small but relevant differences during inference that can be detrimental to the performance of the model. We quantify these variations in both medical (i.e., BraTS) and non-medical (i.e., satellite) images and show that training a 2D U-Net model on the whole image substantially improves the overall model performance. Finally, we compare 2D and 3D semantic segmentation models to show that providing CNN models with a wider context of the image in all three dimensions leads to more accurate and consistent predictions. Our results suggest that tiling the input to CNN models, while perhaps necessary to overcome memory limitations in computer hardware, may lead to undesirable and unpredictable errors in the model's output that can only be adequately mitigated by increasing the input of the model to the largest possible tile size.
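The overlapping tile-and-stitch inference discussed above can be sketched as follows, with simple averaging in the overlap regions. The "model" here is a placeholder identity function, and the tile size and stride are arbitrary choices, not values from the study:

```python
import numpy as np

def predict_tiled(image, model, tile=64, stride=48):
    """Run `model` on overlapping tiles; average predictions where tiles overlap."""
    h, w = image.shape
    out = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            out[y:y + tile, x:x + tile] += model(image[y:y + tile, x:x + tile])
            counts[y:y + tile, x:x + tile] += 1
    return out / counts            # average over however many tiles covered each pixel

img = np.random.default_rng(0).random((160, 160))
stitched = predict_tiled(img, model=lambda t: t)   # identity "model" for the sketch
print(np.allclose(stitched, img))                  # -> True
```

With an identity model the stitched output reproduces the input exactly; the paper's point is that a real CNN's prediction for a pixel depends on which tile window it appears in, so the stitched result can differ from a whole-image prediction in ways simple averaging does not fully remove.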