In this paper, we propose triple intersecting U-Nets (TIU-Nets) for brain glioma segmentation. First, the proposed TIU-Nets is composed of a binary-class segmentation U-Net (BU-Net) and a multi-class segmentation U-Net (MU-Net), in which MU-Net reuses multi-resolution features from BU-Net. Second, we introduce a segmentation soft-mask predicted by BU-Net: a candidate glioma region is generated by removing most of the non-glioma background, and it guides the multi-category segmentation of MU-Net in a weighted manner. Third, an edge branch in MU-Net is leveraged to enhance boundary information of glioma substructures, which helps locate true glioma boundaries and improves segmentation accuracy. Finally, we propose a sigmoid-evolution-based polarized cross-entropy loss (S-CE) to resolve the class-imbalance problem, and apply the S-CE loss to the soft-mask prediction loss in BU-Net, the multi-class segmentation loss in MU-Net, and the edge prediction loss in the edge branch. Experimental results demonstrate that the proposed 2D/3D TIU-Nets achieves higher segmentation accuracy than corresponding 2D/3D state-of-the-art segmentation methods, including FCN, U-Net, SegNet, CRDN, IVD-Net, FCDenseNet, DeepMedic, and DMFNet, when evaluated on the publicly available brain tumor segmentation challenge 2015 (BRATS2015) dataset. To show the universality of the proposed method, we also compare segmentation performance on the BrainWeb dataset.
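The abstract does not specify the exact form of the weighted guidance, so the following is only an illustrative sketch: a BU-Net-style soft mask (a per-pixel glioma probability map) reweighting downstream features so that non-glioma background is suppressed. All array names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical multi-resolution feature maps (C, H, W) passed to MU-Net.
features = rng.random((4, 6, 6))
# Hypothetical BU-Net soft mask: per-pixel glioma probability in [0, 1).
soft_mask = rng.random((6, 6))

# Weighted guidance: background pixels (low mask values) are attenuated.
guided = features * soft_mask[None, :, :]
```

Because the mask values lie below 1, the guided features are never amplified, only selectively suppressed.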
The accurate prediction of isocitrate dehydrogenase (IDH) mutation status and glioma segmentation are important tasks for computer-aided diagnosis using preoperative multimodal magnetic resonance imaging (MRI). The two tasks remain challenging due to significant inter-tumor and intra-tumor heterogeneity. Most existing methods address them as single tasks without considering the correlation between the two. In addition, the acquisition of IDH genetic labels is expensive and time-consuming, resulting in a limited amount of IDH mutation data for modeling. To comprehensively address these problems, we propose a fully automated multimodal MRI-based multi-task learning framework for simultaneous glioma segmentation and IDH genotyping. Specifically, the task correlation and heterogeneity are tackled with a hybrid CNN-Transformer encoder, consisting of a convolutional neural network and a transformer, which extracts shared spatial and global information for a decoder performing glioma segmentation and a multi-scale classifier performing IDH genotyping. Then, a multi-task learning loss is designed to balance the two tasks by combining the segmentation and classification loss functions with uncertainty weights. Finally, an uncertainty-aware pseudo-label selection is proposed to generate IDH pseudo-labels from larger unlabeled data, improving the accuracy of IDH genotyping through semi-supervised learning. We evaluate our method on a multi-institutional public dataset. Experimental results show that our proposed multi-task network achieves promising performance and outperforms its single-task learning counterparts as well as other existing state-of-the-art methods. With the introduction of unlabeled data, the semi-supervised multi-task learning framework further improves the performance of glioma segmentation and IDH genotyping. The source code of our framework is publicly available at https://github.com/miacsu/MTTU-Net.git .
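The abstract does not give the exact multi-task loss, but a widely used uncertainty-weighted formulation (homoscedastic weighting in the style of Kendall et al.) combines the per-task losses with learnable log-variance terms. The sketch below is an assumption about its general shape, not the paper's implementation; `log_var_seg` and `log_var_cls` would be trained jointly with the network.

```python
import numpy as np

def multitask_loss(seg_loss, cls_loss, log_var_seg, log_var_cls):
    """Uncertainty-weighted combination of two task losses.

    Each task loss is scaled by exp(-log_var), and the log-variance itself
    is added as a regularizer, so the task weights are learned rather than
    hand-tuned.
    """
    return (np.exp(-log_var_seg) * seg_loss + log_var_seg
            + np.exp(-log_var_cls) * cls_loss + log_var_cls)

# With zero log-variances the combined loss reduces to the plain sum.
total = multitask_loss(seg_loss=0.8, cls_loss=0.4,
                       log_var_seg=0.0, log_var_cls=0.0)
```

Raising a task's log-variance down-weights that task's loss while the added regularizer prevents the weight from collapsing to zero.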
Purpose
To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi‐parametric MRI‐based glioma segmentation as a method to enhance deep learning explainability.
Methods
By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results.
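As a rough illustration of how solving such an ODE exposes intermediate dynamics, the sketch below integrates dh/dt = f(h, t) with a fixed-step Euler scheme and records every intermediate state. The integrator and the toy decay dynamics are illustrative assumptions; the paper's model parameterizes f with a trained neural network and may use a different solver.

```python
import numpy as np

def neural_ode_trajectory(h0, f, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler solve of dh/dt = f(h, t).

    Returns all intermediate states, so the evolution of the features
    (the 'image dynamics') can be inspected or visualized.
    """
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    trajectory = [h.copy()]
    for _ in range(steps):
        h = h + dt * f(h, t)  # Euler update
        t += dt
        trajectory.append(h.copy())
    return trajectory

# Toy dynamics: exponential decay dh/dt = -h, so h(1) ≈ h0 * e^(-1).
traj = neural_ode_trajectory(np.ones(3), lambda h, t: -h)
```

Swapping the lambda for a trained network yields a trajectory of feature states whose per-step changes can be accumulated into a contribution curve such as the ACC described above.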
The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4‐modality multi‐parametric MRI protocol: T1, contrast‐enhanced T1 (T1‐Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity.
Results
All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1‐Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U‐Net results using all four MRI modalities, the Dice coefficients of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using only the key modalities showed minimal differences without statistical significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns.
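The Dice coefficient used in these comparisons is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal sketch for binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, |A| = 3, |B| = 3 -> Dice = 4/6 ≈ 0.667
score = dice_coefficient(a, b)
```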
Conclusion
The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image‐related deep‐learning applications.
Background
Uncertainty quantification in deep learning is an important research topic. For medical image segmentation, the uncertainty measurements are usually reported as the likelihood that each pixel belongs to the predicted segmentation region. In potential clinical applications, the uncertainty result reflects the algorithm's robustness and supports the confidence and trust of the segmentation result when the ground‐truth result is absent. For commonly studied deep learning models, novel methods for quantifying segmentation uncertainty are in demand.
Purpose
To develop a U‐Net segmentation uncertainty quantification method based on spherical image projection of multi‐parametric MRI (MP‐MRI) in glioma segmentation.
Methods
The projection of planar MRI data onto a spherical surface is equivalent to a nonlinear image transformation that retains global anatomical information. By incorporating this image transformation process in our proposed spherical projection‐based U‐Net (SPU‐Net) segmentation model design, multiple independent segmentation predictions can be obtained from a single MRI. The final segmentation is the average of all available results, and the variation can be visualized as a pixel‐wise uncertainty map. An uncertainty score was introduced to evaluate and compare the performance of uncertainty measurements.
The proposed SPU‐Net model was implemented on the basis of 369 glioma patients with MP‐MRI scans (T1, T1‐Ce, T2, and FLAIR). Three SPU‐Net models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The SPU‐Net model was compared with (1) the classic U‐Net model with test‐time augmentation (TTA) and (2) linear scaling‐based U‐Net (LSU‐Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score).
Results
The developed SPU‐Net model successfully achieved low uncertainty for correct segmentation predictions (e.g., tumor interior or healthy tissue interior) and high uncertainty for incorrect results (e.g., tumor boundaries). This model could allow the identification of missed tumor targets or segmentation errors in U‐Net. Quantitatively, the SPU‐Net model achieved the highest uncertainty scores for three segmentation targets (ET/TC/WT): 0.826/0.848/0.936, compared to 0.784/0.643/0.872 using the U‐Net with TTA and 0.743/0.702/0.876 with the LSU‐Net (scaling factor = 2). The SPU‐Net also achieved statistically significantly higher Dice coefficients, underscoring the improved segmentation accuracy.
Conclusion
The SPU‐Net model offers a powerful tool to quantify glioma segmentation uncertainty while improving segmentation accuracy. The proposed method can be generalized to other medical image‐related deep‐learning applications for uncertainty evaluation.
Accurate glioma segmentation based on magnetic resonance imaging (MRI) is crucial for assisting with the diagnosis of gliomas. However, the manual delineation of all diverse gliomas, including the whole tumors (WTs), tumor cores (TCs) and enhancing tumors (ETs) of high-grade gliomas (HGG) and low-grade gliomas (LGG), is laborious and often error-prone. The different phenotypes, sizes and locations of gliomas within and between patients make automatic segmentation a challenging task. To alleviate these challenges, in this paper, we propose a 3D fully convolutional network (FCN) with a dual-attention (i.e., global and local attention) mechanism to segment diverse gliomas simultaneously. The global attention mechanism (GAM) focuses on segmenting gliomas precisely through segment-discrimination learning, with a weight-allocated segmentation loss function to alleviate results biased toward large tumors and an adversarial loss function to refine the segmentation of areas with low contrast relative to their neighbors. The local attention mechanism (LAM) constantly revises effective features under the guidance of a united loss function at different levels. Furthermore, we present a hierarchical feature module (HFM) with a weight-sharing block to obtain more information about boundaries at different scales, aiming to enhance the learning of ambiguous tumor outlines. According to experimental results, our network outperforms ten state-of-the-art methods. Ablation studies show that the proposed model components are effective for diverse glioma segmentation.
The segmentation of gliomas by computer vision is one of the hot topics in medical image analysis, and it helps doctors make better treatment plans for glioma. At present, convolutional neural networks (CNNs) with multiple kernels are the mainstream method for identifying glioma regions. However, segmentation quality is strongly affected when the intensity dissimilarity between adjacent glioma regions is small. To address this challenge, we propose an attention-based multimodal glioma segmentation network with multiple attention layers that focuses on glioma regions with small intensity dissimilarity. Firstly, to reduce background interference, we propose data enhancement in glioma-centered regions. In addition, random multi-dimensional data views are generated in the glioma regions to reduce overfitting. Secondly, we embed the proposed attention layers into a 3D U-Net; these layers focus on the intensity dissimilarity between adjacent glioma regions and adaptively mine glioma-related features, addressing the insensitivity of existing algorithms to small intensity dissimilarity between adjacent glioma regions. In particular, each attention layer can adaptively highlight valuable glioma features and suppress unrelated ones. Finally, experimental results on the multimodal brain tumor segmentation challenge (BraTS) 2020 dataset validate the effectiveness of the proposed method: the Dice Similarity Coefficients (DSC) for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions reach 0.7803, 0.8831, and 0.8172, respectively. We also test on the public BraTS2019 dataset, reaching 0.7675, 0.8925, and 0.8110 for the WT, TC, and ET regions, respectively.
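The abstract does not detail the attention layers; one common realization of "highlight valuable features, suppress unrelated ones" is squeeze-and-excitation-style channel attention, sketched below. The function name, weight shapes, and reduction ratio are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    bottleneck weights (reduction ratio r). Each channel is rescaled by a
    learned gate in (0, 1): valuable channels kept, unrelated ones damped.
    """
    squeeze = feat.mean(axis=(1, 2))                       # global avg pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates
    return feat * excite[:, None, None]                    # rescale channels

rng = np.random.default_rng(1)
feat = rng.random((8, 5, 5))                  # toy (C, H, W) features
w1 = rng.standard_normal((4, 8))              # reduction ratio r = 2
w2 = rng.standard_normal((8, 4))
out = channel_attention(feat, w1, w2)
```

Since every gate lies strictly between 0 and 1, the layer can only attenuate channels relative to the input, never amplify them.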
• A 2D–3D cascade network with multi-scale information is proposed for glioma segmentation.
• A multi-task learning-based 2D network is applied to exploit intra-slice features.
• A 3D DenseUNet is integrated with the 2D network to extract inter-slice features.
• A multi-scale information module is used in 2D and 3D networks to capture glioma details.
• Competitive performance is achieved on publicly available and clinical datasets.
Glioma segmentation is an important procedure for treatment planning and follow-up evaluation of patients with glioma. UNet-based networks are widely used in medical image segmentation tasks and have achieved state-of-the-art performance. However, context information along the third dimension is ignored in 2D convolutions, whereas the difference between z-axis and in-plane resolutions is large in 3D convolutions. Moreover, the original UNet structure cannot capture fine details because of the reduced resolution of feature maps near the bottleneck layers.
To address these issues, a novel 2D–3D cascade network with a multiscale information module is proposed for the multiclass segmentation of gliomas in multisequence MRI images. First, a 2D network is applied to fully exploit potential intra-slice features. A variational autoencoder module is incorporated into the 2D DenseUNet to regularize a shared encoder, extract useful information, and represent glioma heterogeneity. Second, we integrate a 3D DenseUNet with the 2D network in cascade mode to extract useful inter-slice features and alleviate the influence of the large difference between z-axis and in-plane resolutions. Moreover, a multiscale information module is used in the 2D and 3D networks to further capture the fine details of gliomas. Finally, the whole 2D–3D cascade network is trained in an end-to-end manner, where the intra-slice and inter-slice features are fused and optimized jointly to take full advantage of 3D image information.
Our method is evaluated on publicly available and clinical datasets and achieves competitive performance on both.
These results indicate that the proposed method may be a useful tool for glioma segmentation.
The low accuracy of MR image segmentation is often caused by blurred glioma region boundaries, intensity inhomogeneity, and class-imbalance problems, which greatly influence quantitative glioma analysis. To resolve these problems, we propose a Deep Multiple Guidances based Glioma Segmentation Network (DMGSN), which is designed according to an observation of the hierarchical structure of the glioma region. In DMGSN, the Glioma Sub-regions Prediction (GSP) block fuses guidance features from the Whole Glioma Prediction (WGP) block and the Glioma Boundary Prediction (GBP) block through an importance-ranking fusion module, which reduces redundancy among guidance features. Specifically, the WGP block generates a whole-glioma guidance map, providing a key clue for excluding non-glioma regions during glioma sub-region segmentation. Meanwhile, we introduce the GBP block to estimate glioma sub-region contours, whose multi-scale feature maps are added into the GSP decoding path to strengthen segmentation features around glioma boundaries. Besides, a hybrid enhanced-gradient cross-entropy loss regularizes DMGSN training, efficiently alleviating the class-imbalance problem. Extensive experimental results show that the proposed DMGSN outperforms many state-of-the-art glioma segmentation methods in terms of complete dice, core dice and enhance dice.
• Glioma segmentation network is designed based on glioma hierarchical structure.
• Whole glioma prediction is proposed to reduce wrongly segmented points.
• Glioma boundary prediction is introduced to provide semantic glioma contour.
• Importance ranking fusion is introduced to reduce feature redundancy.
• Our hybrid enhanced-gradient cross-entropy loss can solve the class-imbalance problem.
• A new accurate DPGM-DDM for brain tumor segmentation.
• A novel DPGM based on a geometric P-norm.
• An original geometric loss function based RANSAC model fitting.
• DPGM-DDM optimization through an Expectation–Maximization Algorithm.
Gliomas are the most common type of brain tumor. Their shapes are irregular and their boundaries are ambiguous, making them hard to detect. To improve segmentation performance, we propose a P-norm-based glioma segmentation method encapsulated within a Deep Convolutional Autoencoder (DCA). In this paper, we first develop a new robust Deep P-norm Generative Model (DPGM) that extracts and classifies features with its encoder section and segments them with its decoder section. We then apply a Deep Discriminative Model (DDM), which uses conditional random fields to refine the segmentation result. Effectiveness is gained by gradient-descent training on the set of pre-P-norm filtering and thresholding convolutional operations using a novel geometric P-norm-based loss function. The latter is optimized by a RANSAC layer, which is geared toward maximizing the strength of the correct model relative to the most convincing false model. We evaluate the accuracy and robustness of our method on MRI sequences (T1, T2, T2c & FLAIR) from the Multimodal Brain Tumor Image Segmentation Challenge (BRATS 2013 Leaderboard, BRATS 2013 Challenge, BRATS 2015 and BRATS 2017). The experiments demonstrate the high-quality segmentation results of our DPGM-DDM architecture, with average Dice scores as high as 97.2%, 97.6%, and 98.4% for the complete, core, and enhancing tumor regions, respectively. Overall, our proposed deep architecture is effective in segmenting gliomas in multimodal or FLAIR images, and has potential for routine glioma examination in daily clinical practice.