Detecting brain tumors plays an important role in patients' lives, as timely detection can help specialists save a patient who might otherwise succumb to a terminal illness. Magnetic Resonance Imaging (MRI) has been shown to be the most accurate method for detecting tumors, as it can clearly project their presence onto the output image. However, it can also lead to less accurate evaluations when a human specialist assesses the images, mostly due to fatigue, limited expertise, or insufficient information in the image. The latter occurs when the tumor is not large enough to be visible in the images, or when it overlaps with brain regions and is mistaken for healthy tissue. Motivated to alleviate such inaccurate diagnoses, this study proposes a segmentation approach to help specialists detect brain tumors. The approach segments and classifies brain tumors with 98.81% pixel-level accuracy and 98.93% classification accuracy on the MICCAI BraTS'20 benchmark dataset, the most accurate performance compared to previous studies.
A brain tumor is one of the foremost causes of rising mortality among children and adults. A brain tumor is a mass of tissue that propagates beyond the control of the normal forces that regulate growth inside the brain; it appears when one type of cell departs from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or skull, which can be cancerous or non-cancerous, has caused deaths among adults in developed countries and children in developing countries such as Ethiopia. Previous studies have shown that the region growing algorithm initializes the seed point either manually or semi-manually, which in turn affects the segmentation result. In this paper, we propose an enhanced region-growing algorithm with automatic seed point initialization. The proposed approach's performance was compared with state-of-the-art deep learning algorithms on the common dataset, BRATS2015. In the proposed approach, we first apply a thresholding technique to strip the skull from each input brain image. After skull stripping, the brain image is divided into 8 blocks. For each block, we compute the mean intensity, and the five blocks with the highest mean intensities are selected out of the eight. Each of the five selected intensities is then used separately as a seed point for the region growing algorithm, yielding five different regions of interest (ROIs) per skull-stripped input image. The five ROIs are evaluated against the ground truth (GT) using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc), and the best-scoring region is selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning and region-based segmentation algorithms in terms of DSS.
Our proposed approach was validated in three experimental setups. In the first, 15 randomly selected brain images were used for testing, achieving a DSS of 0.89. In the second and third setups, the proposed approach scored DSS values of 0.90 and 0.80 on 12 randomly selected and 800 brain images, respectively. The average DSS over the three setups was 0.86.
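The block-based seed-initialization procedure described above can be sketched as follows. The 2×4 block layout, the threshold value, and the use of block centres as seed coordinates are assumptions for illustration, since the abstract does not fix these details; this is a minimal 2D sketch, not the authors' implementation.

```python
import numpy as np

def select_seed_points(brain, threshold=0.1, grid=(2, 4), n_seeds=5):
    """Automatic seed-point initialization for region growing:
    1) threshold to suppress non-brain pixels (skull-stripping proxy),
    2) split the image into 8 blocks, 3) rank blocks by mean intensity,
    4) return the centres of the 5 brightest blocks as candidate seeds.
    """
    stripped = np.where(brain > threshold, brain, 0.0)
    h, w = stripped.shape
    bh, bw = h // grid[0], w // grid[1]
    blocks = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = stripped[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            centre = (i * bh + bh // 2, j * bw + bw // 2)
            blocks.append((block.mean(), centre))
    blocks.sort(key=lambda t: t[0], reverse=True)
    return [centre for _, centre in blocks[:n_seeds]]

# Toy image: a bright square in the top-left block.
img = np.zeros((64, 128))
img[4:28, 4:28] = 1.0
seeds = select_seed_points(img)
print(seeds[0])  # centre of the brightest block
```

Each returned seed would then start an independent region-growing pass, and the best of the five resulting ROIs is kept by DSS against the ground truth.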
Magnetic resonance imaging is the most widely utilized imaging modality that permits radiologists to look inside the brain using radio waves and magnets for tumor identification. However, identifying tumorous and non-tumorous regions is tedious and complex due to the heterogeneity of the tumorous region. Therefore, reliable and automatic segmentation and prediction are necessary for brain tumor segmentation. This paper proposes a reliable and efficient neural network variant, namely an attention-based convolutional neural network, for brain tumor segmentation. Specifically, the encoder part of the UNET is a pre-trained VGG19 network, followed by the adjacent decoder parts with an attention gate for segmentation, noise induction, and a denoising mechanism to avoid overfitting. The dataset used for segmentation is BRATS'20, which comprises four different MRI modalities and one target mask file. The algorithm achieves dice similarity coefficients of 0.83, 0.86, and 0.90 for the enhancing, core, and whole tumors, respectively.
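The dice similarity coefficient reported here, and throughout the studies below, has a standard definition that can be computed directly from binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). eps guards against two empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) ≈ 0.667
```

In multi-region settings such as BraTS, the coefficient is computed separately per region (enhancing, core, whole tumor), which is why three values are reported above.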
•First study to highlight the class overlap problem in the medical domain.
•Proposed a novel class-wise feature enhancement strategy.
•Tuned the feature values for only two survival classes.
•Prominence of multi-class separability is attained successfully.
Conventional radiomics-based models precisely engineer image-based features from Magnetic Resonance Imaging (MRI) to extract predictive patterns for High-Grade Glioma (HGG) survival prediction. However, an in-depth exploration and assessment of these extracted features has not yet been conducted. To the best of our knowledge, this is the first study that examines the range distributions of radiomics features and gains insight into the problem of class overlap among these features. A novel class-wise feature enhancement strategy addresses the ambiguous data regions in the extracted features without any data loss. This strategy explicitly tunes the data values of only two classes and retains the data of the third class to achieve excellent data separability. The enhancement depends on the difference in the feature values of the two classes and incorporates the scalability of this difference using different scaling factors. Furthermore, Box-Cox and logarithmic transformations are employed to overcome the non-normality of the enhanced features. Ablation experiments are conducted to substantiate the classification metrics for the pre- and post-enhancement cases. The BraTS 2020 benchmark is employed, demonstrating that the proposed approach performs competitively in classifying HGG patients into three survival groups, namely short, mid, and long survivors. It achieves an overall testing classification accuracy, precision, recall, and F1-score of 0.994, 0.993, 0.996, and 0.994, respectively. Therefore, instead of directly utilizing the raw extracted features, this strategy eliminates the overlapped class regions without any information loss and has proven to be a significant step for HGG survival classification applications.
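The Box-Cox step used above to overcome non-normality can be illustrated with a self-contained sketch. The grid search over lambda and the synthetic log-normal "feature" are assumptions for illustration only (in practice `scipy.stats.boxcox` performs the same maximum-likelihood fit); the class-wise scaling strategy itself is not specified in enough detail in the abstract to reproduce here.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transform for strictly positive data."""
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood used to choose lambda."""
    y = boxcox(x, lam)
    n = x.size
    return -0.5 * n * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

def best_lambda(x, grid=np.linspace(-2, 2, 401)):
    """Pick lambda maximizing the profile log-likelihood over a grid."""
    return max(grid, key=lambda lam: boxcox_loglik(x, lam))

# Heavily right-skewed stand-in for a radiomics feature.
rng = np.random.default_rng(0)
feature = rng.lognormal(mean=0.0, sigma=1.0, size=2000)
lam = best_lambda(feature)
normalized = boxcox(feature, lam)
```

For log-normal data the optimal lambda is near zero (the pure logarithmic transform), which matches the abstract's pairing of Box-Cox and logarithmic transformations.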
•We proposed a Nakagami-Fuzzy imaging framework for medical image segmentation
•This paper would be the first application of the Nakagami-Fuzzy protocol to MRI images
•We enhanced images by Nakagami and segmented lesions by modified fuzzy 2-means
•We achieved 92.61% dice score for the main clinical experiment we conducted
•Dice scores are computed as 91.88%/89.25% for BraTS 2012/2020 dataset experiments
The Nakagami distribution and related imaging methods have been highly effective in diagnostic ultrasonography for the visualization and characterization of tissues for years. Abnormalities in tissues are distinguished from surrounding cells by applying the distribution governed by the Nakagami m-parameter. This discriminative power in ultrasonography enables intelligent segmentation of lesions with other diagnostic tools, and the imaging technique is very promising in other areas of medicine, such as magnetic resonance imaging (MRI) for brain lesion identification, as presented in this paper. We therefore propose a novel Nakagami-Fuzzy imaging framework for intelligent, fully automated segmentation of suspicious regions from axial FLAIR MRI images exhibiting brain tumor characteristics, matching ground truth images at different precision levels. Images from the MRI dataset are processed by applying the Nakagami distribution from pre-Rayleigh to post-Rayleigh by adjusting the m-parameter. Amorphous and non-homogeneous suspicious regions revealed by Nakagami imaging are segmented using a customized Fuzzy 2-means and compared against two types of binary ground truths. The proposed framework is an example of a fuzzy-based expert system, providing an average dice score of 92.61% in the main clinical experiment we conducted using images and two types of ground truths provided by University Hospital, Hradec Kralove. We also tested the framework on the BraTS 2012 and BraTS 2020 datasets, achieving average dice scores of 91.88% and 89.25%, respectively, which are competitive with related research.
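The Nakagami m-parameter that separates the pre-Rayleigh, Rayleigh, and post-Rayleigh regimes can be estimated from intensity samples with the standard moment-based estimator. This sketch shows the estimator only, not the per-pixel windowed Nakagami imaging used in the paper:

```python
import numpy as np

def nakagami_m(x):
    """Moment-based estimate of the Nakagami shape parameter m:
    m = (E[X^2])^2 / Var(X^2).
    m < 1: pre-Rayleigh, m = 1: Rayleigh, m > 1: post-Rayleigh."""
    x2 = np.asarray(x, dtype=float) ** 2
    return x2.mean() ** 2 / x2.var()

# Rayleigh-distributed intensities correspond to m = 1.
rng = np.random.default_rng(1)
rayleigh = rng.rayleigh(scale=1.0, size=200000)
print(f"estimated m = {nakagami_m(rayleigh):.2f}")  # ≈ 1.00
```

In Nakagami imaging this estimate is computed over a sliding window around each pixel, so that tissue abnormalities show up as local deviations of m from the surrounding background.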
Automatic medical image analysis is one of the key tasks used by the medical community for disease diagnosis and treatment planning. Statistical methods are the major algorithms used and consist of a few steps, including preprocessing, feature extraction, segmentation, and classification. The performance of such statistical methods is an important factor in their successful adoption. The results of these algorithms depend on the quality of the images fed into the processing pipeline: the better the images, the better the results. Preprocessing is the pipeline phase that attempts to improve image quality before the chosen statistical method is applied. In this work, popular preprocessing techniques are investigated from different perspectives and grouped into three main categories: noise removal, contrast enhancement, and edge detection. All possible combinations of these techniques are formed and applied to different image sets, which are then passed to a predefined pipeline of feature extraction, segmentation, and classification. Classification results are measured using accuracy, sensitivity, and specificity, while segmentation results are measured using the dice similarity score. Statistics of the five highest-scoring combinations are reported for each data set. Experimental results show that applying proper preprocessing techniques can improve classification and segmentation results considerably. However, the best combination of techniques depends on the characteristics and type of the data set used.
Enhanced Features for Brain Tumor Classification. The current research presents a feature enhancement framework for brain tumor segmentation and classification. The impact of noise removal, contrast enhancement, and edge detection techniques on medical image analysis and classification is highlighted.
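Enumerating every combination of the three preprocessing categories, as the study does, is a direct application of a Cartesian product. The technique names below are hypothetical placeholders, since the abstract does not list the specific methods investigated:

```python
from itertools import product

# Hypothetical technique names per category (placeholders, not from the paper).
noise_removal = ["median", "gaussian", "bilateral"]
contrast_enhancement = ["hist_eq", "clahe"]
edge_detection = ["sobel", "canny"]

# One pipeline per (noise, contrast, edge) triple.
combinations = list(product(noise_removal, contrast_enhancement, edge_detection))
print(len(combinations))  # 3 * 2 * 2 = 12 pipelines to evaluate
for nr, ce, ed in combinations[:2]:
    print(f"pipeline: {nr} -> {ce} -> {ed}")
```

Each generated pipeline would be applied to the image set before the fixed feature-extraction/segmentation/classification stages, and the five best-scoring triples are reported per data set.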
•The paper presents a deep learning based medical image segmentation and classification methodology.
•It provides an improved version of the base method, i.e., UNET.
•The multi-disease detection task is resolved with comparable performance.
•The proposed work is a step towards general AI.
Medical imaging and deep learning methods have significantly improved the early detection of brain diseases such as tumors and Ischemic stroke with higher accuracy. Machine learning methods, especially neural-network based algorithms, have shown huge success in medical image analysis for a variety of tasks including the detection, segmentation, and classification of brain tumors and Ischemic stroke. Usually, these models address one problem at a time, which is considered Artificial Weak Intelligence (AWI). There is a need to develop methods that push the research towards strong AI, or Artificial General Intelligence (AGI), where a single model can solve multiple tasks.
In this work, we propose a convolutional neural network based integrated model to detect and classify two brain diseases simultaneously, i.e., tumors and Ischemic stroke. For this, a new dataset is created by merging two open-source datasets: BRATS 2015 and ISLES 2015. The designed network is an enhancement of the encoder-decoder architecture based UNET, where feature maps from one encoder block are fused with the output of the following encoder block, in addition to the UNET skip connections, to keep low-level fine-grained information intact and distinguish overlapping features during encoding. The dataset is partitioned into training and validation sets in an 80:20 ratio, with proportionate image inclusion in each training batch to address the class imbalance issue. The proposed model achieves an average accuracy of 99.56%, specificity of 99.99%, precision of 99.59%, and F1-score of 99.57%. The obtained performance scores show the usability of the proposed feature fusion mechanism for multi-disease detection.
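The proportionate image inclusion per training batch described above can be sketched as a stratified batch sampler. The class counts below are illustrative, not the actual BRATS/ISLES proportions:

```python
import numpy as np

def stratified_batches(labels, batch_size, rng=None):
    """Yield index batches whose class proportions mirror the full
    label distribution (proportionate image inclusion per batch)."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    # Per-class quota per batch, at least one sample of each class.
    per_batch = np.maximum(1, counts / labels.size * batch_size).astype(int)
    pools = [rng.permutation(np.flatnonzero(labels == c)) for c in classes]
    n_batches = min(len(p) // k for p, k in zip(pools, per_batch))
    for b in range(n_batches):
        batch = np.concatenate([p[b * k:(b + 1) * k]
                                for p, k in zip(pools, per_batch)])
        yield rng.permutation(batch)

# Illustrative imbalance: 80 tumour images vs 20 stroke images.
labels = np.array([0] * 80 + [1] * 20)
batches = list(stratified_batches(labels, batch_size=10))
```

Each batch of 10 then contains 8 majority-class and 2 minority-class images, so no batch is ever starved of the rarer disease during training.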
•Preprocessing is proposed to compensate for inherent noise and low contrast.
•A minimal set of highly representative regional features is employed.
•The problem of class imbalance at a regional level is addressed for the first time.
•Random Forest performance is much better than Support Vector Machine.
This paper presents a fully automated brain tissue classification method for normal and abnormal tissues and their associated regions from the Fluid Attenuated Inversion Recovery modality of Magnetic Resonance (MR) images. The proposed regional classification method is able to simultaneously detect and segment tumours to pixel-level accuracy. The region-based features considered in this study are statistical, texton histogram, and fractal features. This is the first study to address the class imbalance problem at the regional level, using the Random Majority Down-sampling-Synthetic Minority Over-sampling Technique (RMD-SMOTE). A comparison of benchmark supervised techniques including Support Vector Machine (SVM), AdaBoost, and Random Forest (RF) classifiers is presented, where the RF-based regional classifier is selected for the proposed approach due to its better generalization performance. The robustness of the proposed method is evaluated on the standard publicly available BRATS 2012 dataset using five standard benchmark measures. We demonstrate that the proposed method consistently outperforms three benchmark tumour classification methods in terms of Dice score and obtains significantly better results than its SVM and AdaBoost counterparts in terms of precision and specificity at the 5% significance level. The promising results of the proposed method support its application for early detection and diagnosis of brain tumours in clinical settings.
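The RMD-SMOTE balancing step can be sketched in a few lines of numpy. The meet-in-the-middle target size and k = 3 neighbours are assumptions for illustration, since the abstract does not specify these parameters:

```python
import numpy as np

def rmd_smote(X, y, rng=None, k=3):
    """Sketch of Random Majority Down-sampling + SMOTE (RMD-SMOTE):
    the majority class is randomly down-sampled and the minority class is
    over-sampled with synthetic points interpolated between nearest minority
    neighbours, until both classes meet at the average class size.
    Assumes binary labels 0/1."""
    rng = rng or np.random.default_rng(0)
    maj, mino = (0, 1) if (y == 0).sum() >= (y == 1).sum() else (1, 0)
    maj_idx, min_idx = np.flatnonzero(y == maj), np.flatnonzero(y == mino)
    target = (len(maj_idx) + len(min_idx)) // 2

    # Random majority down-sampling.
    keep = rng.choice(maj_idx, size=target, replace=False)

    # SMOTE-style minority over-sampling among original minority points.
    X_min = X[min_idx]
    synth = []
    while len(X_min) + len(synth) < target:
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # k nearest minority neighbours
        j = rng.choice(nn)
        synth.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))

    X_new = np.vstack([X[keep], X_min] + ([np.array(synth)] if synth else []))
    y_new = np.array([maj] * len(keep) + [mino] * (len(X_min) + len(synth)))
    return X_new, y_new

# Illustrative 90-vs-10 regional imbalance in a 4-feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 4)), rng.normal(3, 1, (10, 4))])
y = np.array([0] * 90 + [1] * 10)
Xb, yb = rmd_smote(X, y)
```

After balancing, the RF, SVM, and AdaBoost classifiers compared in the paper would all be trained on the rebalanced regional feature set rather than the skewed original.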
Glioma, a malignant tumor, arises in the glial tissue of the human brain. Segmenting such tumor cells in the brain region is still challenging and requires experts. Because of the overlap between the intensity distributions of tissue with edema, non-edema, and enhancing features, the segmentation process is a significant challenge for neurosurgeons and radiologists. Given the current state of the art in medicine and surgery, artificial intelligence is gaining attention for effective detection and segmentation in medical diagnosis. In MICCAI 2020, the authors prepared an algorithm for the semantic segmentation of brain tumors from multimodal MRI images to support further treatment steps such as observing treatment, monitoring recovery, and evaluating treatment effects on patients. This paper's objective is to develop an efficient deep learning model that performs semantic segmentation using a multi-modal modified Link-Net model with a Squeeze-and-Excitation ResNet152 backbone. A model developed by Manipal Hospital in Bangalore is compared with traditional state-of-the-art models, and its accuracy is verified by neurosurgeons there. The model takes in a multi-modal MRI dataset that includes T1-weighted, Flair, and T2-weighted MRI images of the human brain and performs comparably well, showing that it is robust for tumor segmentation. The accuracy of this model is 99.2%.
This work proposes a novel framework for brain tumor segmentation prediction in longitudinal multimodal MRI scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with a tumor cell density feature to obtain tumor segmentation predictions at follow-up timepoints using data from the baseline pre-operative timepoint. The cell density feature is obtained by solving the 3D reaction-diffusion equation for biophysical tumor growth modelling using the Lattice-Boltzmann method. The second method uses JLF to combine segmentation labels obtained from (i) the stochastic texture feature- and Random Forest (RF)-based tumor segmentation method, and (ii) another state-of-the-art tumor growth and segmentation method, boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). We quantitatively evaluate both proposed methods using the Dice Similarity Coefficient (DSC) on longitudinal scans of 9 patients from the public multi-institutional BraTS 2015 dataset. The evaluation results for the feature-based fusion method show improved tumor segmentation prediction for the whole tumor (DSC_WT = 0.314, p = 0.1502), tumor core (DSC_TC = 0.332, p = 0.0002), and enhancing tumor (DSC_ET = 0.448, p = 0.0002) regions. Feature-based fusion shows some improvement in predicting longitudinal brain tumor tracking, whereas JLF offers statistically significant improvement in the actual segmentation of WT and ET (DSC_WT = 0.85 ± 0.055, DSC_ET = 0.837 ± 0.074) and also improves the results of GB. The novelty of this work is two-fold: (a) exploiting tumor cell density as a feature for predicting brain tumor segmentation, using a stochastic multi-resolution RF-based method, and (b) improving the performance of another successful tumor segmentation method, GB, by fusing it with the RF-based segmentation labels.
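The reaction-diffusion tumor growth model behind the cell density feature can be illustrated with a simple explicit finite-difference step of the Fisher-Kolmogorov equation on a 2D grid. This is a stand-in sketch under illustrative parameter choices, not the 3D Lattice-Boltzmann solver used in the paper:

```python
import numpy as np

def fisher_kolmogorov_step(u, D=0.1, rho=0.05, dt=0.1, dx=1.0):
    """One explicit finite-difference step of the reaction-diffusion
    tumour growth model du/dt = D * laplacian(u) + rho * u * (1 - u),
    where u is the normalized tumour cell density on a periodic 2D grid."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
    return u + dt * (D * lap + rho * u * (1 - u))

# Seed a small tumour and let it diffuse and proliferate for a while.
u = np.zeros((32, 32))
u[16, 16] = 1.0
for _ in range(100):
    u = fisher_kolmogorov_step(u)
```

In the paper's pipeline, the density field produced by such a growth model at the follow-up timepoint becomes one more feature alongside the multi-resolution texture features fed to the Random Forest.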