Brain tumours pose a significant health challenge, demanding the development of reliable, automated detection methods. Swift and accurate identification of these tumours is crucial for effective treatment and patient well-being. These growths stem from uncontrolled cell multiplication, depleting vital nutrients from healthy brain tissue and leading to organ dysfunction. At present, the conventional approach is manual examination of brain MRI scans by medical professionals, but the varied shapes and sizes of tumours make such evaluations time-consuming and occasionally imprecise. Automation holds immense potential, promising to improve efficiency and free medical practitioners for direct patient care. Traditional machine learning approaches have historically depended on labour-intensive feature engineering. In our research, we introduce an innovative approach: an ensemble of the U-Net model, a Convolutional Neural Network (CNN), and a Self-Organizing Feature Map (SOFM) for precise brain tumour segmentation on the BraTS 2020 dataset. Our evaluation not only measures segmentation accuracy but also uses the survival data in the dataset to predict patient survival rates. Across epochs, the proposed model achieved average training accuracy, mean Intersection over Union (mIoU), and Dice coefficient scores of 0.967, 0.521, and 0.990, and average validation scores of 0.965, 0.546, and 0.992, respectively. The proposed model achieves 98.28% accuracy in brain tumour segmentation. The proposed methodology has the potential to transform the landscape of brain tumour diagnosis and treatment.
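The Dice coefficient and mean IoU reported above are standard segmentation metrics computed from label masks. A minimal NumPy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_iou(pred, target, num_classes, eps=1e-7):
    """Mean Intersection over Union, averaged over all class labels."""
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))
```

Note that Dice is typically computed per foreground class on binary masks, while mIoU averages the background class in as well, which is one reason the two scores above can diverge so widely.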
The brain is a focal part of the nervous system, and a tumour that invades it diminishes human lifespan. The anatomy of the brain can be captured by magnetic resonance imaging (MRI) or computed tomography (CT). Accurate segmentation of brain tumours in multimodality images under inadequate computing resources is a challenging task in the medical field. To overcome these difficulties, we propose a novel approach whose segmentation pipeline comprises the following steps. Pre-processing is needed to remove noise and smooth the image; a cross-guided bilateral filter (CGBF) is introduced to eradicate noise in multimodality images. A hybrid dual-tree complex wavelet transform with Walsh-Hadamard transform (hybrid DTCWT-WHT), combined with a Gabor filter, is then proposed to extract an indispensable hybrid set of features from the respective wavelet transforms; this hybrid DTCWT-WHT approach enables accurate identification of brain tumours in multimodality brain images. Features are key to differentiating and deciding the exact class of brain tumour: the proposed hybrid features predict the presence of tumours and help segment the brain region correctly. Secondly, adaptive mayfly optimization (AMO) is proposed to select crucial features from the feature vectors and discard those not required, after which classification categorizes images as tumour or non-tumour. To strengthen the segmentation resolution efficiently, a fuzzy group teaching (FGT) algorithm is proposed. The scheme is evaluated on the Brain Tumour Segmentation (BraTS) 2020 dataset. The resulting performance is 98.18% accuracy, 95.50% precision, 97.14% recall, 97.14% F1-score, 98.66% specificity, 95.52% structural similarity index metric (SSIM), 95.12% universal quality index (UQI), 0.94 Jaccard index, and 0.97 Dice coefficient, respectively.
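The Gabor-filter stage of a pipeline like the one above extracts oriented texture responses. A minimal sketch, assuming a standard real-valued Gabor kernel (the kernel parameters here are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor kernel: a Gaussian envelope times a sinusoid
    oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / lam + psi)

def gabor_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of filter responses at several orientations,
    a common compact texture descriptor."""
    feats = []
    for theta in thetas:
        resp = convolve2d(image, gabor_kernel(theta=theta), mode='same')
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

In the full method these responses would be combined with the DTCWT-WHT coefficients before feature selection by AMO.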
The surging use of medical AI algorithms and their hardware integration is transforming healthcare by improving non-invasive medical analysis with early disease detection, advanced segmentation, and classification. However, comprehensive and accurate medical analysis with efficient AI-based tools has a fundamental requirement: extensive multimodal data for training deep learning models. Handling this data volume demands significant hardware resources, including multi-node training, to meet the substantial computational requirements of model development. The challenge is therefore two-fold: achieving high accuracy while keeping the solution computationally inexpensive. To navigate this challenge, we propose a novel and efficient solution: a lightweight predictive tool for medical image classification that combines a radiomics-based Random Forest model with a MobileViT transformer, tailored for mobile applications. This approach offers enhanced accuracy and reproducibility along with hardware flexibility. The proposed method's superior performance in the BraTS 2021 challenge surpasses current state-of-the-art models, with the best AUROC of 0.64 and 0.63 on the public and private test datasets, respectively. The success of our approach highlights the potential of hybrid models in diverse medical applications beyond image classification.
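A hypothetical sketch of the tabular half of such a hybrid: a Random Forest trained on radiomics-style features and scored with AUROC (the MobileViT branch is omitted, and the synthetic features stand in for real radiomics output):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for radiomics features (shape, first-order, texture).
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# AUROC is computed from predicted class-1 probabilities, not hard labels.
auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

In a hybrid design, the forest's probability output would typically be fused (e.g. averaged or stacked) with the transformer's prediction.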
The combination of radiomics and artificial intelligence has emerged as a strong technique for building predictive models in radiology. This study addresses the clinically important question of whether a radiomic profile derived from pre-operative structural magnetic resonance imaging (MRI) scans can predict the overall survival (OS) time of glioblastoma multiforme (GBM) patients who underwent gross total resection (GTR). A retrospective analysis was made using glioma-patient data made publicly available by the University of Pennsylvania. Radiomic characteristics were extracted from pre-operative structural multiparametric MRI (mpMRI) sequences after pre-processing and 3D segmentation using deep learning (DL). After removing irrelevant features, machine learning (ML) regression models were developed on the selected features to predict the OS time of GBM patients in days. Patients were divided into three survivor groups according to their projected survival time, and statistical analysis was performed to validate the significance of the selected feature set. A total of 494 patients were considered, improving survival prediction (SP) through more effective feature extraction and selection techniques. The ridge regressor achieved the highest Spearman rank correlation of 0.635 with an accuracy of 69%, the best of all previous works for categorical predictions of such patients. Earlier studies using radiomic characteristics for the OS prognosis of GBM patients yielded limited results; the current work records improved accuracy and Spearman rank correlation for the three survivor classes of GBM patients using ML, feature selection, and radiomics. The significance of this work lies in the selection of patients with GTR and the extraction of radiomic characteristics through radiomics and artificial intelligence.
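The ridge-regression-plus-Spearman evaluation described above can be sketched as follows. This is an illustrative reconstruction on synthetic data; the feature values and the survivor-group cut-offs are invented, not the study's:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for the selected radiomic features of 494 patients.
X = rng.normal(size=(494, 10))
os_days = 300 + 50 * X[:, 0] - 30 * X[:, 1] + rng.normal(scale=40, size=494)

X_tr, X_te, y_tr, y_te = train_test_split(X, os_days, test_size=0.3,
                                          random_state=1)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred_days = model.predict(X_te)
# Spearman rank correlation between predicted and true OS in days.
rho, _ = spearmanr(pred_days, y_te)
# Bin predictions into three survivor groups (cut-offs are hypothetical).
groups = np.digitize(pred_days, [300, 450])
```

Spearman correlation is the natural metric here because the clinical question is whether patients are *ranked* correctly by survival, not whether the day count is exact.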
Prediction of overall survival based on multimodal MRI of brain tumor patients is a difficult problem. Although survival also depends on factors that cannot be assessed via preoperative MRI, such as surgical outcome, encouraging results for MRI-based survival analysis have been published for different datasets. We assess if and how established radiomic approaches, as well as novel methods, can predict overall survival of brain tumor patients on the BraTS challenge dataset. This dataset consists of multimodal preoperative images of 211 glioblastoma patients from several institutions with reported resection status and known survival. In the official challenge setting, only patients with a reported gross total resection are taken into account. We therefore evaluated previously published methods as well as different machine learning approaches on the BraTS dataset. For different types of resection status, these approaches are compared to a baseline: a linear regression on patient age only. This naive approach won 3rd place out of 26 participants in the BraTS survival prediction challenge 2018. Previously published radiomic signatures show significant correlations and predictiveness of patient survival for patients with a reported subtotal resection. However, for patients with reported gross total resection, none of the evaluated approaches was able to outperform the age-only baseline in a cross-validation setting, explaining the poor performance of radiomics-based approaches in the BraTS challenge 2018.
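The age-only baseline described above is deliberately trivial, which is what makes it a strong reference point. A minimal sketch on synthetic data (the slope, noise level, and cohort values are illustrative assumptions, not the challenge's data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Synthetic cohort of 211 patients: older age loosely implies shorter survival.
age = rng.uniform(30, 85, size=211).reshape(-1, 1)
survival_days = 900 - 8 * age[:, 0] + rng.normal(scale=120, size=211)

# The entire "model": ordinary least squares on a single feature, age.
baseline = LinearRegression().fit(age, survival_days)
pred = baseline.predict(age)
```

Any radiomics pipeline has to beat this one-parameter fit under cross-validation before its extra imaging features can be claimed to carry survival signal.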
This paper presents an automated approach to multiclass classification of four commonly diagnosed central nervous system (CNS) brain tumors: astrocytoma, glioblastoma multiforme, meningioma, and oligodendroglioma. In addition, the proposed approach is used for binary classification of glioma brain tumors into low-grade and high-grade glioma. The automated approach for multiclass and binary classification is based on threshold segmentation of fused magnetic resonance imaging sequences, proposed hybrid feature extraction methods combined with shape-based features, and an ensemble learning classifier. Two hybrid feature extraction methods are proposed: one based on the discrete wavelet transform plus gradient grey-level co-occurrence matrix, and a second based on the discrete wavelet transform plus local binary pattern plus grey-level run-length matrix. The extracted texture features, along with the shape-based features, are further reduced using principal component analysis. The selected features are finally used to train a majority-voting ensemble classifier on a local CNS dataset and the Brain Tumor Segmentation 2013 and 2015 global datasets. The proposed automated system delivers accuracies of 99.12%, 95.24%, 97.62%, and 97.62% for the correct classification of astrocytoma, glioblastoma multiforme, meningioma, and oligodendroglioma on the local CNS dataset, and accuracies of 100% and 99.52% for the binary classification of glioma on the Brain Tumor Segmentation 2013 and 2015 global datasets using 10-fold cross-validation.
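The back half of that pipeline, PCA reduction feeding a majority-voting ensemble evaluated with 10-fold cross-validation, can be sketched with scikit-learn. The base classifiers and synthetic features here are assumptions for illustration; the paper does not specify its ensemble members in this abstract:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the texture + shape feature vectors, 4 tumor classes.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=3)

# Majority ("hard") voting over three heterogeneous base classifiers.
ensemble = VotingClassifier([
    ('rf', RandomForestClassifier(random_state=3)),
    ('svm', SVC(random_state=3)),
    ('knn', KNeighborsClassifier()),
], voting='hard')

# PCA inside the pipeline so each fold fits its own projection (no leakage).
pipe = make_pipeline(PCA(n_components=15), ensemble)
scores = cross_val_score(pipe, X, y, cv=10)
```

Putting PCA inside the cross-validation pipeline rather than applying it once up front is what keeps the reported fold accuracies honest.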
Multimodal brain MR image analysis remains a challenging research area due to its complex intensity distribution and sensitivity to noise. Tumourous cells have different characteristics from normal human cells, which makes them more salient. In this Letter, the authors propose a novel unsupervised, spatial-information-based saliency-boosting tumour detection method that helps identify tumourous cells by making them more clearly visible. Initially, a pseudo-coloured MR image is formed using the CIELab colour space. A saliency map is then established by calculating the distance among scale-varying elliptical windows in both spatial and colour space. The elliptical windows aim to cover the curved outlines of the brain images, and the average intensity value is kept constant by fixing the axis ratio for each window. The proposed algorithm has been evaluated on both real and simulated brain images of different patients from the MICCAI BraTS database. Performance analysis of the new algorithm shows higher accuracy with lower computational complexity than other state-of-the-art methods; the efficiency stems from the window not having to move across every row and column of the image. The novelty of the proposed technique is that it neither downscales the input images nor requires any training.
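To make the colour-space idea concrete, here is a heavily simplified global-contrast variant of Lab-space saliency: each pixel's saliency is its Euclidean colour distance from the image mean. This is a standard simplification for illustration only; the authors' method uses distances between scale-varying elliptical windows, not a single global mean:

```python
import numpy as np

def colour_distance_saliency(lab_image):
    """Saliency as each pixel's Euclidean distance, in CIELab colour space,
    from the image's mean colour, normalized to [0, 1].

    lab_image: float array of shape (H, W, 3) holding L, a, b channels.
    """
    mean_colour = lab_image.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(lab_image - mean_colour, axis=-1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Because a small bright tumour region shifts the mean only slightly, its pixels sit far from the mean colour and light up in the map, which is the intuition the windowed method refines with spatial structure.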