Breast cancer is the second leading cause of death for women, so accurate early detection can help decrease breast cancer mortality rates. Computer-aided detection allows radiologists to detect abnormalities efficiently. Medical images are sources of information relevant to the detection and diagnosis of various diseases and abnormalities. Several imaging modalities allow radiologists to study internal structures, and these modalities have attracted great interest in many areas of research; in some medical fields, each of these modalities is of considerable significance. This study presents a review of recent applications of machine learning and deep learning for detecting and classifying breast cancer and provides an overview of progress in this area. The review covers the classification of breast cancer using multimodal medical imaging. Details are also given on techniques developed to facilitate the classification of tumors, non-tumors, and dense masses in various medical imaging modalities. It first provides an overview of the different machine learning approaches, followed by the deep learning techniques and specific architectures used for breast cancer detection and classification. We also briefly survey the different image modalities to give a complete overview of the area. The review draws on a broad variety of research databases as sources of publications in the field. Finally, it summarizes future trends and challenges in the classification and detection of breast cancer.
•Analyze current research methodologies based on deep learning and machine learning techniques.
•Review deep learning approaches for breast cancer classification using various medical imaging modalities.
•Review machine learning approaches for breast cancer classification using various medical imaging modalities.
•Illustrate the medical imaging modalities used for classifying breast cancer.
•Present the datasets used in classification models for medical images.
Breast cancer is a common and fatal disease among women worldwide. Therefore, the early and precise diagnosis of breast cancer plays a pivotal role in improving the prognosis of patients with this disease. Several studies have developed automated techniques using different medical imaging modalities to predict breast cancer development. However, few review studies are available to recapitulate the existing literature on breast cancer classification. These studies provide an overview of the classification, segmentation, or grading of many cancer types, including breast cancer, using traditional machine learning approaches with hand-engineered features. This review focuses on breast cancer classification using multiple medical imaging modalities and state-of-the-art deep neural network approaches. It is anticipated to maximize the procedural decision analysis in five aspects: types of imaging modalities, datasets and their categories, pre-processing techniques, types of deep neural network, and performance metrics used for breast cancer classification. Forty-nine journal and conference publications from eight academic repositories were methodically selected and carefully reviewed from the perspective of these five aspects. In addition, this study provides quantitative, qualitative, and critical analyses of the five aspects. The review shows that mammograms and histopathologic images were most often used to classify breast cancer. Moreover, about 55% of the selected studies used public datasets, while the remainder used exclusive datasets. Several studies employed augmentation, scaling, and image normalization pre-processing techniques to minimize inconsistencies in breast cancer images. Several shallow and deep neural network architectures were employed to classify breast cancer from images; the convolutional neural network was most frequently utilized to construct effective breast cancer classification models.
Some of the selected studies employed a pre-trained network, while others developed new deep neural networks to classify breast cancer. Most of the selected studies used accuracy and area-under-the-curve metrics, followed by sensitivity, precision, and F-measure, to evaluate the performance of the developed breast cancer classification models. Finally, this review presents 10 open research challenges for future scholars interested in developing breast cancer classification models with various imaging modalities. The review could serve as a valuable resource for beginners in medical image classification and for advanced scientists focusing on deep learning-based breast cancer classification through different medical imaging modalities.
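The augmentation, scaling, and normalization pre-processing steps mentioned above can be sketched in a few lines. The following is a minimal illustration of the general idea, not any particular study's pipeline; it assumes NumPy arrays of grey-level intensities, and `normalize` and `augment` are hypothetical helper names:

```python
import numpy as np

def normalize(img):
    """Min-max normalise pixel intensities to the range [0, 1]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def augment(img):
    """Return simple flip/rotation variants, a common way to enlarge small datasets."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

# toy 2x2 "image" patch with raw intensities
patch = np.array([[0, 50], [100, 200]])
norm = normalize(patch)
print(norm.max(), norm.min())   # 1.0 0.0
print(len(augment(norm)))       # 4
```

Normalization of this kind removes scanner-dependent intensity scales before training, while the flipped/rotated copies multiply the effective training-set size fourfold.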
Detection and diagnosis of a disease from a single image can be tedious and difficult for doctors, but the adoption of medical image fusion can pave a path for further improvement. The objective of this research is to implement different fusion algorithms based on conventional and proposed hybrid techniques. Based on performance metrics, it has been observed that the novel Discrete Component Wavelet Transform (DCWT) method shows remarkable results in comparison with traditional techniques. Among the enhancement methods, binarization, median filtering, and contrast stretching were considered to compare contrast performance against Contrast Limited Adaptive Histogram Equalization. Certain modifications related to parameter selection were made to each enhancement method. Thus, better qualitative and quantitative values were observed for the Discrete Component Wavelet Transform. Various attributes were calculated from the fused images, which were then classified using several machine learning techniques. Maximum accuracies of 97.87% and 95.74% were obtained using the Discrete Component Wavelet Transform with the Support Vector Machine (SVM) and k-Nearest Neighbor (kNN, k = 4) classifiers, respectively, when combining Grey Level Difference Statistics and shape features.
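As an illustration of the classification stage, a minimal k-Nearest Neighbor majority vote (k = 4, as above) over toy feature vectors can be sketched as follows. The feature values stand in for descriptors such as Grey Level Difference Statistics and shape features, and `knn_predict` is an illustrative helper, not the study's implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=4):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)       # distances to every training sample
    nearest = y_train[np.argsort(d)[:k]]          # labels of the k closest samples
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# toy 2-D feature vectors (stand-ins for texture + shape descriptors), labels 0/1
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.12, 0.18])))  # 0
print(knn_predict(X, y, np.array([0.88, 0.82])))  # 1
```

With k = 4 and three samples per class, the vote is decided by the three same-class neighbours plus the single nearest sample of the opposite class, so queries near either cluster are labelled with that cluster's class.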
Recently, medical image registration and fusion have come to be considered valuable assistants for medical experts. The role of these processes arises from their ability to help experts make diagnoses, follow up on the evolution of diseases, and decide on the necessary therapies for the patient's condition. Therefore, the aim of this paper is to focus on medical image registration as well as medical image fusion. In addition, the paper presents a description of the common diagnostic imaging modalities along with the main characteristics of each. The paper also illustrates the most well-known toolkits developed to support registration and fusion processes. Finally, the paper presents the current challenges in medical image registration and fusion by illustrating the recent diseases and disorders that have been addressed through such analyses.
This paper is an effort to encapsulate the various developments in the domain of unsupervised, supervised, and semi-supervised brain anomaly detection approaches proposed by researchers working in medical image segmentation and classification. Researchers are constantly working in image segmentation, interpretation, and computer vision to automate tumour segmentation, anomaly detection, classification, and other structural disorder prediction at an early stage with the aid of computers. Different medical imaging modalities are used by doctors to diagnose brain tumours and other structural brain disorders, which is an integral part of the diagnosis and prognosis process. When these medical image modalities are combined with various image segmentation methods and machine learning approaches, brain structural disorder detection and classification can be performed in a semi-automated or fully automated manner with high accuracy. This paper presents such approaches across various medical image modalities for the accurate detection and classification of brain tumours and other structural brain disorders. All the major phases of a brain tumour or structural disorder detection and classification pipeline are covered, beginning with a comparison of medical image pre-processing techniques, followed by the major segmentation approaches and then the approaches based on machine learning. This paper also presents an evaluation and comparison of popular texture- and shape-based feature extraction methods used in combination with different machine learning classifiers on the BRATS 2013 dataset. The fusion of MRI modalities together with hybrid feature extraction methods and an ensemble model delivers the best results in terms of accuracy.
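The texture-based feature extraction compared in such pipelines can be illustrated with first-order statistics over a grey-level patch. This is a minimal sketch assuming NumPy; `first_order_features` is an illustrative helper, not the paper's method:

```python
import numpy as np

def first_order_features(img):
    """First-order texture statistics (mean, variance, histogram entropy) of a patch."""
    x = img.astype(np.float64).ravel()
    hist, _ = np.histogram(x, bins=8)
    p = hist / hist.sum()            # grey-level probability distribution
    p = p[p > 0]                     # drop empty bins before taking logs
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "entropy": -np.sum(p * np.log2(p)),
    }

rng = np.random.default_rng(0)
uniform_patch = np.full((8, 8), 7.0)          # homogeneous tissue
noisy_patch = rng.integers(0, 256, (8, 8))    # heterogeneous texture
f1 = first_order_features(uniform_patch)
f2 = first_order_features(noisy_patch)
print(f2["entropy"] > f1["entropy"])          # True
```

A homogeneous patch has zero histogram entropy, while textured tissue spreads its intensities across bins; feature vectors of this kind are what downstream classifiers consume.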
Ultrasound is a highly adaptable medical imaging modality with a wide range of diagnostic and therapeutic applications. The principles of sound wave propagation and reflection enable ultrasound imaging to function as a highly safe modality. The technique produces real-time visual representations, thereby assisting in the evaluation of various medical conditions such as cardiac, gynecologic, and abdominal diseases, among others. The ultrasound modality encompasses a diverse range of modes and mechanisms that enhance the assessment of pathology and physiology; Doppler imaging and ultrasound elastography, in particular, are two such techniques. Elastography-based imaging methods have attracted significant interest in recent years for the non-invasive evaluation of tissue mechanical characteristics. These techniques exploit the changes in soft tissue elasticity that occur in various diseases to generate both qualitative and quantitative diagnostic data. These specialized imaging techniques collect data by measuring tissue stiffness under mechanical forces such as compression or shear waves. In this review paper, we provide a comprehensive examination of the fundamental concepts, underlying physics, and limitations of ultrasound elastography, and we present a concise overview of its present-day clinical utilization and ongoing advancements across many clinical domains.
Multimodal medical image fusion aims to reduce insignificant information and improve clinical diagnosis accuracy. The purpose of image fusion is to retain the salient features and detail information of multiple source images to yield a more informative fused image. A hybrid algorithm operating at both the pixel and feature levels of multimodal medical image fusion is presented in this paper. For the pixel-level fusion, the source images are decomposed into low- and high-frequency components using the Discrete Wavelet Transform (DWT), and the low-frequency coefficients are fused using the maximum fusion rule. Thereafter, the curvelet transform is applied to the high-frequency coefficients, and the resulting fine-scale high-frequency subbands are fused using a Principal Component Analysis (PCA) fusion rule. The feature-level fusion, on the other hand, is accomplished by extracting various features from the coarse and detail subbands, namely the mean, variance, entropy, visibility, and standard deviation, and using them in the fusion process. Thereafter, the inverse curvelet transform is applied to the fused high-frequency coefficients, and the resultant fused image is finally acquired by applying the inverse DWT to the fused low- and high-frequency components. The proposed method is implemented and evaluated on different pairs of medical image modalities. The results demonstrate that the proposed method improves the quality of the final fused image in terms of Mutual Information (MI), Correlation Coefficient (CC), entropy, Structural Similarity index (SSIM), Edge Strength Similarity for Image quality (ESSIM), Peak Signal-to-Noise Ratio (PSNR), and the edge-based similarity measure (Q^{AB/F}).
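The pixel-level idea of decomposing, fusing band-by-band, and inverting can be sketched compactly. For self-containment this uses a hand-rolled one-level Haar transform with a maximum rule in place of the paper's DWT/curvelet/PCA combination, so it is a simplified illustration and all function names are made up for the example:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: approximation LL plus detail bands LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Exact inverse of haar2d."""
    lh, hl, hh = bands
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img_a, img_b):
    """Maximum rule on the approximation, absolute-maximum on the detail bands."""
    ll_a, det_a = haar2d(img_a)
    ll_b, det_b = haar2d(img_b)
    ll = np.maximum(ll_a, ll_b)
    det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                for da, db in zip(det_a, det_b))
    return ihaar2d(ll, det)

# complementary toy images: each is bright where the other is dark
a = np.tile([[1.0, 1.0, 0.0, 0.0]], (4, 1))
b = 1.0 - a
fused = fuse(a, b)
print(fused.min(), fused.max())   # 1.0 1.0 -> both bright regions survive fusion
```

The maximum rule keeps, per coefficient, whichever source image carries the stronger response, which is why the fused result preserves the bright halves of both inputs.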
Much of medical knowledge is stored in the biomedical literature, collected in archives like PubMed Central that continue to grow rapidly. A significant part of this knowledge is contained in images with limited metadata available, which makes it difficult to explore the visual knowledge in the biomedical literature; extraction of metadata from visual content is therefore important. One important piece of metadata is the type of the image, which may be one of the various medical imaging modalities, such as X-ray, computed tomography, or magnetic resonance imaging, or one of the general graph types that are frequent in the literature. This study explores a late, score-based fusion of several deep convolutional neural networks with a traditional hand-crafted bag-of-visual-words classifier to classify images from the biomedical literature into image types or modalities. It achieved a classification accuracy of 85.51% on the ImageCLEF 2013 modality classification task, which is better than the best visual methods in the challenge for which the data were produced, and comparable to mixed methods that use both visual and textual information. It achieved similarly good classification accuracies of 84.23% and 87.04% before and after augmentation, respectively, on the related ImageCLEF 2016 subfigure classification task.
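Late, score-based fusion reduces to combining the per-model class-score vectors and taking the best class. A minimal sketch with illustrative numbers (not the study's actual models or scores):

```python
import numpy as np

def late_fuse(score_sets, weights=None):
    """Weighted average of per-model class-probability vectors (late fusion)."""
    s = np.asarray(score_sets, dtype=np.float64)              # (n_models, n_classes)
    w = np.ones(len(s)) if weights is None else np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                                           # normalise model weights
    return (w[:, None] * s).sum(axis=0)

# hypothetical scores from two CNNs and a bag-of-visual-words classifier
# over three image types (e.g. X-ray, CT, MRI)
cnn1 = [0.7, 0.2, 0.1]
cnn2 = [0.6, 0.3, 0.1]
bovw = [0.2, 0.5, 0.3]
fused = late_fuse([cnn1, cnn2, bovw])
print(fused.argmax())   # 0 -> the first class wins after fusion
```

Because fusion happens on scores rather than features, heterogeneous models (deep networks and a hand-crafted classifier) can be combined without any shared representation; per-model weights let stronger models count for more.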
3D magnetic resonance imaging (3D MRI) is one of the most preferred medical imaging modalities for the analysis of anatomical structures, where the acquisition of multiple slices along the slice-select gradient direction is very common. In 2D multi-slice acquisition, adjacent slices are highly correlated because of the very narrow inter-slice gaps. The application of compressed sensing (CS) to MRI significantly reduces traditional MRI scan time through random undersampling. The authors first propose a fast interpolation technique to estimate the missing samples in the k-space of a highly undersampled slice (H-slice) from the k-space(s) of neighbouring lightly undersampled slices (L-slices). Subsequently, an efficient multi-slice CS-MRI reconstruction technique based on weighted wavelet forest sparsity and joint total variation regularisation norms is applied simultaneously to both the interpolated H-slices and the non-interpolated L-slices. Simulation results show that the proposed CS reconstruction for 3D MRI is not only computationally faster but also achieves significant improvements in visual quality and quantitative performance metrics compared with existing methods.
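The random-undersampling acquisition model underlying CS-MRI can be sketched as follows; `undersample_kspace` is an illustrative helper, and the two keep fractions merely mimic the H-slice/L-slice distinction, not the authors' sampling scheme:

```python
import numpy as np

def undersample_kspace(img, keep_fraction, seed=0):
    """Randomly undersample the 2-D k-space (Fourier transform) of an image."""
    k = np.fft.fft2(img)
    rng = np.random.default_rng(seed)
    mask = rng.random(k.shape) < keep_fraction   # keep each sample with given probability
    mask[0, 0] = True                            # always keep the DC (zero-frequency) sample
    return k * mask, mask

rng = np.random.default_rng(1)
slice_ = rng.random((16, 16))                    # toy image slice
k_h, mask_h = undersample_kspace(slice_, 0.25)   # highly undersampled H-slice
k_l, mask_l = undersample_kspace(slice_, 0.75)   # lightly undersampled L-slice
print(mask_h.mean() < mask_l.mean())             # True -> H-slice retains fewer samples
```

Scan time scales with the number of acquired k-space samples, which is why an H-slice is cheap to acquire; CS reconstruction (and, here, interpolation from correlated L-slices) then recovers the missing samples.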