Pathology is the cornerstone of cancer care. The need for accuracy in the histopathologic diagnosis of cancer is increasing as personalized cancer therapy requires accurate biomarker assessment. The emergence of digital image analysis holds promise to improve both the volume and precision of histomorphological evaluation. Recently, machine learning, and particularly deep learning, has enabled rapid advances in computational pathology. The integration of machine learning into routine care will be a milestone for the healthcare sector in the next decade, and histopathology is right at the centre of this revolution. Examples of potential high-value machine learning applications include both model-based assessment of routine diagnostic features in pathology and the ability to extract and identify novel features that provide insights into a disease. Recent groundbreaking results have demonstrated that applications of machine learning methods in pathology significantly improve metastasis detection in lymph nodes, Ki67 scoring in breast cancer, Gleason grading in prostate cancer and tumour-infiltrating lymphocyte (TIL) scoring in melanoma. Furthermore, deep learning models have also been shown to predict the status of some molecular markers in lung, prostate, gastric and colorectal cancer based on standard HE slides. Moreover, prognostic (survival outcome) deep neural network models based on digitized HE slides have been demonstrated in several diseases, including lung cancer, melanoma and glioma. In this review, we aim to present and summarize the latest developments in digital image analysis and in the application of artificial intelligence in diagnostic pathology.
Background
Ultrasound is employed in needle interventions to visualize the anatomical structures and track the needle. Nevertheless, needle detection in ultrasound images is a difficult task, particularly at steep insertion angles.
Purpose
A new method is presented to enable effective needle detection using ultrasound B‐mode and power Doppler analyses.
Methods
A small buzzer is used to excite the needle, and an ultrasound system is utilized to acquire B-mode and power Doppler images of the needle. The B-mode and power Doppler images are processed using the Radon transform and local-phase analysis to initially detect the axis of the needle. The detection of the needle axis is improved by processing the power Doppler image using alpha shape analysis to define a region of interest (ROI) that contains the needle. Also, a set of feature maps is extracted from the ROI in the B-mode image. The feature maps are processed using a machine learning classifier to construct a likelihood image that visualizes the posterior needle likelihoods of the pixels. The Radon transform is applied to the likelihood image to achieve an improved needle axis detection. Additionally, the region in the B-mode image surrounding the needle axis is analyzed to identify the needle tip using a custom-made probabilistic approach. Our method was utilized to detect needles inserted in ex vivo animal tissues at shallow (20°–40°), moderate (40°–60°), and steep (60°–85°) angles.
Results
Our method detected the needles with failure rates equal to 0% and mean angle, axis, and tip errors less than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively. Additionally, our method achieved favorable results compared to two recently introduced needle detection methods.
Conclusions
The results indicate the potential of applying our method to achieve effective needle detection in ultrasound images.
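The Radon-transform step of the axis-detection pipeline described above can be illustrated with a minimal voting sketch (the Radon and Hough transforms are closely related for detecting a bright line in a binary image). This is a hypothetical helper for illustration only, not the authors' implementation; function and parameter names are invented.

```python
import numpy as np

def detect_line_angle(binary_img, n_angles=180):
    """Estimate the normal angle (degrees) of the dominant line in a binary image.

    For each candidate angle theta, project the bright pixels onto the
    direction (cos theta, sin theta); a straight line collapses into a
    single narrow bin of the projection histogram, so the angle with the
    most concentrated projection wins.
    """
    ys, xs = np.nonzero(binary_img)
    diag = int(np.ceil(np.hypot(*binary_img.shape)))
    best_theta, best_score = 0.0, -1
    for theta_deg in range(n_angles):
        t = np.deg2rad(theta_deg)
        rho = xs * np.cos(t) + ys * np.sin(t)          # signed projection offsets
        hist, _ = np.histogram(rho, bins=2 * diag, range=(-diag, diag))
        score = hist.max()                              # peak bin occupancy
        if score > best_score:
            best_score, best_theta = score, theta_deg
    return float(best_theta)
```

Note that the returned angle is the direction of the line's normal (a horizontal line yields 90°); in a full pipeline this peak would be refined on the classifier's likelihood image rather than on a hard-thresholded mask.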
Medical deep learning—A systematic meta-review
Egger, Jan; Gsaxner, Christina; Pepe, Antonio ...
Computer Methods and Programs in Biomedicine, June 2022, Volume 221
Journal article · Peer-reviewed · Open access
• providing an overview of current deep learning reviews where a medical application plays the key role, and arranging the researched works chronologically for a historical common thread and picture over the years.
• extracting the overall number of referenced works and citations to give an impression of the research influence and footprints of the respective field.
• analyzing, exploring and highlighting the main reasons for the massive research efforts on this topic.
• conducting a comprehensive discussion of the current state-of-the-art in the deep learning area, with achievements but also failures from other domains that should be avoided and not repeated in the medical area.
• providing a critical expert opinion and pointing out further controversies.
Deep learning has remarkably impacted several different scientific disciplines over the last few years. For example, in image processing and analysis, deep learning algorithms were able to outperform other cutting-edge methods. Additionally, deep learning has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning outperformed humans, for example with object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps or online websites. The abundance of collected patient data and the recent growth in the deep learning field has resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term ‘deep learning’, and around 90% of these publications are from the last three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of ‘medical deep learning’ is almost impossible to obtain, and acquiring a full overview of medical sub-fields is becoming increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
► We provide a review of recent developments and advances in pore-scale X-ray tomographic imaging of subsurface porous media.
► The particular focus is on immiscible multi-phase fluid flow and quantitative analyses.
► Advances in both imaging techniques and image processing are discussed, and future trends are addressed.
We report here on recent developments and advances in pore-scale X-ray tomographic imaging of subsurface porous media. Our particular focus is on immiscible multi-phase fluid flow, i.e., the displacement of one immiscible fluid by another inside a porous material, which is of central importance to many natural and engineered processes. Multiphase flow and displacement can pose a rather difficult problem, both because the underlying physics is complex, and also because standard laboratory investigation reveals little about the mechanisms that control micro-scale processes. X-ray microtomographic imaging is a non-destructive technique for quantifying these processes in three dimensions within individual pores, and as we report here, with rapidly increasing spatial and temporal resolution.
Geographic object-based image analysis (GEOBIA) is a remote sensing image analysis paradigm that defines and examines image-objects: groups of neighboring pixels that represent real-world geographic objects. Recent reviews have examined methodological considerations and highlighted how GEOBIA improves upon the 30+ year pixel-based approach, particularly for H-resolution imagery. However, the literature also exposes an opportunity to improve guidance on the application of GEOBIA for novice practitioners. In this paper, we describe the theoretical foundations of GEOBIA and provide a comprehensive overview of the methodological workflow, including: (i) software-specific approaches (open-source and commercial); (ii) best practices informed by research; and (iii) the current status of methodological research. Building on this foundation, we then review recent research on the convergence of GEOBIA with deep convolutional neural networks, which we suggest is a new form of GEOBIA. Specifically, we discuss general integrative approaches and offer recommendations for future research. Overall, this paper describes the past, present, and anticipated future of GEOBIA in a novice-accessible format, while providing innovation and depth to experienced practitioners.
• We explore an architecture modification of ResNet.
• We study the effect of feature transformation on model performance in grouped convolution.
• We perform benchmark tests on multiple medical image classification and segmentation applications.
• The ResGANet network proposed in this paper is superior to ResNet and its variants in the medical image classification tests, and can be directly used as the backbone network for medical image segmentation tasks.
In recent years, deep learning technology has shown superior performance in different fields of medical image analysis. Some deep learning architectures have been proposed and used for computational pathology classification, segmentation, and detection tasks. Due to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that can capture feature dependencies in medical images in two independent dimensions: channel and space. By stacking these group attention blocks in ResNet-style, we obtain a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51–3.47 times fewer parameters than the original ResNet and can be directly used for downstream medical image segmentation tasks. Many experiments show that the proposed ResGANet is superior to state-of-the-art backbone models in medical image classification tasks. Applying it to different segmentation networks can improve the baseline model in medical image segmentation tasks without changing the network architecture. We hope that this work provides a promising method for enhancing the feature representation of convolutional neural networks (CNNs) in the future.
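The idea of a group attention block that gates features along the channel and spatial dimensions independently can be sketched in plain numpy. This is a loose illustration of the general channel/spatial attention pattern under invented names, not the ResGANet architecture itself (which is a trained convolutional network).

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Squeeze each channel to a scalar by global average
    # pooling, then rescale that channel by a sigmoid gate.
    gate = _sigmoid(x.mean(axis=(1, 2)))            # shape (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    # Pool across channels to a (H, W) saliency map, then gate each
    # spatial position of every channel by it.
    gate = _sigmoid(x.mean(axis=0))                 # shape (H, W)
    return x * gate[None, :, :]

def group_attention_block(x, groups=2):
    # Split the channels into groups, apply channel-then-spatial
    # attention inside each group, re-concatenate, and add a
    # ResNet-style residual connection.
    chunks = np.array_split(x, groups, axis=0)
    out = np.concatenate(
        [spatial_attention(channel_attention(c)) for c in chunks], axis=0
    )
    return x + out
```

In the real architecture the pooling and gating are learned layers; the sketch only shows where the two attention dimensions act and how the blocks stack residually.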
Remote sensing (RS) platforms such as unmanned aerial vehicles (UAVs) represent an essential source of information in precision agriculture (PA), as they are able to provide images on a daily basis and at a very high resolution. In this framework, this study aims to identify the optimal level of nitrogen (N)-based nutrients for improved productivity in an onion field of “Cipolla Rossa di Tropea” (Tropea red onion). Following an experiment that involved the arrangement of nine plots in the onion field in a randomized complete block design (RCBD), with three replications, three different levels of N fertilization were compared: N150 (150 kg N ha⁻¹), N180 (180 kg N ha⁻¹), and N210 (210 kg N ha⁻¹). The crop cycle was monitored using multispectral (MS) UAV imagery, producing vigor maps and taking into account yield data. The soil-adjusted vegetation index (SAVI) was used to monitor the vigor of the crop. In addition, the onion coverage class was spatially identified using geographical object-based image classification (GEOBIA), observing differences in SAVI values obtained in plots subjected to differentiated N fertilizer treatment. The information retrieved from the analysis of soil properties (electrical conductivity, ammonium and nitrate nitrogen), yield performance and mean SAVI index data from each field plot showed significant relationships between the different indicators investigated. A higher onion yield was evident in plot N180, in which SAVI values were higher based on the production data.
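The SAVI used above has a standard closed form, SAVI = (1 + L) · (NIR − Red) / (NIR + Red + L) with the soil-brightness factor L commonly set to 0.5; a per-pixel computation is a one-liner. This sketch uses generic band arrays, not the study's actual imagery.

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index per pixel.

    nir, red : arrays (or scalars) of surface reflectance in [0, 1].
    L        : soil-brightness correction factor (0.5 is the common default).
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (1.0 + L) * (nir - red) / (nir + red + L)
```

Applied band-wise to the multispectral orthomosaic, the result is the vigor map that the plots' mean SAVI values are drawn from.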
Despite the availability of imaging-based and mass-spectrometry-based methods for spatial proteomics, a key challenge remains connecting images with single-cell-resolution protein abundance measurements. Here, we introduce Deep Visual Proteomics (DVP), which combines artificial-intelligence-driven image analysis of cellular phenotypes with automated single-cell or single-nucleus laser microdissection and ultra-high-sensitivity mass spectrometry. DVP links protein abundance to complex cellular or subcellular phenotypes while preserving spatial context. By individually excising nuclei from cell culture, we classified distinct cell states with proteomic profiles defined by known and uncharacterized proteins. In an archived primary melanoma tissue, DVP identified spatially resolved proteome changes as normal melanocytes transition to fully invasive melanoma, revealing pathways that change in a spatial manner as cancer progresses, such as mRNA splicing dysregulation in metastatic vertical growth that coincides with reduced interferon signaling and antigen presentation. The ability of DVP to retain precise spatial proteomic information in the tissue context has implications for the molecular profiling of clinical samples.
Dentists' diagnostic accuracy in detecting periapical radiolucency varies considerably. This systematic review and meta-analysis aimed to investigate the accuracy of artificial intelligence (AI) for detecting periapical radiolucency.
Studies reporting diagnostic accuracy and utilizing AI for periapical radiolucency detection, published until November 2023, were eligible for inclusion. Meta-analysis was conducted using the online MetaDTA Tool to calculate pooled sensitivity and specificity. Risk of bias was evaluated using QUADAS-2.
A comprehensive search was conducted in PubMed/MEDLINE, ScienceDirect, and Institute of Electrical and Electronics Engineers (IEEE) Xplore databases.
We identified 210 articles, of which 24 met the criteria for inclusion in the review. All but one study used one type of convolutional neural network. The body of evidence comes with an overall unclear to high risk of bias and several applicability concerns. Four of the twenty-four studies were included in a meta-analysis. AI showed a pooled sensitivity and specificity of 0.94 (95% CI = 0.90–0.96) and 0.96 (95% CI = 0.91–0.98), respectively.
AI demonstrated high specificity and sensitivity for detecting periapical radiolucencies. However, the current landscape suggests a need for diverse study designs beyond traditional diagnostic accuracy studies. Prospective real-life randomized controlled trials using heterogeneous data are needed to demonstrate the true value of AI.
Artificial intelligence tools seem to have the potential to support detecting periapical radiolucencies on imagery. Notably, nearly all studies did not test fully fledged software systems but measured the mere accuracy of AI models in diagnostic accuracy studies. The true value of currently available AI-based software for lesion detection on both 2D and 3D radiographs remains uncertain.
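The sensitivity and specificity pooled in the meta-analysis above are each derived from per-study 2×2 confusion-matrix counts. A minimal sketch of that per-study computation (the counts below are invented for illustration, not taken from any included study):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 diagnostic-accuracy counts.

    tp/fn: diseased cases the test flagged / missed.
    tn/fp: healthy cases the test cleared / flagged.
    """
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity
```

Pooling across studies (as done here with the MetaDTA tool) then fits a bivariate random-effects model to these per-study pairs rather than simply averaging them.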
• Multi-level fully convolutional networks for effective object segmentation.
• A novel method to harness information of object appearance and contour simultaneously.
• Transfer learning to mitigate the issue of insufficient training data.
• The method won two MICCAI challenges on object segmentation from histology images.
In histopathological image analysis, the morphology of histological structures, such as glands and nuclei, has been routinely adopted by pathologists to assess the malignancy degree of adenocarcinomas. Accurate detection and segmentation of these objects of interest from histology images is an essential prerequisite to obtain reliable morphological statistics for quantitative diagnosis. While manual annotation is error-prone, time-consuming and operator-dependent, automated detection and segmentation of objects of interest from histology images can be very challenging due to the large appearance variation, existence of strong mimics, and serious degeneration of histological structures. In order to meet these challenges, we propose a novel deep contour-aware network (DCAN) under a unified multi-task learning framework for more accurate detection and segmentation. In the proposed network, multi-level contextual features are explored based on an end-to-end fully convolutional network (FCN) to deal with the large appearance variation. We further propose to employ an auxiliary supervision mechanism to overcome the problem of vanishing gradients when training such a deep network. More importantly, our network can not only output accurate probability maps of histological objects, but also depict clear contours simultaneously for separating clustered object instances, which further boosts the segmentation performance. Our method ranked first in two histological object segmentation challenges, including the 2015 MICCAI Gland Segmentation Challenge and the 2015 MICCAI Nuclei Segmentation Challenge. Extensive experiments on these two challenging datasets demonstrate the superior performance of our method, surpassing all the other methods by a significant margin.
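The way a contour map can separate clustered instances from an object probability map can be shown with a simple fusion rule: keep a pixel only when the object probability is high and the contour probability is low, so touching objects are split along predicted boundaries. This is a minimal sketch of that general idea with invented names and thresholds, not the paper's exact post-processing.

```python
import numpy as np

def fuse_instance_mask(p_object, p_contour, t_o=0.5, t_c=0.5):
    """Fuse object and contour probability maps into an instance-interior mask.

    p_object  : per-pixel probability of belonging to an object.
    p_contour : per-pixel probability of lying on an object contour.
    A pixel is kept when it is confidently object AND confidently not
    contour, so that adjacent instances stay disconnected.
    """
    p_object = np.asarray(p_object, dtype=float)
    p_contour = np.asarray(p_contour, dtype=float)
    return (p_object >= t_o) & (p_contour < t_c)
```

In a full pipeline the resulting boolean mask would then be labeled into connected components, one per separated instance.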