Microscopic evaluation of resected tissue plays a central role in the surgical management of cancer. Because optical microscopes have a limited depth-of-field (DOF), resected tissue is either frozen or preserved with chemical fixatives, sliced into thin sections placed on microscope slides, stained, and imaged to determine whether surgical margins are free of tumor cells—a costly and time- and labor-intensive procedure. Here, we introduce a deep-learning extended DOF (DeepDOF) microscope to quickly image large areas of freshly resected tissue to provide histologic-quality images of surgical margins without physical sectioning. The DeepDOF microscope consists of a conventional fluorescence microscope with the simple addition of an inexpensive (less than $10) phase mask inserted in the pupil plane to encode the light field and enhance the depth-invariance of the point-spread function. When used with a jointly optimized image-reconstruction algorithm, diffraction-limited optical performance to resolve subcellular features can be maintained while significantly extending the DOF (200 μm). Data from resected oral surgical specimens show that the DeepDOF microscope can consistently visualize nuclear morphology and other important diagnostic features across highly irregular resected tissue surfaces without serial refocusing. With the capability to quickly scan intact samples with subcellular detail, the DeepDOF microscope can improve tissue sampling during intraoperative tumor-margin assessment, while offering an affordable tool to provide histological information from resected tissue specimens in resource-limited settings.
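The pipeline this abstract describes—encode the light field with a pupil-plane phase mask so the point-spread function (PSF) is nearly depth-invariant, then computationally reconstruct the image—can be illustrated with a minimal sketch. Here a plain Wiener deconvolution with a known Gaussian PSF stands in for DeepDOF's engineered mask and jointly optimized reconstruction network; all names and parameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Recover an image from a known PSF via Wiener filtering.

    A simple stand-in for a learned reconstruction step: given a (nearly)
    depth-invariant PSF, one deconvolution can be applied to every depth
    in the extended DOF instead of refocusing plane by plane.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + noise-to-signal ratio)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))

# Toy scene; a Gaussian PSF stands in for the engineered depth-invariant PSF
scene = np.zeros((64, 64))
scene[28:36, 28:36] = 1.0
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

Because the engineered PSF changes little with depth, a single reconstruction step of this form can serve the full extended DOF.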
Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with this research progress, they have entered many different fields and disciplines, some of which—such as the medical sector—require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanisms underlying the algorithms. Unfortunately, the black-box nature of DL remains unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretability methods suggested by different research works and categorize them. The different categories reflect different dimensions of interpretability research, from approaches that provide "obviously" interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) insights into interpretability will emerge with greater consideration of medical practice; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.
Deep learning on image denoising: An overview. Tian, Chunwei; Fei, Lunke; Zheng, Wenxian; et al. Neural Networks, vol. 131, November 2020. Journal article; peer-reviewed; open access.
Deep learning techniques have received much attention in the area of image denoising. However, there are substantial differences in the various types of deep learning methods dealing with image denoising. Specifically, discriminative learning based on deep learning can ably address the issue of Gaussian noise. Optimization models based on deep learning are effective in estimating the real noise. However, there has thus far been little related research to summarize the different deep learning techniques for image denoising. In this paper, we offer a comparative study of deep learning techniques in image denoising. We first classify the deep convolutional neural networks (CNNs) for additive white noisy images; the deep CNNs for real noisy images; the deep CNNs for blind denoising; and the deep CNNs for hybrid noisy images, which represent the combination of noisy, blurred and low-resolution images. Then, we analyze the motivations and principles of the different types of deep learning methods. Next, we compare the state-of-the-art methods on public denoising datasets in terms of quantitative and qualitative analyses. Finally, we point out some potential challenges and directions of future research.
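As a concrete anchor for the survey's setting: additive white Gaussian noise corrupts a clean image as y = x + n, and denoising methods are compared quantitatively with metrics such as PSNR. A minimal sketch follows; the local-averaging "denoiser" is purely illustrative (the CNNs surveyed above learn far stronger priors), and all names are assumptions:

```python
import numpy as np

def add_awgn(image, sigma, rng):
    """Degradation model y = x + n with additive white Gaussian noise."""
    return image + rng.normal(0.0, sigma, image.shape)

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio, the standard quantitative denoising metric."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.full((128, 128), 0.5)
noisy = add_awgn(clean, sigma=25 / 255, rng=rng)  # sigma=25 on a 0-255 scale
# Trivial 5-point local averaging already raises PSNR on this flat image
denoised = (noisy + np.roll(noisy, 1, 0) + np.roll(noisy, -1, 0)
            + np.roll(noisy, 1, 1) + np.roll(noisy, -1, 1)) / 5.0
```

Averaging five independent noise samples cuts the noise variance roughly fivefold, which is the effect the PSNR comparison makes visible.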
•This review covers diffusion MRI artifacts and preprocessing steps.
•Notable developments and new advances since the HCP are summarized.
•Practical considerations and future developments are discussed.
Diffusion MRI (dMRI) provides invaluable information for the study of tissue microstructure and brain connectivity, but suffers from a range of imaging artifacts that greatly challenge the analysis of results and their interpretability if not appropriately accounted for. This review will cover dMRI artifacts and preprocessing steps, some of which have not typically been considered in existing pipelines or reviews, or have only gained attention in recent years: brain/skull extraction, B-matrix incompatibilities w.r.t. the imaging data, signal drift, Gibbs ringing, noise distribution bias, denoising, between- and within-volume motion, eddy currents, outliers, susceptibility distortions, EPI Nyquist ghosts, gradient deviations, B1 bias fields, and spatial normalization. The focus will be on “what’s new” since the notable advances prior to and brought by the Human Connectome Project (HCP), as presented in the preceding issue on “Mapping the Connectome” in 2013. In addition to the development of novel strategies for dMRI preprocessing, exciting progress has been made in the availability of open-source tools and reproducible pipelines, databases and simulation tools for the evaluation of preprocessing steps, and automated quality control frameworks, amongst others. Finally, this review will offer practical considerations and our view on “what’s next” in dMRI preprocessing.
Tractography based on non-invasive diffusion imaging is central to the study of human brain connectivity. To date, the approach has not been systematically validated in ground truth studies. Based on a simulated human brain data set with ground truth tracts, we organized an open international tractography challenge, which resulted in 96 distinct submissions from 20 research groups. Here, we report the encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent). However, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups. Taken together, our results demonstrate and confirm fundamental ambiguities inherent in tract reconstruction based on orientation information alone, which need to be considered when interpreting tractography and connectivity results. Our approach provides a novel framework for estimating reliability of tractography and encourages innovation to address its current limitations.
This article reviews current limitations and future opportunities for the application of computer-aided detection (CAD) systems and artificial intelligence in breast imaging. Traditional CAD systems in mammography screening have followed a rules-based approach, incorporating domain knowledge into hand-crafted features before using classical machine learning techniques as a classifier. The first commercial CAD system, ImageChecker M1000, relies on computer vision techniques for pattern recognition. Unfortunately, CAD systems have been shown to adversely affect some radiologists' performance and increase recall rates. The Digital Mammography DREAM Challenge was a multidisciplinary collaboration that provided 640,000 mammography images for teams to help decrease false-positive rates in breast cancer screening. Winning solutions leveraged deep learning's (DL) automatic hierarchical feature learning capabilities and used convolutional neural networks. Start-ups Therapixel and Kheiron Medical Technologies are using DL for breast cancer screening. With the increasing use of digital breast tomosynthesis, specific artificial intelligence (AI)-CAD systems are emerging, including iCAD's PowerLook Tomo Detection and ScreenPoint Medical's Transpara. Other AI-CAD systems are focusing on breast diagnostic techniques such as ultrasound and magnetic resonance imaging (MRI). There is a gap in the market for contrast-enhanced spectral mammography AI-CAD tools. Clinical implementation of AI-CAD tools requires testing in scenarios mimicking real life to prove their usefulness in the clinical environment. This requires a large and representative dataset for testing and assessment of the reader's interaction with the tools. A cost-effectiveness assessment should be undertaken, with a large feasibility study carried out to ensure there are no unintended consequences.
AI-CAD systems should incorporate explainable AI in accordance with the European Union General Data Protection Regulation (GDPR).
Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN’s diagnostic performance to larger groups of dermatologists are lacking.
Google’s Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists’ diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN’s performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge.
In level-I, dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. More clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P=0.19) and specificity to 75.7% (±11.7%, P<0.05). The CNN ROC curve revealed a higher specificity of 82.5% when compared with dermatologists in level-I (71.3%, P<0.01) and level-II (75.7%, P<0.01) at their sensitivities of 86.6% and 88.9%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P<0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge.
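The figures reported above are standard binary-classification metrics. A small sketch of how sensitivity, specificity, and ROC AUC are computed from classifier scores; the labels and scores below are toy values for illustration, not study data:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count one half) — equal to the area under the ROC curve."""
    y_true = np.asarray(y_true, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y_true = [1, 1, 0, 0]            # 1 = malignant, 0 = benign (toy labels)
scores = [0.9, 0.4, 0.6, 0.1]    # hypothetical CNN malignancy scores
sens, spec = sensitivity_specificity(y_true, np.asarray(scores) >= 0.5)
auc = roc_auc(y_true, scores)
```

Sweeping the 0.5 threshold over all possible cut-offs traces the ROC curve from which the reported AUC values are read.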
For the first time we compared a CNN’s diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of their experience, physicians may benefit from assistance by a CNN’s image classification.
This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks_web/).
We develop and test a pupil function determination algorithm, termed embedded pupil function recovery (EPRY), which can be incorporated into the Fourier ptychographic microscopy (FPM) algorithm to recover both the Fourier spectrum of the sample and the pupil function of the imaging system simultaneously. This EPRY-FPM algorithm eliminates the requirement of the previous FPM algorithm for a priori knowledge of the aberration in the imaging system to reconstruct a high-quality image. We experimentally demonstrate the effectiveness of this algorithm by reconstructing high-resolution, large field-of-view images of biological samples. We also illustrate that the pupil function we retrieve can be used to study the spatially varying aberration of a large field-of-view imaging system. We believe that this algorithm adds more flexibility to FPM and can be a powerful tool for the characterization of an imaging system's aberration.
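The forward model that EPRY-FPM inverts can be sketched briefly: oblique LED illumination shifts the sample's Fourier spectrum, the pupil function (aperture plus aberrations) filters it, and only the intensity reaches the detector. EPRY alternates between updating the spectrum estimate and the pupil estimate from many such captures; the sketch below shows only the forward model, with illustrative sizes and an ideal circular pupil as assumptions:

```python
import numpy as np

def fpm_forward(obj, pupil, shift=(0, 0)):
    """One simulated FPM capture under one LED: tilted illumination shifts
    the object's Fourier spectrum, the pupil low-pass filters it (plus any
    aberration phase), and the camera records only the field's intensity."""
    spectrum = np.roll(np.fft.fft2(obj), shift, axis=(0, 1))
    field = np.fft.ifft2(spectrum * pupil)
    return np.abs(field) ** 2

n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
# Ideal circular aperture; EPRY would additionally recover an aberration
# phase multiplying this pupil
pupil = np.fft.ifftshift((xx**2 + yy**2 <= 12**2).astype(float))
obj = np.ones((n, n))
obj[20:44, 20:44] = 2.0
capture = fpm_forward(obj, pupil)  # one low-resolution intensity image
```

Stepping `shift` through the grid of LED angles yields the stack of low-resolution captures from which FPM stitches a high-resolution spectrum, and from which EPRY simultaneously estimates the pupil.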