At the RIKEN Nishina Center for Accelerator-Based Science, we developed a pepperpot emittance monitor using a method that can change the distance between the pepperpot mask and the screen. The accuracy can be improved by correctly identifying the beam's position at close range and then increasing the distance; however, if the distance is increased excessively, position matching becomes difficult. To solve this problem, we developed a method for tracking continuous changes on the screen using optical flow. Using this method, the distance between the pepperpot mask and the screen could be extended without plotting points in the wrong region of phase space, and the emittance measurement accuracy was improved by more than 10%. This development will enable beam transport simulations to be performed more accurately.
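The divergence reconstruction underlying a pepperpot monitor can be illustrated with a minimal NumPy sketch. The hole positions, spot centroids, and distance below are hypothetical, and this is not the authors' optical-flow implementation; it only shows why each screen spot must be paired with the correct mask hole, since a wrong pairing places the point in the wrong region of phase space:

```python
import numpy as np

# Hypothetical geometry: hole positions on the pepperpot mask (mm).
hole_x = np.array([-2.0, 0.0, 2.0])
L = 50.0  # mask-to-screen distance (mm); larger L improves angular resolution

# Spot centroids observed on the screen (mm), one per hole.
spot_x = np.array([-2.4, 0.1, 2.6])

# Divergence of each beamlet: x' = (X_screen - x_hole) / L  (rad).
xp = (spot_x - hole_x) / L

# If a spot is matched to the wrong hole, the phase-space point is wrong:
wrong_xp = (spot_x[2] - hole_x[1]) / L  # spot 2 wrongly paired with hole 1
```

As L grows, the correct spot drifts farther from its hole and spots from neighboring holes can overlap, which is the ambiguity the optical-flow tracking in the abstract resolves by following the spots continuously as the distance is increased.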
This review explores how imaging techniques are being developed, with a focus on deployment for crop monitoring. Imaging applications are discussed in relation to both field- and glasshouse-based plants, and techniques are grouped under 'healthy and diseased plant classification', with an emphasis on classification accuracy, early detection of stress, and disease severity. A central focus of the review is hyperspectral imaging and how it is being utilised to extract additional information about plant health and to predict the onset of disease. A summary of techniques used to detect biotic and abiotic stress in plants is presented, including the level of accuracy associated with each method.
Fine-Grained Image Analysis With Deep Learning: A Survey
Wei, Xiu-Shen; Song, Yi-Zhe; Aodha, Oisin Mac ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-12-01, Volume 44, Issue 12
Journal Article. Peer-reviewed. Open access.
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA targets the analysis of visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained image analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep learning powered FGIA. In this paper we present a systematic survey of these advances, where we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas - fine-grained image recognition and fine-grained image retrieval. In addition, we review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems that need further exploration by the community.
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation for introducing domain adaptation techniques to tackle domain heterogeneity issues in medical image analysis. Then we provide a review of recent domain adaptation models across various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of these is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this energetic research field.
We propose Neural Image Compression (NIC), a two-step method to build convolutional neural networks for gigapixel image analysis solely using weak image-level labels. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, retaining high-level information while suppressing pixel-level noise. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations. We compared several encoding strategies, namely reconstruction error minimization, contrastive training and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. We found that NIC can successfully exploit visual cues associated with image-level labels, integrating both global and local visual information. Furthermore, we visualized the regions of the input gigapixel images that the CNN attended to, and confirmed that they overlapped with annotations from human experts.
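The first NIC step (tiling a huge image into patches and encoding each one) can be sketched in NumPy. The encoder here is only a placeholder, a fixed random projection standing in for the trained unsupervised network, and the image and patch sizes are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained unsupervised encoder: a fixed random
# projection from a flattened 32x32 patch to an M-dimensional code.
M = 8
patch_size = 32
encoder_W = rng.standard_normal((patch_size * patch_size, M)) / patch_size

def compress(image):
    """Step 1 of NIC: tile the image into non-overlapping patches and encode
    each patch, yielding a much smaller array of shape (rows, cols, M)."""
    h, w = image.shape
    rows, cols = h // patch_size, w // patch_size
    codes = np.empty((rows, cols, M))
    for i in range(rows):
        for j in range(cols):
            patch = image[i * patch_size:(i + 1) * patch_size,
                          j * patch_size:(j + 1) * patch_size]
            codes[i, j] = patch.reshape(-1) @ encoder_W
    return codes

# A toy "gigapixel" image (256x256 here) compresses to an 8x8 grid of
# M-dimensional codes; step 2 would train a small CNN on this grid using
# only the image-level label.
image = rng.standard_normal((256, 256))
codes = compress(image)
```

The key design point the abstract relies on is that the compressed grid preserves the spatial layout of the slide, so the second-stage CNN can still combine global and local visual information.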
This study aimed to evaluate a novel methodology to identify metastatic SCCHN patients who could derive benefit from buparlisib treatment. The analysis focused on image analysis of H&E images to select features associated with improved clinical benefit from paclitaxel + buparlisib.
BERIL-1 (NCT01852292) was a multicenter, randomized, double-blind, placebo-controlled phase II study evaluating treatment with either buparlisib + paclitaxel or placebo + paclitaxel in adult patients with histologically or cytologically confirmed recurrent or metastatic SCCHN. H&E-stained whole-slide images (WSIs) were scanned at 40x, and a model was developed to identify features of the tumor and the tumor immune microenvironment through digital pathology. We then evaluated spatial histological biomarkers from 145 subjects (73 in the treatment and 72 in the placebo arm) associated with improvement in the efficacy endpoints of progression-free survival (PFS) and overall survival (OS) within and between the treatment and control arms.
A deep learning model was developed that can accurately identify and classify tumor, necrotic and stromal areas as well as fibroblasts, endothelial cells and immune cells (plasma cells, lymphocytes, granulocytes) from H&E images. The accuracy of this model was validated against the ground truth of human pathology analysis of the same images. This analysis demonstrated that a >10% infiltration of TILs (p=0.00058, HR=0.195) as well as the heterogeneity of cells in the TME (p=0.015, HR=0.53) are both associated with a survival advantage in patients receiving the combination treatment compared to placebo. Moreover, we discovered that the proximity of granulocytes to tumor cells (p=0.00006, HR=0.32) is associated with improved survival in patients treated with buparlisib + paclitaxel combination therapy.
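One of the spatial biomarkers described above, granulocyte-to-tumor-cell proximity, can be sketched from cell centroids with plain NumPy. The coordinates below are randomly generated placeholders, the summary statistic (median nearest-tumor-cell distance) is an assumed choice, and this is not the study's actual biomarker definition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cell centroids (microns) as a cell-classification model
# might emit them from an H&E whole-slide image.
tumor_xy = rng.uniform(0, 1000, size=(50, 2))
granulocyte_xy = rng.uniform(0, 1000, size=(20, 2))

# Pairwise distances: d[i, j] = distance from granulocyte i to tumor cell j.
d = np.linalg.norm(granulocyte_xy[:, None, :] - tumor_xy[None, :, :], axis=-1)

# For each granulocyte, the distance to its nearest tumor cell; the median
# of these distances is one simple "proximity" spatial biomarker that could
# then be tested for association with survival.
nearest = d.min(axis=1)
proximity_score = float(np.median(nearest))
```

For real slides with millions of cells, a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix, but the biomarker logic stays the same.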
This analysis highlights a novel approach, utilizing the common and cost-effective H&E biomarker to identify metastatic SCCHN subjects who could derive therapeutic benefit from the combination of buparlisib + paclitaxel. Further analysis will be conducted to determine whether this method provides a better prediction of clinical benefit than routine pathology evaluation. This approach also highlights interesting and novel biological observations that underscore the mechanisms of this therapeutic combination and could lead to studies evaluating novel therapeutic combinations. The results of this analysis can be expanded to the ongoing phase III BURAN study to further optimize and validate this method of identifying subjects for therapeutic intervention, providing a fast and cost-effective way for clinicians to understand which subjects would benefit from treatment with buparlisib.
•This paper surveys over 200 papers using explainable artificial intelligence (XAI) in deep learning-based medical image analysis.
•The surveyed papers are classified according to an XAI framework.
•Trends and future perspectives for XAI in medical image analysis are identified.
With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision-making areas such as medical image analysis. This survey presents an overview of explainable artificial intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook on future opportunities for XAI in medical image analysis.
•A comprehensive review of state-of-the-art deep learning (DL) approaches is presented in the context of histopathological image analysis.
•This survey focuses on the methodological aspects of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods.
•An overview of deep learning-based survival models applicable to disease-specific prognosis tasks is also provided.
•Finally, several existing open datasets are summarized, and critical challenges and limitations of current deep learning approaches are highlighted, along with possible avenues for future research.
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From a survey of over 130 papers, we review the field's progress based on the methodological aspects of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning-based survival models that are applicable to disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations of current deep learning approaches, along with possible avenues for future research.
This work focuses on image anomaly detection by leveraging only normal images in the training phase. Most previous methods tackle anomaly detection by reconstructing the input images with an autoencoder (AE)-based model, under the assumption that reconstruction errors are small for normal images and large for abnormal ones. However, these AE-based methods sometimes reconstruct even the anomalies well and are consequently less sensitive to them. To address this issue, we propose to reconstruct the image by leveraging the structure-texture correspondence. Specifically, we observe that, for normal images, the texture can usually be inferred from its corresponding structure (e.g., the blood vessels in a fundus image or the structured anatomy in an optical coherence tomography image), while it is hard to infer the texture from a destroyed structure in abnormal images. Therefore, a structure-texture correspondence memory (STCM) module is proposed to reconstruct image texture from its structure, where a memory mechanism is used to characterize the mapping from a normal structure to its corresponding normal texture. As the correspondence between destroyed structure and texture cannot be characterized by the memory, abnormal images yield a larger reconstruction error, facilitating anomaly detection. In this work, we utilize two kinds of complementary structures (i.e., the semantic structure with human-labeled category information and the low-level structure with abundant details), which are extracted by two structure extractors. The reconstructions from the two kinds of structures are fused together by a learned attention weight to produce the final reconstructed image. We further feed the reconstructed image into the two aforementioned structure extractors to extract structures.
On the one hand, constraining the consistency between the structures extracted from the original input and those from the reconstructed image regularizes the network training; on the other hand, the error between these two sets of structures can also be used as a supplementary measure to identify anomalies. Extensive experiments validate the effectiveness of our method for image anomaly detection on both industrial inspection images and medical images.
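The memory mechanism at the core of the STCM idea can be sketched in NumPy: the structure query addresses a bank of stored normal structure-texture pairs, and the texture is reconstructed as a weighted combination of the stored normal textures. The feature dimensions, memory contents, and softmax addressing below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical memory of K normal structure->texture feature pairs.
K, D = 16, 4
mem_structure = rng.standard_normal((K, D))
mem_texture = rng.standard_normal((K, D))

def softmax(z):
    z = z - z.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def reconstruct_texture(structure):
    """Address the memory by similarity to the query structure and return a
    weighted combination of the stored normal textures (the STCM idea)."""
    weights = softmax(mem_structure @ structure)
    return weights @ mem_texture

def anomaly_score(structure, texture):
    """Reconstruction error: large when the observed structure-texture
    pairing cannot be explained by the normal patterns held in memory."""
    return float(np.linalg.norm(texture - reconstruct_texture(structure)))

# A pairing consistent with memory would typically score lower than a
# random structure-texture pairing.
normal_score = anomaly_score(mem_structure[0], mem_texture[0])
random_score = anomaly_score(mem_structure[0], rng.standard_normal(D))
```

Because the memory only stores normal correspondences, a destroyed structure cannot address a texture that matches the anomalous input, which is exactly why the reconstruction error serves as the anomaly signal in the abstract.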