At the RIKEN Nishina Center for Accelerator-Based Science, we developed a pepperpot emittance monitor in which the distance between the pepperpot mask and the screen can be varied. The accuracy can be improved by correctly identifying the beam's position at close range and then increasing the distance; however, if the distance is increased excessively, position matching becomes difficult. To solve this problem, we developed a method for tracking continuous changes on the screen using optical flow. With this method, the distance between the pepperpot mask and the screen could be extended without plotting points in the wrong region of phase space, and the emittance measurement accuracy was improved by more than 10%. This development will enable beam transport simulations to be performed more accurately.
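The core idea of the tracking step can be illustrated with a toy sketch: follow a beamlet spot across successive screen frames in small increments, so that each spot stays matched to the correct mask hole as the mask-to-screen distance grows. Here a brute-force template match stands in for the dense optical flow used in the paper; all names, sizes, and the 2-pixel drift per step are illustrative assumptions, not the monitor's actual geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_frame(center, size=64, sigma=2.0):
    """Render one Gaussian beamlet spot on a noisy screen image."""
    y, x = np.mgrid[0:size, 0:size]
    spot = np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma**2))
    return spot + 0.01 * rng.random((size, size))

def track(prev_frame, next_frame, prev_pos, search=3):
    """Find the small integer shift that best aligns a patch around prev_pos."""
    x0, y0 = prev_pos
    patch = prev_frame[y0 - 4:y0 + 5, x0 - 4:x0 + 5]
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = next_frame[y0 + dy - 4:y0 + dy + 5, x0 + dx - 4:x0 + dx + 5]
            score = float((patch * cand).sum())  # cross-correlation score
            if score > best_score:
                best, best_score = (dx, dy), score
    return (x0 + best[0], y0 + best[1])

# The spot drifts 2 px per step as the screen moves away; tracking the
# frames incrementally keeps the spot identified with the right hole.
pos = (20, 30)
frames = [make_frame((20 + 2 * k, 30)) for k in range(4)]
for prev, nxt in zip(frames, frames[1:]):
    pos = track(prev, nxt, pos)

print(pos)
```

Tracking frame-to-frame keeps each per-step shift small enough for the search window, which is exactly why incremental tracking allows a larger total mask-to-screen distance than a single large jump would.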
This review explores how imaging techniques are being developed, with a focus on their deployment for crop monitoring. Imaging applications are discussed in relation to both field- and glasshouse-based plants, and techniques are grouped under 'healthy and diseased plant classification' with an emphasis on classification accuracy, early detection of stress, and disease severity. A central focus of the review is hyperspectral imaging: how it is being utilised to extract additional information about plant health, and its ability to predict the onset of disease. A summary of techniques used to detect biotic and abiotic stress in plants is presented, including the level of accuracy associated with each method.
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances in domain adaptation methods for medical image analysis. We first present the motivation for introducing domain adaptation techniques to tackle domain heterogeneity issues in medical image analysis. Then we provide a review of recent domain adaptation models across various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised, and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges, and future directions of this energetic research field.
We propose Neural Image Compression (NIC), a two-step method to build convolutional neural networks for gigapixel image analysis using only weak image-level labels. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, retaining high-level information while suppressing pixel-level noise. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations. We compared several encoding strategies, namely reconstruction error minimization, contrastive training, and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. We found that NIC can successfully exploit visual cues associated with image-level labels, integrating both global and local visual information. Furthermore, we visualized the regions of the input gigapixel images that the CNN attended to, and confirmed that they overlapped with annotations from human experts.
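The two-step pipeline described above can be sketched in miniature. In this toy version, a fixed random projection stands in for the unsupervised patch encoder (which the paper trains via reconstruction, contrastive, or adversarial objectives), and a mean-pooled linear scorer stands in for the downstream CNN; all sizes and names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "gigapixel" image stands in as an 8x8 grid of 32x32 grayscale patches.
patches = rng.random((8, 8, 32, 32))

# Step 1: compress each patch to a low-dimensional embedding.
# A fixed random projection plays the role of the trained encoder.
embed_dim = 16
projection = rng.standard_normal((32 * 32, embed_dim)) / np.sqrt(32 * 32)

def encode(patch):
    return patch.reshape(-1) @ projection

compressed = np.stack([
    np.stack([encode(patches[i, j]) for j in range(8)])
    for i in range(8)
])  # shape (8, 8, 16): a small "image" whose channels are embedding dims

# Step 2: a classifier consumes the compressed grid instead of raw pixels,
# so only one image-level label is needed per gigapixel image.
weights = rng.standard_normal(embed_dim)
score = float(compressed.mean(axis=(0, 1)) @ weights)
label = int(score > 0)

print(compressed.shape, label)
```

The point of the sketch is the shape change: the downstream model never sees raw pixels, only a spatially arranged grid of patch embeddings small enough to fit in memory and to train against a single weak label.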
This study aimed to evaluate a novel methodology to identify metastatic SCCHN patients who could derive benefit from buparlisib treatment. The analysis focused on image analysis of H&E images to select features associated with improved clinical benefit from paclitaxel + buparlisib.
BERIL-1 (NCT01852292) was a multicenter, randomized, double-blind, placebo-controlled phase II study evaluating treatment with either buparlisib + paclitaxel or placebo + paclitaxel in adult patients with histologically or cytologically confirmed recurrent or metastatic SCCHN. H&E-stained whole slide images (WSI) were scanned at 40x, and a model was developed to identify features of the tumor and the tumor immune microenvironment through digital pathology. We then evaluated spatial histological biomarkers from 145 subjects (73 in the treatment and 72 in the placebo arm) associated with improvement in the efficacy endpoints of progression-free survival (PFS) and overall survival (OS), within and between the treatment and control arms.
A deep learning model was developed that can accurately identify and classify tumor, necrotic, and stromal areas, as well as fibroblast, endothelial, and immune cells (plasma cells, lymphocytes, granulocytes), from H&E images. The accuracy of this model was validated against the ground truth of human pathology analysis of the same images. This analysis demonstrated that >10% infiltration of TILs (p=0.00058, HR=0.195) as well as the heterogeneity of cells in the TME (p=0.015, HR=0.53) are both associated with a survival advantage in patients receiving the combination treatment when compared to placebo. Moreover, we discovered that the proximity of granulocytes to tumor cells (p=0.00006, HR=0.32) is associated with improved survival in patients treated with buparlisib + paclitaxel combination therapy.
This analysis highlights a novel approach, utilizing the common and cost-effective H&E biomarker to identify metastatic SCCHN subjects who could derive therapeutic benefit from the combination of buparlisib + paclitaxel. Further analysis will be conducted to determine whether this method provides a better prediction of clinical benefit than standard pathology evaluation. This approach also highlights interesting and novel biological observations that underscore the mechanisms of this therapeutic combination and could lead to studies evaluating novel therapeutic combinations. The results of this analysis can be extended to the ongoing phase III BURAN study to further optimize and validate this method of identifying subjects for therapeutic intervention, providing a fast and cost-effective way for clinicians to understand which subjects would benefit from treatment with buparlisib.
•This paper surveys over 200 papers using explainable artificial intelligence (XAI) in deep learning-based medical image analysis.
•The surveyed papers are classified according to an XAI framework.
•Trends and future perspectives for XAI in medical image analysis are identified.
With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of explainable artificial intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA is to analyze visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained image analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep learning-powered FGIA. In this paper we present a systematic survey of these advances, where we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas: fine-grained image recognition and fine-grained image retrieval. In addition, we review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems that need further exploration from the community.
•Active learning: to choose the best data to annotate for optimal model performance.
•Interpretation + refinement: feedback for a prediction, meaningful ways to respond.
•Practical considerations: full-scale applications and considerations for deployment.
•Related areas: evolving research fields to benefit human-in-the-loop computing.
Fully automatic deep learning has become the state-of-the-art technique for many tasks, including image acquisition, analysis, and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention, and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end-user in any deep learning-enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning-enabled diagnostic applications, and focus on techniques that retain significant input from a human end-user. Human-in-the-loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with Model Outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical Considerations - developing full-scale applications and the key considerations that need to be made before deployment; (4) Future Prospects and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.