Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to locate in the vicinity of the centers of nuclei. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and potentially lead to a better understanding of cancer.
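The core idea of regressing a spatially constrained probability map can be pictured as a training target whose values peak at annotated nucleus centres and vanish beyond a small radius. The numpy sketch below is illustrative only; the function name, decay profile, and radius are assumptions, not the published SC-CNN formulation.

```python
import numpy as np

def proximity_map(shape, centres, radius=4.0):
    """Build a detection-style regression target: each pixel's value is
    high near a nucleus centre and exactly zero beyond `radius`.
    (Illustrative sketch; SC-CNN's actual target may differ.)"""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.zeros(shape, dtype=np.float32)
    for cy, cx in centres:
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        # value decays with squared distance to the centre ...
        peak = 1.0 / (1.0 + d2 / 2.0)
        # ... and is spatially constrained to the centre's vicinity
        peak[d2 > radius ** 2] = 0.0
        target = np.maximum(target, peak)
    return target

m = proximity_map((9, 9), [(4, 4)])
print(m[4, 4], m[0, 0])  # peak of 1.0 at the centre, zero far away
```

Taking a per-pixel maximum over nuclei keeps nearby peaks from summing, so each local maximum of the map still marks a single candidate centre.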
Stain colour estimation is a prominent factor in the analysis pipeline of most histology image processing algorithms, and a reliable and efficient stain colour deconvolution approach is fundamental for robust algorithms. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method with recent state-of-the-art methods and to demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.
•A network targeted at simultaneous segmentation and classification of nuclei.
•Introduce horizontal and vertical distance maps to separate clustered nuclei.
•An interpretable evaluation framework that quantifies nuclear segmentation performance.
•A new dataset of 24,319 exhaustively annotated nuclei with associated class labels.
Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology work-flow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance the network predicts the type of nucleus via a dedicated up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
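The horizontal and vertical distance maps can be illustrated with a small numpy sketch: for each labelled instance, pixels store their signed offset from the instance's centre of mass, normalised per instance. This is a simplified rendering of the idea, not the reference implementation.

```python
import numpy as np

def hv_maps(inst_map):
    """Compute horizontal/vertical distance maps in the spirit described
    above: for every labelled nucleus in `inst_map` (0 = background),
    each pixel stores its signed offset to the instance's centre of
    mass, scaled so the instance spans [-1, 1] along each axis.
    (Simplified sketch, not the published implementation.)"""
    h_map = np.zeros(inst_map.shape, dtype=np.float32)
    v_map = np.zeros(inst_map.shape, dtype=np.float32)
    for inst_id in np.unique(inst_map):
        if inst_id == 0:
            continue
        ys, xs = np.nonzero(inst_map == inst_id)
        cy, cx = ys.mean(), xs.mean()
        dy, dx = ys - cy, xs - cx
        # normalise per instance; guard against single-pixel instances
        h_map[ys, xs] = dx / max(np.abs(dx).max(), 1.0)
        v_map[ys, xs] = dy / max(np.abs(dy).max(), 1.0)
    return h_map, v_map
```

Because the maps flip sign across each centre of mass, gradients of these maps change sharply at the boundary between two touching nuclei, which is what makes them useful for separating clustered instances.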
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a plant affected by disease is known to be influenced by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that, by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission.
Determining the status of molecular pathways and key mutations in colorectal cancer is crucial for optimal therapeutic decision making. We therefore aimed to develop a novel deep learning pipeline to predict the status of key molecular pathways and mutations from whole-slide images of haematoxylin and eosin-stained colorectal cancer slides as an alternative to current tests.
In this retrospective study, we used 502 diagnostic slides of primary colorectal tumours from 499 patients in The Cancer Genome Atlas colon and rectal cancer (TCGA-CRC-DX) cohort and developed a weakly supervised deep learning framework involving three separate convolutional neural network models. Whole-slide images were divided into equally sized tiles and model 1 (ResNet18) separated tumour tiles from non-tumour tiles. These tumour tiles were input into model 2 (adapted ResNet34), trained by iterative draw and rank sampling to calculate a prediction score for each tile that represented the likelihood of a tile belonging to the molecular labels of high mutation density (vs low mutation density), microsatellite instability (vs microsatellite stability), chromosomal instability (vs genomic stability), CpG island methylator phenotype (CIMP)-high (vs CIMP-low), BRAFmut (vs BRAFWT), TP53mut (vs TP53WT), and KRASWT (vs KRASmut). These scores were used to identify the top-ranked tiles from each slide, and model 3 (HoVer-Net) segmented and classified the different types of cell nuclei in these tiles. We calculated the area under the convex hull of the receiver operating characteristic curve (AUROC) as a model performance measure and compared our results with those of previously published methods.
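The AUROC used as the performance measure here has a rank-based reading: it is the probability that a randomly chosen positive slide is scored above a randomly chosen negative one (ties counting half). A minimal sketch of that standard equivalence, not specific to this pipeline:

```python
import numpy as np

def auroc(y_true, y_score):
    """Rank-based AUROC via the Mann-Whitney U statistic: the fraction
    of positive/negative pairs in which the positive is scored higher,
    with ties counted as half."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Note that the paper reports the area under the *convex hull* of the ROC curve, which can only be greater than or equal to the plain rank-based value computed here.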
Our iterative draw and rank sampling method yielded mean AUROCs for the prediction of hypermutation (0·81 [SD 0·03] vs 0·71), microsatellite instability (0·86 [SD 0·04] vs 0·74), chromosomal instability (0·83 [SD 0·02] vs 0·73), BRAFmut (0·79 [SD 0·01] vs 0·66), and TP53mut (0·73 [SD 0·02] vs 0·64) in the TCGA-CRC-DX cohort that were higher than those from previously published methods, and an AUROC for KRASmut that was similar to previously reported methods (0·60 [SD 0·04] vs 0·60). Mean AUROC for predicting CIMP-high status was 0·79 (SD 0·05). We found high proportions of tumour-infiltrating lymphocytes and necrotic tumour cells to be associated with microsatellite instability, and high proportions of tumour-infiltrating lymphocytes and a low proportion of necrotic tumour cells to be associated with hypermutation.
After large-scale validation, our proposed algorithm for predicting clinically important mutations and molecular pathways, such as microsatellite instability, in colorectal cancer could be used to stratify patients for targeted therapies with potentially lower costs and quicker turnaround times than sequencing-based or immunohistochemistry-based approaches.
The UK Medical Research Council.
•A unified deep learning framework for segmentation of objects (cell nuclei, cells, and multi-cellular objects such as glandular structures) in two main types of microscopy images: fluorescence and histology.
•The proposed Micro-Net is aimed at better object localization in the face of varying intensities and is robust to noise.
•Detailed experimentation & comparative evaluation on publicly available data sets and a new image dataset that is made public with this paper.
•Demonstration of robustness of the algorithm to high levels of noise.
Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolutional neural network (CNN)-based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context and generates the output using multi-resolution deconvolution filters. The extra convolutional layers which bypass the max-pooling operation allow the network to train for variable input intensities and object size and make it robust to noisy data. We compare our results on publicly available data sets and show that the proposed network outperforms recent deep learning algorithms.
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our developed Cerberus model on a huge amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
•We present Cerberus, a multi-task model for identification of various tissue regions.
•Cerberus uses a novel sampling mechanism to incorporate data from multiple sources.
•Cerberus surpasses the performance of single-task competitors.
•We process 599 colorectal TCGA slides and make the results publicly available.
•We release the pretrained Cerberus backbone that can be used for transfer learning.
Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNN for constructing holistic WSI-level representations. Building on recent findings about the internal working of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term as the Handcrafted Histological Transformer or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
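One way to picture a holistic WSI-level representation built from mined prototypical patterns is a normalised histogram of nearest-prototype assignments over patch features. The sketch below assumes k-means-style centroids fitted on patches from many slides; it is a deliberate simplification, not the actual H2T construction.

```python
import numpy as np

def wsi_representation(patch_feats, prototypes):
    """Summarise one WSI as a bag of prototype assignments.

    patch_feats : (n_patches, d) array of deep features for the slide.
    prototypes  : (k, d) array of prototypical patterns (assumed here to
                  be cluster centroids mined from a large slide corpus).
    Returns the normalised histogram of nearest-prototype counts, a
    fixed-length, interpretable slide-level vector."""
    # squared Euclidean distance from every patch to every prototype
    d2 = ((patch_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                  # nearest prototype per patch
    hist = np.bincount(assign, minlength=len(prototypes)).astype(float)
    return hist / hist.sum()
```

A representation of this kind is transparent by construction: each coordinate of the slide vector can be traced back to a concrete visual pattern, which is the kind of interpretability the abstract argues for.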
•A novel paradigm for deriving holistic WSI-level representations in an unsupervised manner.
•A novel approach to generating prototypical patterns mined from WSIs, and ways of using them.
•A handcrafted framework that is as predictive as the Transformer model on WSI-based cancer subtype classifications.
•Extensive set of results verified on more than 5000 WSIs.