The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make an accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. Such areas are typically identified by visual inspection, a subjective evaluation that causes high intra- and inter-observer variability and is both tedious and time-consuming. The aim of this study is to develop deep learning based software, called DeepFocus, which can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open source library that exploits data flow graphs for efficient numerical computation. DeepFocus was trained by using 16 different H&E- and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus achieved an average accuracy of 93.2% (± 9.6%), a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, hence improving the overall image quality for pathologists and image analysis algorithms.
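A minimal sketch of the patch-level idea behind this kind of blur detector is given below: the whole slide image is tiled into small patches and a convolutional network classifies each patch as in focus or out of focus. The patch size, layer configuration, and training setup are illustrative assumptions and do not reproduce the published DeepFocus architecture.

# Hypothetical sketch: a small patch-level CNN for in-focus vs. out-of-focus
# classification, in the spirit of DeepFocus. The sizes below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_blur_classifier(patch_size=64):
    """Binary classifier: 0 = in focus, 1 = blurry (assumed label convention)."""
    model = models.Sequential([
        layers.Input(shape=(patch_size, patch_size, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# A whole slide image would be tiled into patches; each patch is scored and
# patches above a probability threshold are flagged as out of focus.
model = build_blur_classifier()
model.summary()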
The Pathology Image Informatics Platform (PIIP) is an NCI/NIH-sponsored project intended for managing, annotating, sharing, and quantitatively analyzing digital pathology imaging data. It expands on an existing, freely available pathology image viewer, Sedeen. The goal of this project is to develop and embed some commonly used image analysis applications into the Sedeen viewer to create a freely available resource for the digital pathology and cancer research communities. Thus far, new plug-ins have been developed and incorporated into the platform for out-of-focus detection, region of interest transformation, and IHC slide analysis. Our biomarker quantification and nuclear segmentation algorithms, written in MATLAB, have also been integrated into the viewer. This article describes the viewing software and the mechanism for extending its functionality with plug-ins, brief descriptions of which are provided as examples to guide users who want to use this platform. PIIP project materials, including a video describing its usage and applications, and links for the Sedeen Viewer, plug-ins, and user manuals are freely available through the project web page: http://pathiip.org
The Ki67 Index has been extensively studied as a prognostic biomarker in breast cancer. However, its clinical adoption is largely hampered by the lack of a standardized method to assess Ki67, which limits inter-laboratory reproducibility. It is important to standardize the computation of the Ki67 Index before it can be effectively used in clinical practice.
In this study, we develop a systematic approach towards standardization of the Ki67 Index. We first create the ground truth, consisting of tumor-positive and tumor-negative nuclei, by registering adjacent breast tissue sections stained with Ki67 and H&E. The registration is followed by segmentation of positive and negative nuclei within tumor regions of the Ki67 images. The true Ki67 Index is then approximated with a linear model of the ratio of the area of positive nuclei to the total area of tumor nuclei.
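A minimal sketch of this area-based approximation is given below, assuming positive and negative tumor nuclei have already been segmented into binary masks. The helper names and the use of scikit-learn's LinearRegression are illustrative choices, not the published pipeline.

# Illustrative sketch: estimate the Ki67 Index from the segmented areas of
# positive and negative tumor nuclei with a linear model fitted on images
# whose true index is known from registered ground truth.
import numpy as np
from sklearn.linear_model import LinearRegression

def area_fraction(positive_mask: np.ndarray, negative_mask: np.ndarray) -> float:
    """Fraction of tumor-nucleus area that is Ki67-positive."""
    positive_area = positive_mask.sum()
    total_area = positive_area + negative_mask.sum()
    return positive_area / total_area if total_area > 0 else 0.0

# Fit the linear model on training images (toy values shown here).
fractions = np.array([[0.12], [0.35], [0.58], [0.74]])
true_indices = np.array([10.0, 33.0, 55.0, 72.0])
model = LinearRegression().fit(fractions, true_indices)

# Predict the Ki67 Index of a new image from its positive/total area fraction.
print(model.predict([[0.45]]))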
When tested on 75 images of Ki67-stained breast cancer biopsies, the proposed method resulted in an average root mean square error of 3.34. In comparison, an expert pathologist's estimates had an average root mean square error of 9.98, and an existing automated approach produced an average root mean square error of 5.64.
We show that it is possible to approximate the true Ki67 Index accurately without detecting individual nuclei, and we also statistically demonstrate the weaknesses of commonly adopted approaches that use tumor and non-tumor regions together while compensating for the latter with higher-order approximations.
Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps, both positive and negative, from real whole slide images, and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists' estimates and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using this function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
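The second step, placing pre-extracted nucleus patches onto a blank canvas, can be sketched roughly as below. The random placement, the lack of overlap handling, and the toy patch are illustrative assumptions rather than the actual phantom-generation procedure.

# Minimal sketch of the phantom-generation idea: paste pre-extracted nucleus
# patches (with binary masks) onto a background canvas at random positions.
import numpy as np

rng = np.random.default_rng(0)

def place_patches(canvas: np.ndarray, patches, masks, n_place: int) -> np.ndarray:
    """Paste n_place randomly chosen patches onto canvas where the mask is set."""
    h, w = canvas.shape[:2]
    for _ in range(n_place):
        idx = rng.integers(len(patches))
        patch, mask = patches[idx], masks[idx]
        ph, pw = patch.shape[:2]
        y = rng.integers(0, h - ph)
        x = rng.integers(0, w - pw)
        region = canvas[y:y + ph, x:x + pw]
        region[mask] = patch[mask]          # copy only nucleus pixels
    return canvas

# Toy usage: a white canvas and a single fake 'nucleus' patch.
canvas = np.full((512, 512, 3), 255, dtype=np.uint8)
patch = np.zeros((20, 20, 3), dtype=np.uint8)    # dark square as a stand-in nucleus
mask = np.ones((20, 20), dtype=bool)
phantom = place_patches(canvas, [patch], [mask], n_place=50)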
Deep Learning for Medical Image Analysis
Senaras, Caglar; Gurcan, Metin Nafi
Journal of Pathology Informatics, 01/2018, Volume 9, Issue 1
Book Review, Journal Article
Peer-reviewed
Open access
While these chapters provide quite a comprehensive look at applications of neural networks, the coverage would have been improved by including some recent fully convolutional network-based studies. Because these types of networks are more suited to semantic segmentation, the algorithms may work faster than traditional CNN solutions. The chapters in the last two sections demonstrate many diverse applications of deep learning for biomedical images: using two unregistered images to classify mammograms; applying transfer learning for chest radiology-pathology categorization; and showing the predictive power in identifying future disease progression for Alzheimer's disease. There are many ways of visualizing deep learning networks, and these methods may help researchers better understand how a trained network functions.
This paper introduces a new approach for the automated detection of buildings from monocular very high resolution (VHR) optical satellite images. First, we investigate the shadow evidence to focus on building regions. To do that, we propose a new fuzzy landscape generation approach to model the directional spatial relationship between buildings and their shadows. Once all landscapes are collected, a pruning process is applied to eliminate the landscapes that may arise from non-building objects. The final building regions are detected by the GrabCut partitioning approach. In this paper, the inputs required by GrabCut are automatically extracted from the previously determined shadow and landscape regions, so that the approach becomes fully automated for the detection of buildings. Extensive experiments performed on 20 test sites selected from a set of QuickBird and GeoEye-1 VHR images showed that the proposed approach accurately detects buildings with arbitrary shapes and sizes in complex environments. The tests also revealed that even under challenging environmental and illumination conditions, reasonable building detection performance could be achieved by the proposed approach.
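To illustrate the final segmentation step, a rough sketch of seeding OpenCV's GrabCut from a mask (instead of a user-drawn rectangle) is given below. The way the seed mask is built from the fuzzy landscape here is an assumption for illustration only.

# Hedged sketch of the GrabCut step: initialize the segmentation with a mask
# derived from previously detected shadow/landscape evidence.
import cv2
import numpy as np

def segment_buildings(image_bgr: np.ndarray, landscape_mask: np.ndarray) -> np.ndarray:
    """Return a binary building mask given a probable-foreground seed mask."""
    mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[landscape_mask > 0] = cv2.GC_PR_FGD   # fuzzy landscape -> probable building

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_MASK)      # 5 iterations, mask initialization

    building = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(building, 255, 0).astype(np.uint8)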
With the availability of high-resolution commercial satellite images, automated analysis and object extraction have become an even more important topic in remote sensing. As shadows cover a significant portion of an image, they play an important role in automated analysis. While they degrade the performance of applications such as image registration, shadow is an important cue for information such as the presence of man-made structures. In this article, a shadow detection algorithm that makes use of near-infrared information in combination with the RGB bands is introduced. The algorithm is then applied in an automated building detection application.
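As a loose illustration of why the near-infrared band helps, the sketch below thresholds a combined darkness score computed from the visible and NIR bands; the specific rule and threshold are assumptions, not the algorithm described in the article.

# Illustrative sketch only: shadows tend to be dark in both the visible bands
# and the near-infrared band, so a simple rule thresholds a combined score.
import numpy as np

def shadow_mask(rgb: np.ndarray, nir: np.ndarray, percentile: float = 20.0) -> np.ndarray:
    """Return a boolean mask of candidate shadow pixels.

    rgb: float array in [0, 1], shape (H, W, 3); nir: float array in [0, 1], shape (H, W).
    """
    brightness = rgb.mean(axis=2)
    darkness = (1.0 - brightness) * (1.0 - nir)        # dark in visible AND in NIR
    threshold = np.percentile(darkness, 100.0 - percentile)
    return darkness >= threshold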
Accurate identification of crop phenology timing is crucial for agriculture. While remote sensing tracks vegetation changes, linking these to ground-measured crop growth stages remains challenging. Existing methods offer broad overviews but fail to capture detailed phenological changes, which can be partially related to the temporal resolution of the remote sensing datasets used. The availability of higher-frequency observations, obtained by combining sensors and gap-filling, offers the possibility to capture more subtle changes in crop development, some of which can be relevant for management decisions. One such dataset is Planet Fusion, daily analysis-ready data obtained by integrating PlanetScope imagery with public satellite sensor sources such as Sentinel-2 and Landsat. This study introduces a novel method that applies Dynamic Time Warping to Planet Fusion imagery for maize phenology detection and evaluates its effectiveness across 70 micro-stages. Unlike singular-template approaches, this method preserves critical data patterns, enhancing prediction accuracy and mitigating labeling issues. During the experiments, eight commonly employed spectral indices were investigated as inputs. The method achieves high prediction accuracy, with 90% of predictions falling within a 10-day error margin, evaluated on over 3200 observations from 208 fields. To understand the potential advantage of Planet Fusion, a comparative analysis was performed using Harmonized Landsat Sentinel-2 data. Planet Fusion outperforms Harmonized Landsat Sentinel-2, with significant improvements observed in key phenological stages such as V4, R1, and late R5. Finally, this study showcases the method's transferability across continents and years, although additional field data are required for further validation.
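The core alignment step can be illustrated with a generic dynamic time warping implementation on vegetation-index time series, sketched below. Mapping reference phenology dates through the warping path, the multi-template handling, and the spectral indices used are specific to the paper and not reproduced here.

# Generic DTW between a reference vegetation-index series and an observed one;
# the alignment path can then be used to transfer reference phenology dates.
import numpy as np

def dtw_path(reference: np.ndarray, observed: np.ndarray):
    """Return (total cost, alignment path) for two 1-D series."""
    n, m = len(reference), len(observed)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(reference[i - 1] - observed[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])

    # Backtrack the optimal warping path from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Toy usage with short NDVI-like curves.
ref = np.array([0.2, 0.3, 0.6, 0.8, 0.7, 0.4])
obs = np.array([0.2, 0.25, 0.3, 0.55, 0.8, 0.75, 0.5])
total_cost, path = dtw_path(ref, obs)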
Building Detection With Decision Fusion
Senaras, Caglar; Ozay, Mete; Yarman Vural, Fatos T.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 06/2013, Volume 6, Issue 3
Journal Article
Peer-reviewed
A novel decision fusion approach to the building detection problem in VHR optical satellite images is proposed. The method combines the detection results of multiple classifiers under a hierarchical architecture, called Fuzzy Stacked Generalization (FSG). After an initial segmentation and pre-processing step, a large variety of color, texture, and shape features are extracted from each segment. Then, the segments, represented in K different feature spaces, are classified by the K base-layer classifiers of the FSG architecture. The class membership values of the segments, which represent the decisions of the different base-layer classifiers in a decision space, are aggregated to form a fusion space, which is then fed to the meta-layer classifier of the FSG to label the vectors in the fusion space. The paper presents the performance of the proposed decision fusion model in comparison with state-of-the-art machine learning algorithms. The results show that fusing the decisions of multiple classifiers improves performance when they are combined under the suggested hierarchical learning architecture.
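A hedged sketch of this stacking scheme is given below: each base classifier sees one feature space, their class-membership (probability) outputs are concatenated into a fusion space, and a meta-layer classifier labels the fused vectors. The classifiers chosen here are illustrative stand-ins (FSG itself uses fuzzy base classifiers), and in practice the fusion-space training features would be produced with cross-validation to avoid overfitting the base classifiers' own training predictions.

# Stacked-generalization sketch in the spirit of FSG; classifiers are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

def fit_fsg_like(feature_spaces, labels):
    """feature_spaces: list of (n_samples, d_k) arrays, one per feature type."""
    base_models, fusion_parts = [], []
    for X_k in feature_spaces:
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_k, labels)
        base_models.append(clf)
        fusion_parts.append(clf.predict_proba(X_k))    # class-membership values
    fusion_space = np.hstack(fusion_parts)             # concatenated decision space
    meta = LogisticRegression(max_iter=1000).fit(fusion_space, labels)
    return base_models, meta

def predict_fsg_like(base_models, meta, feature_spaces):
    fusion_space = np.hstack([m.predict_proba(X_k)
                              for m, X_k in zip(base_models, feature_spaces)])
    return meta.predict(fusion_space)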
In this study, a new building detection framework for monocular satellite images, called self-supervised decision fusion (SSDF), is proposed. The model is based on the idea of self-supervision, which aims to generate training data automatically from each individual test image, without human interaction. This approach allows us to use the advantages of supervised classifiers in a fully automated framework. We combine our previous supervised and unsupervised building detection frameworks into a self-supervised learning architecture. Hence, we borrow the major strength of the unsupervised approach to obtain one of the most important clues, the relation between a building and its cast shadow. This information is then used to select training samples automatically. Finally, an ensemble learning algorithm, called fuzzy stacked generalization (FSG), fuses a set of supervised classifiers trained on the automatically generated dataset with various shape, color, and texture features. We assessed the building detection performance of the proposed approach over 19 test sites and compared our results with state-of-the-art algorithms. Our experiments show that the supervised building detection method requires more than 30% of the ground truth (GT) training data to reach the performance of the proposed SSDF method. Furthermore, the SSDF method increases the F-score by 2 percentage points (p.p.) on average compared to the unsupervised method.
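The self-supervision idea (using high-confidence outputs of the unsupervised detector on the test image itself as pseudo-labels for training supervised classifiers) can be sketched roughly as follows. The confidence thresholds, features, and random forest classifier are illustrative assumptions, not the components used in SSDF.

# Sketch of self-supervised training-sample selection on a single test image.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_supervised_labels(unsup_scores: np.ndarray, lo: float = 0.2, hi: float = 0.8):
    """Keep only confident segments as pseudo-labelled training samples."""
    pos = np.where(unsup_scores >= hi)[0]    # confident building segments
    neg = np.where(unsup_scores <= lo)[0]    # confident non-building segments
    idx = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return idx, labels

def train_on_test_image(features: np.ndarray, unsup_scores: np.ndarray):
    """Train on the image's own confident segments, then label all its segments."""
    idx, labels = self_supervised_labels(unsup_scores)
    clf = RandomForestClassifier(n_estimators=100).fit(features[idx], labels)
    return clf.predict(features)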