• A novel self-supervised learning strategy called context restoration.
• It improves the subsequent learning performance.
• Its implementation is simple and straightforward.
• It is useful for different types of subsequent tasks, including classification, detection, and segmentation.
Machine learning, particularly deep learning, has boosted medical image analysis over the past years. Training a good deep learning model requires a large amount of labelled data. However, it is often difficult to obtain a sufficient number of labelled images for training, and in many scenarios the dataset in question consists of more unlabelled images than labelled ones. Therefore, boosting the performance of machine learning models by using unlabelled as well as labelled data is an important but challenging problem. Self-supervised learning presents one possible solution; however, existing self-supervised learning strategies applicable to medical images often lead to only marginal performance improvements. In this paper, we propose a novel self-supervised learning strategy based on context restoration in order to better exploit unlabelled images. The context restoration strategy has three major features: 1) it learns semantic image features; 2) these image features are useful for different types of subsequent image analysis tasks; and 3) its implementation is simple. We validate the context restoration strategy on three common problems in medical imaging: classification, localization, and segmentation. Specifically, we apply it to scan plane detection in fetal 2D ultrasound images (classification), abdominal organ localization in CT images, and brain tumour segmentation in multi-modal MR images. In all three cases, self-supervised learning based on context restoration learns useful semantic features and leads to improved machine learning models for the above tasks.
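The corruption step at the heart of context restoration can be sketched concretely: pairs of non-overlapping patches are swapped so that the image's intensity distribution is preserved while its spatial context is broken, and a network is then trained to restore the original. The snippet below is a minimal illustrative sketch; the patch size and swap count are arbitrary choices for the example, not the paper's settings.

```python
import numpy as np

def corrupt_context(image, patch=8, n_swaps=5, seed=0):
    """Swap randomly chosen pairs of non-overlapping patches.
    The pixel histogram is unchanged, but spatial context is
    destroyed; restoring the original image is the pretext task."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = out.shape
    for _ in range(n_swaps):
        while True:
            y1 = int(rng.integers(0, h - patch + 1))
            x1 = int(rng.integers(0, w - patch + 1))
            y2 = int(rng.integers(0, h - patch + 1))
            x2 = int(rng.integers(0, w - patch + 1))
            # resample until the two patches do not overlap
            if abs(y1 - y2) >= patch or abs(x1 - x2) >= patch:
                break
        p1 = out[y1:y1 + patch, x1:x1 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = out[y2:y2 + patch, x2:x2 + patch]
        out[y2:y2 + patch, x2:x2 + patch] = p1
    return out
```

A self-supervised network takes the corrupted image as input and regresses the original; after pretraining, its learned features initialize the downstream classifier, detector, or segmenter.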
• Outlines the setup of the challenge on “Diabetic Retinopathy – Segmentation and Grading” held at ISBI-2018.
• Describes the dataset used, evaluation criteria and results of top-performing participating solutions.
• Presents various participating approaches based on deep learning and handcrafted features.
• Discusses the lessons learnt from analysis of the methods submitted to this challenge.
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening effort. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on “Diabetic Retinopathy – Segmentation and Grading” was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI - 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow testing the generalizability of algorithms, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions effectively entered from 495 registrations. This paper outlines the challenge, its organization, the dataset used, evaluation methods and results of top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
Correlation functions are becoming one of the major tools for quantification of structural information that is usually represented as 2D or 3D images. In this paper we introduce CorrelationFunctions.jl, an open-source package developed in Julia and capable of computing all classical correlation functions from imaging input data. Images include both binary and multi-phase representations. Our code can evaluate the two-point probability S2, phase cross-correlation ρij, cluster C2, lineal-path L2, surface-surface Fss, surface-void Fsv, pore-size P and chord-length p distribution functions on both CPU and GPU architectures. Where possible, we present two types of computations: the full correlation map (correlations of each point with all other points in the image, which also allows obtaining ensemble-averaged CFs) and directional correlation functions (currently in the major orthogonal and diagonal directions). This implementation provides, for the first time, a completely free solution for evaluating correlation functions under any operating system, with a well-documented application programming interface (API). Our package includes automatic tests against analytical solutions that are described in the paper. We measured execution times for all CPU and GPU implementations; as a rule of thumb, full correlation maps on GPU are faster than the other methods. However, full maps require more RAM and are thus limited by available RAM resources. On the other hand, directional CFs are memory efficient and can be evaluated for huge datasets – this makes them the first candidates for structural data compression or feature extraction. The package itself is available through the Julia package ecosystem and on GitHub; the latter source also contains documentation and additional helpful resources such as tutorials.
We believe that a single powerful computational tool such as CorrelationFunctions.jl presented in this paper will significantly facilitate the usage of correlation functions in numerous areas of structural description and research of porous materials, as well as in machine learning applications. We also present some examples as applied to ceramic, soil composite and oil-bearing rock samples, based on their 3D X-ray tomography and 2D scanning electron microscope images. Finally, we conclude our paper with a discussion of possible ways to further improve the presented computational framework.
Program Title: CorrelationFunctions.jl
CPC Library link to program files: https://doi.org/10.17632/6gb9gfm3dw.1
Developer's repository link: https://github.com/fatimp/CorrelationFunctions.jl
Licensing provisions: MIT
Programming language: Julia
Supplementary material: Numerous Jupyter notebooks with examples are available on the GitHub page
Nature of problem: Correlation functions are invaluable universal statistical descriptors of structures used in numerous scientific fields such as astronomy, material science, rock and soil physics, hydrology and biology, to name just a handful of examples. While computational approaches are available in the literature for some functions, they are fragmented and usually implemented in proprietary interpreted languages for the CPU architecture alone.
Solution method: We contribute an open-source and cross-platform solution with a well-documented API for computation of all classical correlation functions from both 2D and 3D images on CPU and GPU architectures. The package computes correlation functions using two approaches: computation of correlation maps and computation along predefined directions. These two approaches can be thought of as a trade-off between execution time and memory, but the choice may also depend on the application. The computations are based on a) fast Fourier transform with preprocessing steps such as cluster labeling or edge detection, and b) a linear scan approach to evaluate correlation functions along predefined directions. Where justified, the algorithms can be executed on both CPU and GPU, which results in high execution speed on modern hardware.
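As a language-agnostic illustration of the FFT-based map approach (this is a Python sketch of the underlying algorithm, not the package's Julia API), the full two-point probability map S2 of a binary image under periodic boundary conditions is the normalized autocorrelation of its indicator function:

```python
import numpy as np

def s2_map(img):
    """Full two-point probability map S2(r) for a binary image,
    assuming periodic boundaries: the probability that two points
    separated by shift r both fall in the phase marked by 1.
    Computed via the FFT-based autocorrelation of the indicator."""
    f = np.fft.fftn(img.astype(float))
    corr = np.fft.ifftn(f * np.conj(f)).real
    return corr / img.size
```

S2 at zero shift equals the phase volume fraction; directional CFs are rows, columns, or diagonals of this map. The directional mode described above instead scans along fixed directions, trading the map's speed for a much smaller memory footprint.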
Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and, most importantly, the use of computationally suboptimal search schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body, but also to find the object by learning and following an optimal navigation path to the target object in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices, and show that it significantly outperforms state-of-the-art solutions on detecting several anatomical structures with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
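The paper's agent uses deep multi-scale reinforcement learning on image features; as a purely illustrative, scaled-down sketch of the "search as learned behavior" idea, the toy below trains a tabular Q-learning agent to navigate a small 3D grid toward a target landmark, rewarded by the reduction in distance to the target. All sizes and hyperparameters here are invented for the sketch and bear no relation to the paper's setup.

```python
import numpy as np

# six axis-aligned moves through the volume
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def train_navigator(target, size=5, episodes=600, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a size^3 grid; the reward for each move
    is the decrease in Euclidean distance to the target voxel."""
    rng = np.random.default_rng(seed)
    Q, tgt = {}, np.array(target, dtype=float)
    q = lambda s: Q.setdefault(s, np.zeros(len(ACTIONS)))
    for _ in range(episodes):
        s = tuple(int(v) for v in rng.integers(0, size, 3))
        for _ in range(50):
            a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(q(s)))
            nxt = tuple(int(v) for v in np.clip(np.array(s) + ACTIONS[a], 0, size - 1))
            r = np.linalg.norm(np.array(s) - tgt) - np.linalg.norm(np.array(nxt) - tgt)
            q(s)[a] += alpha * (r + gamma * q(nxt).max() - q(s)[a])
            s = nxt
            if s == target:
                break
    return Q

def greedy_path(Q, start, target, size=5, max_steps=40):
    """Follow the learned greedy policy from `start` toward `target`."""
    s, path = start, [start]
    for _ in range(max_steps):
        if s == target:
            break
        a = int(np.argmax(Q.get(s, np.zeros(len(ACTIONS)))))
        s = tuple(int(v) for v in np.clip(np.array(s) + ACTIONS[a], 0, size - 1))
        path.append(s)
    return path
```

In the actual method, the state is a multi-scale image patch around the agent's position and Q-values come from a deep network, but the navigation principle is the same: the detector reaches the anatomy by following a learned path rather than scanning the volume exhaustively.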
Many terrestrial ecosystems engage in mycorrhizal symbiotic associations, potentially to enhance nutrition, increase resistance to soil-borne pests and diseases, and improve resilience and soil structure. Mycorrhizal fungi create dynamic networked structures through branching and anastomosis that connect multiple plants and allow resources to be transported underground from nutrient-rich patches to demanding plants. Controlled laboratory experiments are fundamental to improving our knowledge of mycelium network growth dynamics and further understanding its role in preserving ecological niches. We propose a method for highly automated analysis of the mycelium network structure and other morphological properties, such as hyphal length, hyphal density, and the number of crossings and branches, in 2D microscopy images of fungal samples. Available tools for automated network analysis overestimate network connectivity because filament crossings are not considered. In particular, we propose a) a ridge-based mycelium detection algorithm and b) a geometry-based approach to identify overlapping filaments crossing each other. The algorithmic solution is evaluated on a total of 135 real mycelium sample images over different validation steps, originating from different datasets and having different characteristics, including background, contrast, image acquisition system, fungal species, and clearness (e.g., level of transparency, homogeneity, dirtiness of the medium) of the sample. Results show that 1) the proposed detection method can be used to measure the length of mycelium in an image (ρ̂_c = 0.96), replacing manual tracing and allowing for less laborious analysis; 2) the filament detection is on par with state-of-the-art techniques (F1 = 0.88–0.94) with a more intuitive parameterization; and 3) the proposed algorithm correctly identifies filament crossings (F1 = 0.89) in most common cases, yielding a reduction in the overestimation of network connectivity.
The latter feature allows applying the proposed fully automated solution to complex and irregular fungal structures, advancing mycelium detection and reconstruction accuracy with respect to the state-of-the-art.
• A method for detecting and quantifying mycelium is tested on real fungal images.
• A method for identifying filament crossings in the detected mycelium is proposed.
• Crossing detection improves estimation of network topology in real fungal images.
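The ridge-based detection step can be illustrated with a standard Hessian ridge measure: a bright curvilinear filament produces a strongly negative second derivative across its axis, so the negated smaller eigenvalue of the smoothed Hessian highlights filaments. This is a generic sketch of Hessian ridge filtering, not the authors' exact algorithm, and the scale parameter is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(img, sigma=2.0):
    """Hessian ridge measure at scale `sigma` for bright filaments
    on a dark background: the negated smaller eigenvalue of the
    Gaussian-smoothed Hessian, clipped at zero."""
    img = img.astype(float)
    # second-order Gaussian derivatives, order = (rows, cols)
    Hrr = gaussian_filter(img, sigma, order=(2, 0))
    Hcc = gaussian_filter(img, sigma, order=(0, 2))
    Hrc = gaussian_filter(img, sigma, order=(1, 1))
    # closed-form eigenvalues of the symmetric 2x2 Hessian per pixel
    tr = Hrr + Hcc
    disc = np.sqrt((Hrr - Hcc) ** 2 + 4.0 * Hrc ** 2)
    lam_small = 0.5 * (tr - disc)  # most negative across a bright ridge
    return np.maximum(-lam_small, 0.0)
```

Thresholding and skeletonizing such a map yields a filament graph; a crossing-identification step of the kind the paper proposes then splits spurious four-way junctions by pairing branches with geometrically consistent directions, so that two overlapping hyphae are not merged into one connected node.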
End-to-End Adversarial Retinal Image Synthesis. Costa, Pedro; Galdran, Adrian; Meyer, Maria Ines ...
IEEE Transactions on Medical Imaging, 03/2018, Volume 37, Issue 3
Journal Article
Open Access
In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while being also anatomically consistent and displaying a reasonable visual quality.
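The adversarial objective described above (a generator trained against a second model that classifies its outputs as real or synthetic) amounts to a pair of cross-entropy losses. The numpy sketch below shows the standard non-saturating GAN losses in the abstract, not the paper's exact implementation:

```python
import numpy as np

def adversarial_losses(d_real, d_fake, eps=1e-7):
    """Standard GAN losses. `d_real` / `d_fake` are the discriminator's
    estimated probabilities that real / generated samples are real.
    The discriminator minimizes `d_loss`; the generator minimizes the
    non-saturating `g_loss`, i.e. it maximizes log D(G(z))."""
    d_real = np.clip(d_real, eps, 1.0 - eps)
    d_fake = np.clip(d_fake, eps, 1.0 - eps)
    d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss
```

In the two-stage pipeline described here, this game is played twice: once by the adversarial autoencoder synthesizing vessel trees, and once by the generative adversarial network mapping vessel trees to color fundus images.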
With the development of Computer-aided Diagnosis (CAD) and image scanning techniques, Whole-slide Image (WSI) scanners are widely used in the field of pathological diagnosis, and WSI analysis has become the key to modern digital histopathology. Since 2004, WSI has been used widely in CAD. Since machine vision methods are usually based on semi-automatic or fully automatic computer algorithms, they are highly efficient and labor-saving. The combination of WSI and CAD technologies for segmentation, classification, and detection helps histopathologists obtain more stable and quantitative results with minimum labor costs and improved diagnostic objectivity. This paper reviews methods of WSI analysis based on machine learning. Firstly, the development status of WSI and CAD methods is introduced. Secondly, we discuss publicly available WSI datasets and evaluation metrics for segmentation, classification, and detection tasks. Then, the latest developments of machine learning techniques in WSI segmentation, classification, and detection are reviewed. Finally, the existing methods are analyzed, and the application prospects of the methods in this field are forecast.
The science of solving clinical problems by analyzing images generated in clinical practice is known as medical image analysis. The aim is to extract information in an effective and efficient manner for improved clinical diagnosis. Recent advances in the field of biomedical engineering have made medical image analysis one of the top research and development areas. One of the reasons for this advancement is the application of machine learning techniques to the analysis of medical images. Deep learning is successfully used as a tool for machine learning, where a neural network is capable of automatically learning features. This is in contrast to those methods that use traditionally hand-crafted features, whose selection and calculation is a challenging task. Among deep learning techniques, deep convolutional networks are actively used for medical image analysis. This includes application areas such as segmentation, abnormality detection, disease classification, computer-aided diagnosis and retrieval. In this study, a comprehensive review of the current state-of-the-art in medical image analysis using deep convolutional networks is presented. The challenges and potential of these techniques are also highlighted.
Challenges drive the state-of-the-art of automated medical image analysis. The quantity of public training data that they provide can limit the performance of their solutions, and public access to the training methodology for these solutions remains absent. This study implements the Type Three (T3) challenge format, which allows for training solutions on private data and guarantees reusable training methodologies. With T3, challenge organizers train a codebase provided by the participants on sequestered training data. T3 was implemented in the STOIC2021 challenge, with the goal of predicting from a computed tomography (CT) scan whether subjects had a severe COVID-19 infection, defined as intubation or death within one month. STOIC2021 consisted of a Qualification phase, where participants developed challenge solutions using 2000 publicly available CT scans, and a Final phase, where participants submitted their training methodologies, with which solutions were trained on CT scans of 9724 subjects. The organizers successfully trained six of the eight Final phase submissions. The submitted codebases for training and running inference were released publicly. The winning solution obtained an area under the receiver operating characteristic curve of 0.815 for discerning between severe and non-severe COVID-19. The Final phase solutions of all finalists improved upon their Qualification phase solutions.
• STOIC2021, aimed at detecting severe COVID-19, was organized with 10,724 CT scans.
• The T3 challenge format allows training on private data and ensures reusable methods.
• CT scans and metadata of 2,000 COVID-19 subjects were released under CC-BY-NC 4.0.
• The finalist codebases were released publicly under permissive licenses.