Purpose:
Radiomics, the high‐throughput extraction and analysis of quantitative image features, has considerable potential to quantify the tumor phenotype. At present, however, a lack of software infrastructure has impeded the development of radiomics and its applications. The authors therefore developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions.
Methods:
The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture uses the model‐view‐controller pattern, unit testing, and function handles to isolate each quantitative imaging analysis task, to validate whether the relevant data and algorithms are fit for use, and to allow new modules to be plugged in. On one hand, IBEX is self‐contained and ready to use: it implements common data importers, image filters, and feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built‐in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm‐related data can be reviewed, validated, and/or modified. More importantly, two key elements of collaborative workflows, consistency of data sharing and reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation routines, including newly developed ones from different users, can be shared easily and consistently, so results can be reproduced more readily across institutions.
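The function-handle plug-in idea described above can be sketched in a few lines. This is a minimal illustration of the general pattern, not IBEX's actual MATLAB API; the names (`FEATURES`, `register`, `extract_all`) and the toy features are hypothetical.

```python
# Hypothetical sketch of a function-handle style plug-in registry, loosely
# modeled on how a platform like IBEX can let users add feature algorithms
# without touching the core code. All names here are illustrative.

FEATURES = {}

def register(name):
    """Decorator that records a feature-extraction function under a name."""
    def wrap(fn):
        FEATURES[name] = fn
        return fn
    return wrap

@register("mean_intensity")
def mean_intensity(roi_voxels):
    # Average voxel intensity inside the region of interest.
    return sum(roi_voxels) / len(roi_voxels)

@register("volume")
def volume(roi_voxels, voxel_mm3=1.0):
    # Voxel count times per-voxel volume (toy: 1 mm^3 per voxel).
    return len(roi_voxels) * voxel_mm3

def extract_all(roi_voxels):
    """Run every registered feature on one region of interest."""
    return {name: fn(roi_voxels) for name, fn in FEATURES.items()}

print(extract_all([10, 20, 30, 40]))
# {'mean_intensity': 25.0, 'volume': 4.0}
```

A new algorithm is added simply by decorating another function, which is the extensibility property the abstract attributes to IBEX's developer studio.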
Results:
Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX runs on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. The stand‐alone Windows version 1.0 beta of IBEX and IBEX's source code can be downloaded.
Conclusions:
The authors successfully implemented ibex, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.
Selective Search for Object Recognition
Uijlings, J. R. R.; van de Sande, K. E. A.; Gevers, T.; et al.
International Journal of Computer Vision, 09/2013, Volume 104, Issue 2
Journal Article · Peer-reviewed · Open Access
This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search, which combines the strengths of both exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high-quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available at http://disi.unitn.it/~uijlings/SelectiveSearch.html.
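The "Best Overlap" behind the Mean Average Best Overlap score above is the standard intersection-over-union between a ground-truth box and its closest proposal. A minimal sketch of that computation, with made-up toy boxes (not the paper's data or code):

```python
# Illustrative computation of the overlap measure underlying (M)ABO:
# for each ground-truth box, take the best IoU over all proposals.
# Boxes are (x1, y1, x2, y2) tuples; the coordinates below are toy values.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def best_overlap(gt_box, proposals):
    """Best overlap of one ground-truth object with the proposal set;
    averaging this over all ground-truth objects gives the ABO score."""
    return max(iou(gt_box, p) for p in proposals)

proposals = [(0, 0, 10, 10), (5, 5, 15, 15), (8, 8, 20, 20)]
print(best_overlap((4, 4, 14, 14), proposals))
```

Here the middle proposal wins (IoU = 81/119 ≈ 0.68), showing how a small, diverse proposal set can still cover an object well.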
Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN’s diagnostic performance to larger groups of dermatologists are lacking.
Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study, a 100-image test set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity, and area under the curve (AUC) of the receiver operating characteristic (ROC) for dichotomous diagnostic classification of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists' diagnostic performance in their management decisions and differences in the dermatologists' diagnostic performance between level-I and -II of the reader study. Additionally, the CNN's performance was compared with the top five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge.
In level-I, dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. Additional clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P=0.19) and the specificity to 75.7% (±11.7%, P<0.05). The CNN ROC curve revealed a higher specificity of 82.5% compared with dermatologists in level-I (71.3%, P<0.01) and level-II (75.7%, P<0.01) at their respective sensitivities of 86.6% and 88.9%. The CNN ROC AUC was greater than the mean ROC area of the dermatologists (0.86 versus 0.79, P<0.01). The CNN scored close to the top three algorithms of the ISBI 2016 challenge.
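The comparison above rests on three quantities: sensitivity and specificity at a fixed operating point, and the threshold-free ROC AUC. A minimal sketch of how they are computed, on made-up toy labels and scores (not the study's data or code); the AUC is computed via the Mann–Whitney rank interpretation:

```python
# Illustrative computation of sensitivity, specificity, and ROC AUC.
# Labels: 1 = melanoma, 0 = benign. Scores and threshold are toy values.

def sens_spec(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]          # hypothetical CNN outputs
preds  = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5
print(sens_spec(labels, preds), auc(labels, scores))
```

Comparing the CNN with readers at the readers' own sensitivity, as the study does, amounts to sliding the threshold along the ROC curve rather than fixing it at 0.5.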
For the first time we compared a CNN's diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of experience, physicians may benefit from assistance by a CNN's image classification.
This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks_web/).
Predictive models ground many state-of-the-art developments in statistical brain image analysis: decoding, MVPA, searchlight, or extraction of biomarkers. The principled approach to establishing their validity and usefulness is cross-validation, testing prediction on unseen data. Here, I would like to raise awareness of the error bars of cross-validation, which are often underestimated. Simple experiments show that the sample sizes of many neuroimaging studies inherently lead to large error bars, e.g., ±10% for 100 samples. The standard error across folds strongly underestimates them. These large error bars compromise the reliability of conclusions drawn with predictive models, such as biomarkers, or of methods developments where, unlike with cognitive neuroimaging MVPA approaches, more samples cannot be acquired by repeating the experiment across many subjects. Solutions to increase sample size must be investigated, while tackling possible increases in the heterogeneity of the data.
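The ±10% figure for 100 samples follows from a back-of-envelope binomial argument: a cross-validated accuracy estimate on n test samples has a sampling standard deviation of roughly sqrt(p(1-p)/n). A minimal sketch (my own illustration, not the paper's code; the 0.75 "true accuracy" is an arbitrary example):

```python
import math

# Back-of-envelope illustration of why small test sets give wide error
# bars: an accuracy estimated from n binary outcomes has binomial
# uncertainty of about sqrt(p*(1-p)/n). The true_acc value is a toy choice.

def accuracy_ci_halfwidth(true_acc, n, z=1.96):
    """Approximate 95% confidence half-width of an accuracy estimate."""
    return z * math.sqrt(true_acc * (1 - true_acc) / n)

for n in (100, 1000, 10000):
    print(n, round(accuracy_ci_halfwidth(0.75, n), 3))
# n=100 gives a half-width of about 0.085, i.e. error bars near +/-10%
```

Note that this is the uncertainty of the estimate itself; the standard error across folds, which shares test samples across the computation, is systematically smaller, which is exactly the underestimation the abstract warns about.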
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart, and body MRI, carotid ultrasound, and low-dose bone/fat X-ray. The brain imaging component covers six modalities (T1, T2 FLAIR, susceptibility-weighted MRI, resting fMRI, task fMRI, and diffusion MRI). Raw and processed data from the first 10,000 imaged subjects have recently been released for general research access. To help convert these data into useful summary information, we have developed an automated processing and QC (quality control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline.
The amygdala and the hippocampus are two limbic structures that play a critical role in cognition and behavior; however, their manual segmentation, and that of their smaller nuclei/subfields, in multicenter datasets is time consuming and difficult owing to the low contrast of standard MRI. Here, we assessed the reliability of the automated segmentation of amygdalar nuclei and hippocampal subfields across sites and vendors using FreeSurfer in two independent cohorts of older and younger healthy adults.
Sixty-five healthy older subjects (cohort 1) and 68 younger subjects (cohort 2), from the PharmaCog and CoRR consortia, underwent repeated 3D T1 MRI (interval 1–90 days). Segmentation was performed using FreeSurfer v6.0. Reliability was assessed using the volume reproducibility error (ε) and the spatial overlap coefficient (DICE) between the test and retest sessions.
Significant MRI site and vendor effects (p < .05) were found in a few subfields/nuclei for ε, while extensive effects were found for the DICE score of most subfields/nuclei. Reliability was strongly influenced by volume: ε correlated negatively, and DICE positively, with structure volume (absolute Spearman's r > 0.43, p < 1.39E-36). In particular, volumes larger than 200 mm³ (for amygdalar nuclei) and 300 mm³ (for hippocampal subfields, except the molecular layer) had the best test-retest reproducibility (ε < 5% and DICE > 0.80).
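The two reliability measures used above have simple definitions: a percent volume difference between sessions, and the DICE overlap 2|A∩B|/(|A|+|B|). A minimal sketch with my own notation and toy numbers (not FreeSurfer's code or the study's data):

```python
# Illustrative test-retest reliability measures for a segmented structure.
# Volumes are in mm^3; voxel sets stand in for segmentation masks.

def reproducibility_error(v_test, v_retest):
    """Percent absolute volume difference relative to the session mean."""
    return 100 * abs(v_test - v_retest) / ((v_test + v_retest) / 2)

def dice(voxels_test, voxels_retest):
    """Spatial overlap 2|A & B| / (|A| + |B|) on sets of voxel indices."""
    a, b = set(voxels_test), set(voxels_retest)
    return 2 * len(a & b) / (len(a) + len(b))

print(reproducibility_error(310.0, 300.0))  # about 3.3, within eps < 5%
print(dice(range(0, 100), range(5, 105)))   # 0.95, above DICE > 0.80
```

With the study's thresholds, this toy structure (ε ≈ 3.3%, DICE = 0.95) would land in the "reliable" regime reported for larger nuclei and subfields.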
Our results support the use of volumetric measures of larger amygdalar nuclei and hippocampal subfields in multisite MRI studies. These measures could be useful for disease tracking and assessment of efficacy in drug trials.
•Differences in MRI site/vendor had a limited effect on volume reproducibility.
•Differences in MRI site/vendor had an extensive effect on spatial accuracy.
•Reliability is good for larger amygdalar and hippocampal structures.
•Automated volumetry is reliable in multicenter MRI studies.
Live imaging of large biological specimens is fundamentally limited by the short optical penetration depth of light microscopes. To maximize physical coverage, we developed the SiMView technology framework for high-speed in vivo imaging, which records multiple views of the specimen simultaneously. SiMView consists of a light-sheet microscope with four synchronized optical arms, real-time electronics for long-term sCMOS-based image acquisition at 175 million voxels per second, and computational modules for high-throughput image registration, segmentation, tracking, and real-time management of the terabytes of multiview data recorded per specimen. We developed one-photon and multiphoton SiMView implementations and recorded cellular dynamics in entire Drosophila melanogaster embryos with 30-s temporal resolution throughout development. We furthermore performed high-resolution long-term imaging of the developing nervous system and followed neuroblast cell lineages in vivo. SiMView data sets provide quantitative morphological information even for fast global processes and enable accurate automated cell tracking in the entire early embryo.