•An image segmentation pipeline based on Fully Convolutional Networks (FCNs) and ResNets is proposed.
•An FCN can serve as a pre-processor to normalize medical imaging input data.
•A trainable FCN is an alternative to hand-designed, modality-specific pre-processing steps.
•Our pipeline achieves or matches state-of-the-art performance on 3 segmentation datasets.
In this paper, we introduce a simple yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets. Our approach centers on the importance of trainable pre-processing when using FC-ResNets, and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by an FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. Using this pipeline, we achieve state-of-the-art performance on the challenging Electron Microscopy benchmark when compared to other 2D methods, and we improve segmentation results on CT images of liver lesions relative to standard FCN methods. Moreover, when applying our 2D pipeline to a challenging 3D MRI prostate segmentation challenge, we reach results that are competitive even with 3D methods. These results illustrate the strong potential and versatility of the pipeline, which achieves accurate segmentations across a variety of image modalities and anatomical regions.
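As a concrete illustration of the pipeline described above, the following minimal PyTorch sketch chains a low-capacity FCN pre-processor into a small FC-ResNet. All layer widths, depths, and module names here are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch: a low-capacity FCN "normalizes" the raw input, then an
# FC-ResNet iteratively refines it into a per-pixel segmentation prediction.
import torch
import torch.nn as nn

class PreprocessorFCN(nn.Module):
    """Low-capacity FCN used as a trainable normalizer (assumed design)."""
    def __init__(self, in_ch=1, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, in_ch, 3, padding=1),  # back to image space
        )
    def forward(self, x):
        return self.body(x)

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        # identity shortcut lets each block act as an iterative refinement step
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class FCResNet(nn.Module):
    """Stack of residual blocks producing per-pixel class logits."""
    def __init__(self, in_ch=1, ch=32, n_blocks=4, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.head = nn.Conv2d(ch, n_classes, 1)
    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

pipeline = nn.Sequential(PreprocessorFCN(), FCResNet())
logits = pipeline(torch.randn(1, 1, 128, 128))  # -> (1, 2, 128, 128)
```

Because both stages are fully convolutional, the composed pipeline accepts inputs of arbitrary spatial size, which is what lets it be applied off-the-shelf to different modalities.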
In radiation oncology, predicting patient risk stratification allows therapy intensification to be tailored to the individual patient and guides the choice between systemic and regional treatments, all of which helps to improve patient outcomes and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of deep models allows them to combine high-level medical imaging data for outcome prediction, they often fail to generalize across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed to predict the occurrence probabilities of distant metastasis (DM), locoregional recurrence (LR), and overall survival (OS) within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model is capable of processing multi-modal inputs of variable scan length and of integrating patient data into the prediction model. These architectural features and additional modalities all serve to extract further information from the available data when access to additional samples is limited. The model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset, consisting of 298 patients undergoing curative radiotherapy or chemoradiotherapy and acquired from 4 different institutions, and was further validated on an internal retrospective dataset of 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments was performed to test the utility of the proposed model characteristics, achieving an AUROC of Formula: see text, Formula: see text and Formula: see text for DM, LR and OS respectively on the public TCIA Head-Neck-PET-CT dataset. External validation on the retrospective dataset of 371 patients achieved an AUROC of Formula: see text in all outcomes. To test model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used. The mean accuracy across the 4 institutions was Formula: see text, Formula: see text and Formula: see text for DM, LR and OS respectively. The proposed model demonstrates an effective method for tumor outcome prediction in multi-site, multi-modal settings, combining volumetric imaging with structured patient clinical data.
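A minimal sketch of how a pseudo-volumetric design of this kind could be wired up: a shared 2D encoder is applied slice by slice, self-attention pools a variable number of slice features, and the pooled representation is fused with tabular clinical data before the outcome heads. Every module size and name, and the two-channel fused PET/CT input, are illustrative assumptions, not the published PreSANet architecture.

```python
# Schematic pseudo-volumetric network: per-slice 2D encoding, self-attention
# across slices (handles variable scan length), fusion with clinical data.
import torch
import torch.nn as nn

class PseudoVolumetricNet(nn.Module):
    def __init__(self, in_ch=2, feat=64, clin_dim=8, n_outcomes=3):
        super().__init__()
        # shared 2D encoder applied slice-by-slice (assumed PET/CT -> 2 channels)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # one vector per slice
        )
        self.attn = nn.MultiheadAttention(feat, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(feat + clin_dim, 64), nn.ReLU(),
            nn.Linear(64, n_outcomes),               # DM, LR, OS logits
        )

    def forward(self, slices, clinical):
        # slices: (B, S, C, H, W) with variable S; clinical: (B, clin_dim)
        b, s = slices.shape[:2]
        f = self.encoder(slices.flatten(0, 1)).view(b, s, -1)
        f, _ = self.attn(f, f, f)   # self-attention across the slice axis
        pooled = f.mean(dim=1)      # aggregate the variable-length scan
        return torch.sigmoid(self.head(torch.cat([pooled, clinical], dim=1)))

net = PseudoVolumetricNet()
probs = net(torch.randn(2, 40, 2, 64, 64), torch.randn(2, 8))  # -> (2, 3)
```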
FGF21 stimulates FGFR1c activity in cells that co-express Klothoβ (KLB); however, relatively little is known about the interaction of these receptors at the plasma membrane. We measured the dynamics and distribution of fluorescent protein-tagged KLB and FGFR1c in living cells using fluorescence recovery after photobleaching (FRAP) and number-and-brightness analysis. We confirmed that fluorescent protein-tagged KLB translocates to the plasma membrane and is active when co-expressed with FGFR1c. FGF21-induced signaling was enhanced in cells treated with lactose, a competitive inhibitor of the galectin lattice, suggesting that lattice binding modulates KLB and/or FGFR1c activity. FRAP analysis consistently revealed that lactose treatment increased KLB mobility at the plasma membrane but did not affect the mobility of FGFR1c. The association of endogenous KLB with the galectin lattice was also confirmed by co-immunoprecipitation with galectin-3. KLB mobility increased when co-expressed with FGFR1c, suggesting that the two receptors form a heterocomplex independent of the galectin lattice. Number-and-brightness analysis revealed that KLB and FGFR1c behave as monomers and dimers at the plasma membrane, respectively. Co-expression resulted in monomeric expression of both KLB and FGFR1c, consistent with the formation of a 1:1 heterocomplex. Subsequent addition of FGF21 induced FGFR1c dimerization without changing KLB aggregate size, suggesting the formation of a 1:2 KLB-FGFR1c signaling complex. Overall, these data suggest that KLB and FGFR1c form a 1:1 heterocomplex, independent of the galectin lattice, that transitions to a 1:2 complex upon the addition of FGF21.
•FGFR1c and KLB form an ill-defined FGF21 signaling complex.
•FGFR1c competes with galectin for binding to KLB; KLB and FGFR1c interact in a 1:1 heterocomplex, and subsequent addition of FGF21 induces FGFR1c dimers.
•KLB and FGFR1c activity and dynamics suggest that the galectin lattice modulates FGF21 signaling.
•The galectin lattice is a novel target to potentiate the therapeutic effects of FGF21.
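For readers unfamiliar with the number-and-brightness analysis used above: it infers oligomeric state from per-pixel intensity fluctuations over a time series, with apparent brightness B = variance/mean and apparent number N = mean²/variance, so a dimer shows roughly twice the B of a monomer. The sketch below implements this standard estimator (after Digman et al.); it is a generic illustration, not the authors' analysis code.

```python
# Number-and-brightness (N&B) analysis from an image time stack.
import numpy as np

def number_and_brightness(stack):
    """stack: (T, H, W) array of pixel intensities over time."""
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    eps = 1e-12                 # avoid division by zero in dark pixels
    B = var / (mean + eps)      # apparent brightness: proxy for oligomer size
    N = mean**2 / (var + eps)   # apparent number of mobile particles
    return N, B

# Simulated Poisson-limited stack: B should be ~1 (monomer-like noise).
N, B = number_and_brightness(np.random.poisson(10.0, size=(200, 64, 64)))
```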
Deep Learning: A Primer for Radiologists
Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; et al.
Radiographics, 2017 Nov-Dec, Volume 37, Issue 7
Journal Article · Peer reviewed · Open access
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and game playing. Deep learning methods produce a mapping from raw inputs to desired outputs (e.g., image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging.
RSNA, 2017.
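The iterative weight adjustment the primer describes can be reduced to a single worked example: one linear neuron trained by gradient descent on a squared-error loss. The snippet below is purely didactic; the data, learning rate, and variable names are arbitrary.

```python
# One neuron, one loss, one update rule: the core of back-propagation.
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # input features
t = 2.0                          # target output
w = np.zeros(3)                  # weights ("connections") to adjust
lr = 0.05                        # learning rate

for step in range(100):
    y = w @ x                    # forward pass: prediction
    error = y - t                # corrective error signal
    grad = error * x             # gradient dL/dw for L = 0.5 * (y - t)^2
    w -= lr * grad               # weight update
print(w @ x)                     # ~2.0 after training
```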
We propose a model for the joint segmentation of the liver and liver lesions in computed tomography (CT) volumes. We build the model from two fully convolutional networks, connected in tandem and trained together end-to-end. We evaluate our approach on the 2017 MICCAI Liver Tumour Segmentation Challenge, attaining competitive liver and liver lesion detection and segmentation scores across a wide range of metrics. Unlike other top-performing methods, our model requires only trivial output post-processing, uses no data external to the challenge, and is a simple single-stage model trained end-to-end. Nevertheless, our method nearly matches the top lesion segmentation performance and achieves the second-highest precision for lesion detection while maintaining high recall.
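A hedged sketch of the "two FCNs in tandem" idea: the first network segments the liver, the second segments lesions given the image plus the liver prediction, and both are trained with a joint loss so gradients flow end-to-end. The architectures and channel counts below are placeholders, not the paper's exact models.

```python
# Two tiny FCNs in tandem, trained jointly (illustrative stand-ins).
import torch
import torch.nn as nn

def tiny_fcn(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 1),
    )

liver_net = tiny_fcn(1, 1)    # image -> liver logits
lesion_net = tiny_fcn(2, 1)   # image + liver prediction -> lesion logits
bce = nn.BCEWithLogitsLoss()

x = torch.randn(2, 1, 128, 128)
liver_gt = torch.randint(0, 2, (2, 1, 128, 128)).float()
lesion_gt = torch.randint(0, 2, (2, 1, 128, 128)).float()

liver_logits = liver_net(x)
lesion_logits = lesion_net(torch.cat([x, torch.sigmoid(liver_logits)], dim=1))
loss = bce(liver_logits, liver_gt) + bce(lesion_logits, lesion_gt)
loss.backward()  # single joint loss trains both networks end-to-end
```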
The early detection, diagnosis, and monitoring of liver cancer progression can be achieved through the precise delineation of metastatic tumours. However, accurate automated segmentation remains challenging due to the presence of noise, inhomogeneity, and the high appearance variability of malignant tissue. In this paper, we propose an unsupervised metastatic liver tumour segmentation framework using a machine learning approach based on discriminant Grassmannian manifolds, which learns the appearance of tumours with respect to normal tissue. First, the framework learns within-class and between-class similarity distributions from a training set of images to discover the optimal manifold discrimination between normal and pathological tissue in the liver. Second, a conditional optimisation scheme computes non-local pairwise as well as pattern-based clique potentials from the manifold subspace to recognise regions with similar labelings and to enforce global consistency in the segmentation process. The proposed framework was validated on a clinical database of 43 CT images from patients with metastatic liver cancer. Compared to state-of-the-art methods, our method achieves better performance on two separate datasets of metastatic liver tumours from different clinical sites, yielding an overall mean Dice similarity coefficient of … in over 50 tumours with an average volume of 27.3 mm³.
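The manifold discrimination above depends on a notion of distance between subspaces. As a point of reference (an assumption on our part; the paper may use a different metric or kernel), a standard choice on the Grassmannian is the projection metric between subspaces spanned by orthonormal bases:

```latex
% Projection metric on the Grassmannian G(p, n), for subspaces spanned by
% orthonormal bases X, Y \in \mathbb{R}^{n \times p}. A standard choice,
% assumed here for illustration.
d_{\mathrm{proj}}(X, Y) = \frac{1}{\sqrt{2}}\,\bigl\| X X^{\top} - Y Y^{\top} \bigr\|_{F}
```

Under such a metric, within-class and between-class similarity distributions can be estimated from pairwise subspace distances in the training set, which is what enables the discriminative separation of normal and pathological tissue.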
An important challenge and limiting factor in deep learning methods for medical image segmentation is the lack of available annotated data to properly train models. For the specific task of tumor segmentation, the process entails clinicians labeling every slice of volumetric scans for every patient, which becomes prohibitive at the scale of datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder-decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label (healthy vs. diseased scans) helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy "background" tissue: the latent representation is separated into (1) the background information common to both domains and (2) the information unique to tumors. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from the common representation alone, together with a residual image that adds the tumors back. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where a residual tumor image is produced from a prior distribution. By performing image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures using GenSeg and evaluated their performance on 3 variants of a synthetic task, as well as on 2 benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS). Our method outperforms baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements ranging between 8–14% Dice score on the brain task and 5–8% on the liver task when only 1% of the training images were annotated. These results show that the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common issue in tumor segmentation.
•A new semi-supervised training method with a weak supervision component is proposed.
•The proposed semi-supervised method is tested with different encoder–decoder architectures.
•The segmentation decoder is reused for generation and can be trained in the absence of annotations.
•Modified segmentation tasks (brain, liver, synthetic) with 'healthy' vs. 'diseased' cases.
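A schematic of the GenSeg decomposition described above: an encoder splits the latent code into a common (background) part and a unique (tumor) part; one decoder reconstructs a healthy image from the common code alone, and a second decoder maps the tumor code to both a residual image and a segmentation. The modules and channel splits below are illustrative assumptions, not the paper's architecture.

```python
# Toy stand-ins for the GenSeg disentanglement and residual decoding.
import torch
import torch.nn as nn

enc = nn.Conv2d(1, 16, 3, padding=1)          # image -> latent code
dec_healthy = nn.Conv2d(8, 1, 3, padding=1)   # common code -> healthy image
dec_tumor = nn.Conv2d(8, 2, 3, padding=1)     # unique code -> [residual, seg]

x = torch.randn(4, 1, 64, 64)                 # diseased input scans
z = enc(x)
z_common, z_unique = z[:, :8], z[:, 8:]       # disentangled latent split
healthy = dec_healthy(z_common)               # diseased -> healthy translation
out = dec_tumor(z_unique)
residual, seg_logits = out[:, :1], out[:, 1:] # same decoder yields both outputs
reconstruction = healthy + residual           # residual adds the tumors back
```

The key design point is that the residual decoder is shared between generation and segmentation, so it can still be trained (via the translation objective) on images that carry only the weak healthy/diseased label.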
Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize across imaging modalities. This issue is particularly problematic given the limited availability of annotated data in both the target and the source modality, making it difficult to deploy these models at scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between modalities is used to produce synthetic, annotated images in the desired modality and improve generalization to the unannotated target modality. We also use powerful vision transformer architectures for both the image translation (TransUNet) and segmentation (Medformer) tasks, and introduce an iterative self-training procedure in the latter to further close the domain gap between modalities, thereby also training on unlabeled images in the target modality. MoDATTS additionally makes it possible to exploit image-level labels with a semi-supervised objective that encourages the model to disentangle tumors from the background. This semi-supervised methodology helps in particular to maintain downstream segmentation performance when pixel-level label scarcity is also present in the source modality dataset, or when the source dataset contains healthy controls. The proposed model achieves superior performance compared to the methods of participating teams in the CrossMoDA 2022 vestibular schwannoma (VS) segmentation challenge, as evidenced by its top reported Dice score of 0.87±0.04 for VS segmentation. MoDATTS also yields consistent improvements in Dice scores over baselines on a cross-modality adult brain glioma segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, where it reaches 95% of the performance of a target-supervised model when no target-modality annotations are available. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data, respectively, are additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.
•A new 3D domain adaptation framework is proposed for cross-modality segmentation.
•A semi-supervised component is integrated to mitigate the lack of annotations in the source and target domains.
•Based on effective vision transformer backbone architectures and self-training.
•Validated on two separate cross-modality segmentation tasks: brain tumor and vestibular schwannoma.
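The iterative self-training step described above, in schematic form: a segmentation model trained on annotated (synthetic) target-modality images pseudo-labels real unannotated target images, which are then added to the training set for the next round. `model`, `train`, and the datasets are hypothetical placeholders; the confidence threshold is an assumed heuristic, not the paper's exact procedure.

```python
# Iterative self-training with confidence-thresholded pseudo-labels.
import torch

def self_training(model, train, labeled_set, unlabeled_images,
                  rounds=3, thr=0.9):
    for _ in range(rounds):
        model = train(model, labeled_set)        # fit on current label set
        with torch.no_grad():
            for img in unlabeled_images:         # img: (C, D, H, W) volume
                probs = torch.sigmoid(model(img.unsqueeze(0)))[0]
                mask = (probs > thr).float()     # confident pseudo-label
                labeled_set.append((img, mask))  # grow the training set
    return model
```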
Radiotherapy planning for head and neck cancer patients requires the accurate delineation of several organs at risk (OARs) from planning CT images in order to determine a dose plan that reduces toxicity and spares normal tissue. However, training a single deep neural network for multiple organs is highly sensitive to class imbalance and to the variability in size between structures within the head and neck region. In this paper, we propose a single-class segmentation model for each OAR in order to handle class imbalance during training across output classes (one class per structure), given the severe disparity among the 12 OARs considered. Based on a U-Net architecture, we present a transfer learning approach between similar OARs to leverage common learned features, as well as a simple weight averaging strategy that initializes a model as the average of multiple models, each trained on a separate organ. Experiments performed on an internal dataset of 200 head and neck cancer patients treated with external beam radiotherapy show that the proposed model significantly improves on the baseline multi-organ segmentation model, which attempts to train several OARs simultaneously. The proposed model yields an overall Dice score of 0.75 ± 0.12 by using both transfer learning across OARs and the weight averaging strategy, indicating that reasonable segmentation performance can be achieved by leveraging additional data from surrounding structures while limiting the uncertainty in ground-truth annotations.
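The weight-averaging initialization described above, sketched for PyTorch: a new single-organ model starts from the parameter-wise average of several models previously trained on other organs. The model checkpoints and file names are hypothetical placeholders.

```python
# Initialize a model from the average of several per-organ checkpoints.
import torch

def average_state_dicts(state_dicts):
    avg = {}
    for key in state_dicts[0]:
        # integer buffers (e.g. BatchNorm counters) may need special handling
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Hypothetical usage: average checkpoints trained on separate organs, then
# fine-tune the averaged model on the new organ.
# organ_ckpts = [torch.load(p) for p in ["parotid.pt", "mandible.pt"]]
# new_model.load_state_dict(average_state_dicts(organ_ckpts))
```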