Objectives
Liver volumetry has emerged as an important tool in clinical practice. Liver volume is assessed primarily via organ segmentation of computed tomography (CT) and magnetic resonance imaging (MRI) images. The goal of this paper is to provide an accessible overview of liver segmentation targeted at radiologists and other healthcare professionals.
Methods
Using images from CT and MRI, this paper reviews the indications for liver segmentation, technical approaches used in segmentation software and the developing roles of liver segmentation in clinical practice.
Results
Liver segmentation for volumetric assessment is indicated prior to major hepatectomy, portal vein embolisation, associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) and transplant. Segmentation software can be categorised according to the amount of user input involved: manual, semi-automated and fully automated. Manual segmentation is considered the “gold standard” in clinical practice and research, but is tedious and time-consuming. Automated segmentation approaches are increasingly robust, but may suffer from certain segmentation pitfalls. Emerging applications of segmentation include surgical planning and integration with MRI-based biomarkers.
Conclusions
Liver segmentation has multiple clinical applications and is expanding in scope. Clinicians can employ semi-automated or fully automated segmentation options to more efficiently integrate volumetry into clinical practice.
Teaching points
• Liver volume is assessed via organ segmentation on CT and MRI examinations.
• Liver segmentation is used for volume assessment prior to major hepatic procedures.
• Segmentation approaches may be categorised according to the amount of user input involved.
• Emerging applications include surgical planning and integration with MRI-based biomarkers.
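Whatever the level of automation, the volumetric read-out itself is simple: multiply the voxel count of the liver mask by the physical volume of one voxel. A minimal sketch (the function name and toy mask are illustrative, not from the paper):

```python
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """Volume (mL) of a binary segmentation mask: voxel count times
    the physical volume of one voxel.

    mask       -- 3D array, nonzero where voxels belong to the liver
    spacing_mm -- voxel spacing (z, y, x) in millimetres
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return int(np.count_nonzero(mask)) * voxel_mm3 / 1000.0  # mm^3 -> mL

# Toy mask: a 10x10x10 block at 1 mm isotropic spacing = 1 mL
mask = np.zeros((32, 32, 32), dtype=np.uint8)
mask[5:15, 5:15, 5:15] = 1
print(liver_volume_ml(mask, (1.0, 1.0, 1.0)))  # 1.0
```

The same calculation underlies manual, semi-automated and fully automated tools; only the way the mask is produced differs.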
Sub-cortical brain structure segmentation using F-CNN's. Shakeri, Mahsa; Tsogkas, Stavros; Ferrante, Enzo, et al. 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), 04/2016.
In this paper we propose a deep learning approach for segmenting sub-cortical structures of the human brain in Magnetic Resonance (MR) image data. We draw inspiration from a state-of-the-art Fully-Convolutional Neural Network (F-CNN) architecture for semantic segmentation of objects in natural images, and adapt it to our task. Unlike previous CNN-based methods that operate on image patches, our model is applied to a full 2D image, without any alignment or registration steps at testing time. We further improve segmentation results by interpreting the CNN output as potentials of a Markov Random Field (MRF), whose topology corresponds to a volumetric grid. Alpha-expansion is used to perform approximate inference, imposing spatial volumetric homogeneity on the CNN priors. We compare the performance of the proposed pipeline with a similar system using Random Forest-based priors, as well as with state-of-the-art segmentation algorithms, and show promising results on two different brain MRI datasets.
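The MRF step can be sketched in miniature: softmax outputs become unary potentials via negative log-probabilities, and a Potts pairwise term charges a penalty for each disagreeing neighbour. The sketch below substitutes a simple synchronous iterated-conditional-modes update for alpha-expansion and uses periodic boundaries for brevity; all names and parameters are illustrative:

```python
import numpy as np

def mrf_smooth(probs, beta=1.0, iters=5):
    """Label a volume from CNN softmax outputs `probs` (D, H, W, K).

    -log(prob) serves as the unary potential; a Potts pairwise term
    charges `beta` per 6-connected neighbour with a different label.
    A synchronous ICM-style update stands in for alpha-expansion;
    np.roll gives periodic boundaries (fine for a toy example).
    """
    unary = -np.log(np.clip(probs, 1e-12, 1.0))
    labels = unary.argmin(axis=-1)              # CNN-only labelling
    for _ in range(iters):
        costs = np.empty_like(unary)
        for k in range(probs.shape[-1]):
            disagree = np.zeros(labels.shape)
            for ax in range(labels.ndim):
                for shift in (1, -1):
                    disagree += (np.roll(labels, shift, axis=ax) != k)
            costs[..., k] = unary[..., k] + beta * disagree
        labels = costs.argmin(axis=-1)
    return labels

# A single noisy voxel is flipped back to agree with its neighbours:
probs = np.zeros((4, 4, 4, 2))
probs[..., 0], probs[..., 1] = 0.9, 0.1
probs[2, 2, 2] = (0.4, 0.6)                    # weakly favours class 1
print(mrf_smooth(probs).sum())  # 0
```

The real pipeline optimises the same kind of energy over a volumetric grid, but with alpha-expansion, which has approximation guarantees that ICM lacks.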
Introduction
Magnetic resonance navigation (MRN) uses MRI gradients to steer magnetic drug-eluting beads (MDEBs) across vascular bifurcations. We aim to experimentally verify our theoretical force-balance model (gravitational, thrust, friction, buoyant and gradient steering forces) to improve the MRN targeting success rate.
Method
A single-bifurcation phantom (3 mm inner diameter) made of polyvinyl alcohol was connected to a cardiac pump at 0.8 mL/s and 60 beats/minute, with a glycerol solution used to reproduce the viscosity of blood. MDEB aggregates (25 ± 6 particles, 200 μm) were released into the main branch through a 5F catheter. The phantom was tilted from −10° to +25° relative to the horizontal to evaluate the MRN performance.
Results
The gravitational force was equivalent to 71.85 mT/m in a 3 T MRI. The gradient duration and amplitude followed a power relationship (amplitude = 78.717 × duration^(−0.525)). In vascular branches elevated by 15°, it was possible to steer 87% of injected aggregates when two MRI gradients were activated simultaneously (Gx = +26.5 mT/m, Gy = +18 mT/m at a 57% duty cycle), the flow velocity was reduced to 8 cm/s, and a residual pulsatile flow was maintained to minimize the force of friction.
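The reported power relationship between gradient amplitude and duration (amplitude = 78.717 × duration^(−0.525)) can be evaluated directly. The unit of duration is not stated in this excerpt, so the function below simply assumes whatever unit was used in the fit:

```python
def gradient_amplitude(duration):
    """Steering-gradient amplitude (mT/m) for a given gradient
    duration, from the reported fit:
        amplitude = 78.717 * duration**(-0.525)
    The duration unit is an assumption (whatever unit the fit used).
    """
    return 78.717 * duration ** -0.525

# Longer gradient pulses permit proportionally weaker amplitudes:
print(gradient_amplitude(1.0))  # 78.717
```

The negative exponent captures the trade-off the authors exploit: doubling the pulse duration lets the amplitude drop by a factor of 2^0.525 ≈ 1.44.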
Conclusion
Our experimental model can determine the maximum elevation angle at which MRN can perform steering in a single-bifurcation phantom simulating in vivo conditions.
Finding a noninvasive radiomic surrogate of tumor immune features could help identify patients more likely to respond to novel immune checkpoint inhibitors. In particular, CD73 is an ectonucleotidase that catalyzes the breakdown of extracellular AMP into immunosuppressive adenosine; this pathway can be blocked by therapeutic antibodies. High CD73 expression in colorectal cancer liver metastasis (CRLM) resected with curative intent is associated with early recurrence and shorter patient survival. The aim of this study was hence to evaluate whether machine learning analysis of preoperative liver CT scans could estimate high vs low CD73 expression in CRLM and whether such a radiomic score would have prognostic significance.
We trained an Attentive Interpretable Tabular Learning (TabNet) model to predict, from preoperative CT images, stratified expression levels of CD73 (CD73-high vs. CD73-low) assessed by immunofluorescence (IF) on tissue microarrays. Radiomic features were extracted from 160 segmented CRLM of 122 patients with matched IF data, preprocessed, and used to train the predictive model. We applied five-fold cross-validation and validated the performance on a hold-out test set.
TabNet provided areas under the receiver operating characteristic curve of 0.95 (95% CI 0.87 to 1.0) and 0.79 (0.65 to 0.92) on the training and hold-out test sets, respectively, and outperformed other machine learning models. The TabNet-derived score, termed rad-CD73, was positively correlated with CD73 histological expression in matched CRLM (Spearman's ρ = 0.6004; P < 0.0001). The median time to recurrence (TTR) and disease-specific survival (DSS) after CRLM resection in rad-CD73-high vs rad-CD73-low patients were 13.0 vs 23.6 months (P = 0.0098) and 53.4 vs 126.0 months (P = 0.0222), respectively. The prognostic value of rad-CD73 was independent of the standard clinical risk score, for both TTR (HR = 2.11, 95% CI 1.30 to 3.45, P < 0.005) and DSS (HR = 1.88, 95% CI 1.11 to 3.18, P = 0.020).
Our findings reveal promising results for non-invasive CT-scan-based prediction of CD73 expression in CRLM and warrant further validation as to whether rad-CD73 could assist oncologists as a biomarker of prognosis and response to immunotherapies targeting the adenosine pathway.
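The areas under the ROC curve reported above are rank statistics, and can be computed without any ML framework. A minimal sketch (TabNet itself is not reproduced here; the helper assumes binary labels and no tied scores):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney rank-sum identity: the probability
    that a randomly chosen positive case scores higher than a
    randomly chosen negative one. Assumes labels in {0, 1} and no
    tied scores (a real implementation would average tied ranks)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Three of the four positive/negative pairs are correctly ordered:
print(roc_auc([0, 1, 0, 1], [0.1, 0.2, 0.8, 0.9]))  # 0.75
```

The same statistic underlies the 0.95 training and 0.79 test AUCs quoted for rad-CD73, just computed over the cohort's predicted scores.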
Purpose
Respiratory motion of thoracic organs poses a severe challenge for the administration of image-guided radiotherapy treatments. Providing online and up-to-date volumetric information during free breathing can improve target tracking, ultimately increasing treatment efficiency and reducing toxicity to surrounding healthy tissue. In this work, a novel population-based generative network is proposed to address the problem of 3D target location prediction from 2D image-based surrogates during radiotherapy, thus enabling out-of-plane tracking of treatment targets using images acquired in real time.
Methods
The proposed model is trained to simultaneously create a low-dimensional manifold representation of 3D non-rigid deformations and to predict, ahead of time, the motion of the treatment target. The predictive capabilities of the model allow correcting target location errors that can arise due to system latency, using only a baseline volume of the patient anatomy. Importantly, the method does not require supervised information such as ground-truth registration fields, organ segmentation, or anatomical landmarks.
Results
The proposed architecture was evaluated on both free-breathing 4D MRI and ultrasound datasets. Potential challenges present in a realistic therapy, like different acquisition protocols, were taken into account by using an independent hold-out test set. Our approach enables 3D target tracking from single-view slices with a mean landmark error of 1.8 mm, 2.4 mm and 5.2 mm in volunteer MRI, patient MRI and US datasets, respectively, without requiring any prior subject-specific 4D acquisition.
Conclusions
This model presents several advantages over state-of-the-art approaches. Namely, it benefits from an explainable latent space with explicit respiratory phase discrimination. Thanks to the strong generalization capabilities of neural networks, it does not require establishing inter-subject correspondences. Once trained, it can be quickly deployed with an inference time of only 8 ms. The results show the capability of the network to predict future anatomical changes and track tumors in real time, yielding statistically significant improvements over related methods.
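As a toy analogue of this approach (not the paper's network), a low-dimensional motion manifold can be illustrated with PCA on flattened displacement fields, with ahead-of-time prediction as linear extrapolation in the latent space; all names are illustrative:

```python
import numpy as np

def fit_motion_manifold(deformations, k=2):
    """PCA stand-in for a learned manifold of respiratory motion.

    deformations -- (T, N) array, one flattened displacement field
    per time step. Returns the mean field, a (k, N) basis and the
    (T, k) latent trajectory of the breathing cycle."""
    mean = deformations.mean(axis=0)
    centred = deformations - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:k]
    codes = centred @ basis.T
    return mean, basis, codes

def predict_next_field(mean, basis, codes):
    """Compensate system latency by extrapolating the last latent
    step one frame forward, then decoding to a displacement field."""
    next_code = codes[-1] + (codes[-1] - codes[-2])
    return mean + next_code @ basis

# Motion that grows linearly along one direction is predicted exactly:
v = np.array([1.0, 2.0, 0.0, -1.0])
fields = np.outer(np.arange(5, dtype=float), v)
mean, basis, codes = fit_motion_manifold(fields)
pred = predict_next_field(mean, basis, codes)
```

The generative network replaces both the linear decoder and the linear extrapolator with learned nonlinear counterparts, which is what allows it to handle realistic, non-periodic breathing.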
Multi-parametric MR image synthesis is an effective approach for several clinical applications where specific modalities may be unavailable to reach a diagnosis. While technical and practical conditions limit the acquisition of new modalities for a patient, multimodal image synthesis combines multiple available modalities to synthesize the desired modality.
In this paper, we propose a new multi-parametric magnetic resonance imaging (MRI) synthesis model, which generates the target MRI modality from two other available modalities, in pathological MR images. We first adopt a contrastive learning approach that trains an encoder network to extract a suitable feature representation of the target space. Secondly, we build a synthesis network that generates the target image from a common feature space that approximately matches the contrastive learned space of the target modality. We incorporate a bidirectional feature learning strategy that learns a multimodal feature matching function, in two opposite directions, to transform the augmented multichannel input in the learned target space. Overall, our training synthesis loss is expressed as the combination of the reconstruction loss and a bidirectional triplet loss, using a pair of features.
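The triplet component of the training loss described above can be illustrated with the standard margin-based formulation; the exact feature pairing of the paper's bidirectional variant is not specified in this excerpt, so only the generic term is sketched:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on feature vectors: the anchor is pulled
    toward the positive and pushed away from the negative by at least
    `margin`, using squared Euclidean distances."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)

# A well-separated triplet incurs no loss; a confusable one does:
a = np.array([0.0, 0.0])
print(triplet_loss(a, np.array([0.1, 0.0]), np.array([3.0, 0.0])))  # 0.0
print(triplet_loss(a, np.array([2.0, 0.0]), np.array([0.1, 0.0])))  # 4.99
```

In the paper's bidirectional setting, two such terms, one per mapping direction, are combined with the reconstruction loss to align the synthesis features with the contrastively learned target space.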
Compared to other state-of-the-art methods, the proposed model achieved an average improvement rate of 3.9% and 3.6% on the IXI and BraTS'18 datasets, respectively. On the tumor BraTS'18 dataset, our model records the highest Dice score, 0.793 (0.04), for preserving the synthesized tumor regions in the segmented images.
Validation of the proposed model on two public datasets confirms the efficiency of the model in generating different MR contrasts and preserving tumor areas in the synthesized images. In addition, the model is flexible enough to generate head and neck CT images from MR acquisitions. In future work, we plan to validate the model using interventional iMRI contrasts for MR-guided neurosurgery applications, as well as for radiotherapy applications. Clinical measurements will be collected during surgery to evaluate the model's performance.
Purpose
Cancer confirmation in the operating room (OR) is crucial to improve local control in cancer therapies. Histopathological analysis remains the gold standard, but there is a lack of real-time in situ cancer confirmation to support assessment of margins or remnant tissue. Raman spectroscopy (RS), as a label-free optical technique, has proven its power in cancer detection and, when integrated into a robotic assistance system, can positively impact the efficiency of procedures and the quality of life of patients by avoiding potential recurrence.
Methods
A workflow is proposed where a 6-DOF robotic system (optical camera + MECA500 robotic arm) assists the characterization of fresh tissue samples using RS. Three calibration methods are compared for the robot, and the temporal efficiency is compared with standard hand-held analysis. For healthy/cancerous tissue discrimination, a 1D-convolutional neural network is proposed and tested on three ex vivo datasets (brain, breast, and prostate) containing processed RS and histopathology ground truth.
Results
The robot achieves a minimum error of 0.20 mm (0.12) on a set of 30 test landmarks and demonstrates significant time reduction in 4 of the 5 proposed tasks. The proposed classification model can identify brain, breast, and prostate cancer with an accuracy of 0.83 (0.02), 0.93 (0.01), and 0.71 (0.01), respectively.
Conclusion
Automated RS analysis with deep learning demonstrates promising classification performance compared to commonly used support vector machines. Robotic assistance in tissue characterization can contribute to highly accurate, rapid, and robust biopsy analysis in the OR. These two elements are an important step toward real-time cancer confirmation using RS and OR integration.
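The core operation of a 1D-CNN on a Raman spectrum can be sketched with plain NumPy: each learned kernel slides along the spectral axis, the response is rectified, and global pooling yields one feature per kernel. The kernel values below are illustrative, not trained weights:

```python
import numpy as np

def conv1d_features(spectrum, kernels):
    """One 1D-CNN building block for a Raman spectrum: valid
    cross-correlation with each kernel, ReLU, then global
    max-pooling, producing one feature per kernel that a downstream
    classifier (dense layers in a real network) can consume."""
    feats = []
    for kernel in kernels:
        response = np.correlate(spectrum, kernel, mode="valid")
        feats.append(np.maximum(response, 0.0).max())
    return np.array(feats)

# A band-like peak strongly excites a peak-shaped (Laplacian) kernel:
spectrum = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(conv1d_features(spectrum, [np.array([-1.0, 2.0, -1.0])]))  # [2.]
```

A trained network learns many such kernels, letting it pick out the Raman bands that discriminate cancerous from healthy tissue.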
Prostate cancer (PC) is the most frequently diagnosed cancer in North American men. Pathologists are in critical need of accurate biomarkers to characterize PC, particularly to confirm the presence of intraductal carcinoma of the prostate (IDC-P), an aggressive histopathological variant for which therapeutic options are now available. Our aim was to identify IDC-P with Raman micro-spectroscopy (RμS) and machine learning technology following a protocol suitable for routine clinical histopathology laboratories.
We used RμS to differentiate IDC-P from PC, as well as PC and IDC-P from benign tissue, on formalin-fixed paraffin-embedded first-line radical prostatectomy specimens (embedded in tissue microarrays, TMAs) from 483 patients treated in 3 Canadian institutions between 1993 and 2013. The main measures were the presence or absence of IDC-P and of PC, regardless of the clinical outcomes. The median age at radical prostatectomy was 62 years. Most of the specimens from the first cohort (Centre hospitalier de l'Université de Montréal) were of Gleason score 3 + 3 = 6 (51%), while most of the specimens from the 2 other cohorts (University Health Network and Centre hospitalier universitaire de Québec-Université Laval) were of Gleason score 3 + 4 = 7 (51% and 52%, respectively). Most of the 483 patients were of pT2 stage (44%-69%), and pT3a (22%-49%) was more frequent than pT3b (9%-12%). To investigate the prostate tissue of each patient, 2 consecutive sections of each TMA block were cut. The first section was transferred onto a glass slide to perform immunohistochemistry with H&E counterstaining for cell identification. The second section was placed on an aluminum slide, dewaxed, and then used to acquire an average of 7 Raman spectra per specimen (between 4 and 24 Raman spectra, 4 acquisitions/TMA core). Raman spectra of each cell type were then analyzed to retrieve tissue-specific molecular information and to generate classification models using machine learning technology. Models were trained and cross-validated using data from 1 institution. Accuracy, sensitivity, and specificity were 87% ± 5%, 86% ± 6%, and 89% ± 8%, respectively, to differentiate PC from benign tissue, and 95% ± 2%, 96% ± 4%, and 94% ± 2%, respectively, to differentiate IDC-P from PC.
The trained models were then tested on Raman spectra from 2 independent institutions, reaching accuracies, sensitivities, and specificities of 84% and 86%, 84% and 87%, and 81% and 82%, respectively, to diagnose PC, and of 85% and 91%, 85% and 88%, and 86% and 93%, respectively, for the identification of IDC-P. IDC-P could further be differentiated from high-grade prostatic intraepithelial neoplasia (HGPIN), a pre-malignant intraductal proliferation that can be mistaken as IDC-P, with accuracies, sensitivities, and specificities > 95% in both training and testing cohorts. As we used stringent criteria to diagnose IDC-P, the main limitation of our study is the exclusion of borderline, difficult-to-classify lesions from our datasets.
In this study, we developed classification models for the analysis of RμS data to differentiate IDC-P, PC, and benign tissue, including HGPIN. RμS could be a next-generation histopathological technique used to reinforce the identification of high-risk PC patients and lead to more precise diagnosis of IDC-P.
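The accuracy, sensitivity and specificity figures reported above all follow from the binary confusion matrix; a minimal helper (names illustrative):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from paired binary
    labels (1 = lesion of interest, 0 = comparison tissue)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return ((tp + tn) / len(y_true),   # accuracy
            tp / (tp + fn),            # sensitivity
            tn / (tn + fp))            # specificity

# One false negative and one false positive out of six cases:
acc, sens, spec = diagnostic_metrics([1, 1, 1, 0, 0, 0],
                                     [1, 1, 0, 0, 0, 1])
```

Reporting all three separately matters here: a model that calls everything IDC-P would score perfect sensitivity but zero specificity, which accuracy alone can mask.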
Early identification of dementia in the early or late stages of mild cognitive impairment (MCI) is crucial for a timely diagnosis and slowing down the progression of Alzheimer’s disease (AD). Positron emission tomography (PET) is considered a highly powerful diagnostic biomarker, but few approaches have investigated the efficacy of focusing on localized PET-active areas for classification purposes. In this work, we propose a pipeline using learned features from semantically labelled PET images to perform group classification. A deformable multimodal PET-MRI registration method is employed to fuse an annotated MNI template to each patient-specific PET scan, generating a fully labelled volume from which 10 common regions of interest used for AD diagnosis are extracted. The method was evaluated on 660 subjects from the ADNI database, yielding a classification accuracy of 91.2% for AD versus normal controls (NC) when using random forests combining features from cross-sectional and follow-up exams. A considerable improvement in the early versus late MCI classification accuracy was achieved using FDG-PET compared to the AV-45 compound, yielding a rate of 72.5%. The pipeline demonstrates the potential of exploiting longitudinal multiregion PET features to improve cognitive assessment.
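The regional feature extraction this pipeline relies on, pooling PET uptake inside atlas-defined regions of interest before classification, can be sketched as follows (function name and region ids are illustrative):

```python
import numpy as np

def regional_uptake(pet, atlas, roi_ids):
    """Mean PET uptake per atlas region.

    pet     -- 3D array of PET intensities
    atlas   -- 3D integer label volume, e.g. from registering an
               annotated MNI template to the patient's PET scan
    roi_ids -- region labels to summarise; each mean becomes one
               feature for a classifier such as a random forest
    """
    return np.array([pet[atlas == r].mean() for r in roi_ids])

# Toy volume with two labelled regions:
pet = np.arange(8, dtype=float).reshape(2, 2, 2)
atlas = np.array([[[1, 1], [2, 2]],
                  [[1, 1], [2, 2]]])
print(regional_uptake(pet, atlas, [1, 2]))  # [2.5 4.5]
```

Concatenating such vectors from cross-sectional and follow-up exams gives exactly the kind of longitudinal multiregion feature set the abstract feeds to its random forests.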