In radiation oncology, patient risk stratification guides the intensification of therapy and the choice between systemic and regional treatments, all of which helps to improve patient outcomes and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of deep models allows them to combine high-level medical imaging data for outcome prediction, they lack the generalization needed for use across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed to predict the occurrence probabilities of distant metastasis (DM), locoregional recurrence (LR), and overall survival (OS) within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model can process multi-modal inputs of variable scan length and integrate structured patient data into the prediction. These proposed architectural features and additional modalities all serve to extract more information from the available data when access to additional samples is limited. The model was trained on the public Cancer Imaging Archive (TCIA) Head-Neck-PET-CT dataset, consisting of 298 patients undergoing curative radiotherapy or chemoradiotherapy, acquired from 4 different institutions. The model was further validated on an internal retrospective dataset of 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments was performed to test the utility of the proposed model characteristics, achieving an AUROC of [formula: see text], [formula: see text] and [formula: see text] for DM, LR and OS respectively on the public TCIA Head-Neck-PET-CT dataset.
External validation was performed on a retrospective dataset of 371 patients, achieving an AUROC of [formula: see text] in all outcomes. To test model generalization across sites, a validation scheme combining single-site holdout and cross-validation over both datasets was used. The mean accuracy across the 4 institutions was [formula: see text], [formula: see text] and [formula: see text] for DM, LR and OS respectively. The proposed model demonstrates an effective method for tumor outcome prediction in multi-site, multi-modal settings, combining volumetric data with structured patient clinical data.
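The self-attention component named in PreSANet is not specified in detail here; as a minimal, hypothetical sketch, scaled dot-product self-attention over a sequence of feature vectors looks like the following (all shapes and projection matrices are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x: (n, d) feature sequence; wq, wk, wv: (d, d) projection matrices.
    Returns the attended features, shape (n, d).
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])          # (n, n) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax: rows sum to 1
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out = self_attention(x, *w)
```

With zero query/key projections the attention weights become uniform and each output row is simply the mean of the value vectors, which is a quick sanity check on the softmax.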
• Free-breathing motion compensation aimed at image-guided radiotherapy.
• Novel framework with multi-scale feature extraction and spatial transformation for in-plane motion prediction and future frame generation.
• Prediction of the next k deformations from a given input image sequence.
• Validation on different imaging modalities: MRI, US and CT.
External beam radiotherapy is a commonly used treatment option for patients with cancer in the thoracic and abdominal regions. However, respiratory motion constitutes a major limitation during the intervention: it may cause the actual anatomy to stray from the pre-defined target and trajectories determined during planning. We propose a novel framework to predict in-plane organ motion. We introduce a recurrent encoder-decoder architecture that leverages feature representations at multiple scales. It simultaneously learns to map dense deformations between consecutive images of a given input sequence and to extrapolate them through time. Subsequently, several cascade-arranged spatial transformers use the predicted deformation fields to generate a future image sequence. We propose a composite loss function that minimizes the difference between ground-truth and predicted images while maintaining smooth deformations. Our model is trained end-to-end in an unsupervised manner, so it does not require additional information beyond image data. Moreover, no pre-processing steps such as segmentation or registration are needed. We report results on 85 different cases (healthy subjects and patients) belonging to multiple datasets across different imaging modalities. Experiments investigated the importance of the proposed multi-scale architecture design and the effect of increasing the number of predicted frames on the overall accuracy of the model. The proposed model predicted vessel positions in the next temporal image with a median accuracy of 0.45 (0.55) mm, 0.45 (0.74) mm and 0.28 (0.58) mm in the MRI, US and CT datasets, respectively. The obtained results show the strong potential of the model, achieving accurate matching between predicted and target images on several imaging modalities.
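The spatial-transformer step that warps an image with a predicted deformation field, and the composite loss of an image-matching term plus a smoothness penalty, can be sketched as follows. This is a simplified dense-warp illustration, not the authors' implementation; scipy's bilinear resampling stands in for the differentiable sampler:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, flow):
    """Warp a 2-D image with a dense deformation field.

    image: (h, w) array; flow: (2, h, w) per-pixel displacement (dy, dx).
    Samples image at (y + dy, x + dx) with bilinear interpolation.
    """
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)   # identity sampling grid
    coords = grid + flow                      # displaced coordinates
    return map_coordinates(image, coords, order=1, mode='nearest')

def composite_loss(pred, target, flow, smooth_weight=0.01):
    """Image-difference term plus a smoothness penalty on the flow gradients."""
    mse = np.mean((pred - target) ** 2)
    gy, gx = np.gradient(flow, axis=(1, 2))
    return mse + smooth_weight * np.mean(gy ** 2 + gx ** 2)
```

A zero deformation field leaves the image unchanged and yields zero loss against itself, which is the basic consistency property the learned deformations build on.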
According to the World Health Organization, cardiovascular diseases are the leading cause of death worldwide. Many coronary diseases involve the left ventricle; therefore, estimating several functional parameters from a prior segmentation of this structure can be helpful in diagnosis. Although a large number of automated methods have been proposed, left ventricle segmentation in cardiac MRI images remains an open problem. In this work we propose a deep fully convolutional neural network architecture to address this issue and assess its performance. The model was trained end-to-end in a supervised learning stage, from whole-image inputs and ground truths, to make a per-pixel classification that segments the myocardium. For its design, development and experimentation, the Caffe deep learning framework was used on an NVidia Quadro K4200 Graphics Processing Unit. Training and testing were carried out using 10-fold cross-validation with short-axis images. In addition, the performance of six optimization methods was compared. The proposed model was validated on 45 datasets from the Sunnybrook database using the Dice coefficient, Average Perpendicular Distance (APD) and percentage of good contours (GC) metrics, and compared with other state-of-the-art approaches. The results show the robustness and feasibility of the proposed method.
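The Dice coefficient used for validation can be computed directly from binary masks; this is the standard definition, not code from the paper:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

For example, a predicted mask covering one of two ground-truth pixels scores 2·1/(2+1) ≈ 0.667.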
In developed countries, colorectal cancer is the second leading cause of cancer-related mortality. Chemotherapy is considered a standard treatment for colorectal liver metastases (CLM). Among patients who develop CLM, assessment of the response to chemotherapy is often required to determine the need for second-line chemotherapy and eligibility for surgery. However, while FOLFOX-based regimens are typically used for CLM treatment, the identification of responsive patients remains elusive. Computer-aided diagnosis systems may provide insight into the classification of liver metastases identified on diagnostic images. In this paper, we propose a fully automated framework based on deep convolutional neural networks (DCNN) which first differentiates treated and untreated lesions to identify new lesions appearing on computed tomography (CT) scans, followed by a fully connected neural network which predicts, from untreated lesions on pre-treatment CT of patients with CLM undergoing chemotherapy, their response to a FOLFOX with bevacizumab regimen as first-line treatment. The ground truth for assessing treatment response was the histopathology-determined tumor regression grade. Our DCNN approach, trained on 444 lesions from 202 patients, achieved accuracies of 91% for differentiating treated and untreated lesions and 78% for predicting the response to the FOLFOX-based chemotherapy regimen. Experimental results showed that our method outperformed traditional machine learning algorithms and may allow for the early detection of non-responsive patients.
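The two-stage design above (first filter out treated lesions, then predict response only for the untreated ones) can be outlined with placeholder classifiers; the functions and lesion fields below are hypothetical stand-ins, not the trained networks:

```python
def two_stage_predict(lesions, is_untreated, predict_response):
    """Route lesions through a two-stage pipeline.

    is_untreated: stand-in for the DCNN treated/untreated classifier.
    predict_response: stand-in for the fully connected response predictor.
    Returns (lesion, response_probability) pairs for untreated lesions only.
    """
    untreated = [l for l in lesions if is_untreated(l)]
    return [(l, predict_response(l)) for l in untreated]

# Toy stand-ins: classify by a 'treated' flag, score by lesion size.
lesions = [{'id': 1, 'treated': True,  'size': 12},
           {'id': 2, 'treated': False, 'size': 30},
           {'id': 3, 'treated': False, 'size': 8}]
results = two_stage_predict(lesions,
                            lambda l: not l['treated'],
                            lambda l: min(1.0, l['size'] / 40))
```

The point of the split is that the response predictor only ever sees lesions the first stage deemed untreated, mirroring the paper's pipeline structure.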
Background: accurate detection and classification of breast cancer through histopathological diagnosis is of vital importance for effective treatment of the disease. Among the types of breast cancer, invasive ductal carcinoma is the most frequent. Visual analysis of tissue samples under the microscope is a time-consuming, observer-dependent manual process. However, in many countries, including Cuba, software tools to assist diagnosis are scarce.
Objective: to develop a software tool to detect breast cancer tissue of the invasive ductal carcinoma subtype in histopathological images.
Methods: the tool was implemented in Python and includes methods for detecting invasive ductal carcinoma in histopathological images, based on color and texture feature extraction algorithms combined with a random forest classifier.
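The pipeline described in the Methods (color and texture features feeding a random forest) can be sketched with scikit-learn on synthetic data; the histogram features, gradient-energy texture cue, and toy labels below are illustrative assumptions, not the tool's actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_texture_features(patch):
    """Per-channel color histograms plus a gradient-energy texture cue."""
    hists = [np.histogram(patch[..., c], bins=8, range=(0, 1))[0]
             for c in range(patch.shape[-1])]
    gy, gx = np.gradient(patch.mean(axis=-1))
    return np.concatenate(hists + [[np.mean(gy**2 + gx**2)]])

rng = np.random.default_rng(0)
# Synthetic RGB patches: "cancerous" ones are darker (heavier stain uptake).
benign = rng.uniform(0.5, 1.0, size=(40, 16, 16, 3))
malignant = rng.uniform(0.0, 0.5, size=(40, 16, 16, 3))
X = np.array([color_texture_features(p)
              for p in np.concatenate([benign, malignant])])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

On these trivially separable synthetic patches the forest fits the training set perfectly; the real tool's features and validation protocol are of course richer.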
Results: the open-source tool provides a range of facilities for reading, writing and visualizing histopathological images, automatic and manual delineation of cancerous regions, management of patient diagnostic data, and remote collaborative evaluation. It was evaluated on a database of 162 images from patients diagnosed with invasive ductal carcinoma, obtaining a balanced accuracy of 84% and an F1 score of 75%.
Conclusions: the tool enabled interactive, fast, reproducible and collaborative analysis through a simple, intuitive graphical interface. Future versions are expected to include new incremental machine learning methods for the analysis of digital histopathological images.
Adapting visual object detectors to operational target domains is a challenging task, commonly achieved with unsupervised domain adaptation (UDA) methods. Recent studies have shown that when the labeled dataset comes from multiple source domains, treating them as separate domains and performing multi-source domain adaptation (MSDA) improves accuracy and robustness over blending these source domains and performing UDA. For adaptation, existing MSDA methods learn domain-invariant parameters and domain-specific parameters (for each source domain). However, unlike single-source UDA methods, learning domain-specific parameters makes them grow significantly with the number of source domains. This paper proposes a novel MSDA method called Prototype-based Mean Teacher (PMT), which uses class prototypes instead of domain-specific subnets to encode domain-specific information. These prototypes are learned using a contrastive loss that aligns the same categories across domains and pushes different categories far apart. Given the use of prototypes, the number of parameters required by our PMT method does not increase significantly with the number of source domains, reducing memory issues and possible overfitting. Empirical studies indicate that PMT outperforms state-of-the-art MSDA methods on several challenging object detection datasets. Our code is available at https://github.com/imatif17/Prototype-Mean-Teacher.
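The prototype idea can be illustrated with a simplified contrastive objective over per-domain class prototypes. This is a schematic InfoNCE-style numpy version; the actual PMT loss, temperature, and prototype update rule are not reproduced here:

```python
import numpy as np

def prototype_contrastive_loss(protos, labels, temp=0.1):
    """InfoNCE-style loss over L2-normalized class prototypes.

    protos: (n, d) prototypes pooled across source domains;
    labels: class id of each prototype. Same-class prototypes are
    pulled together, different-class prototypes pushed apart.
    """
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = p @ p.T / temp                     # pairwise cosine similarities
    loss, n = 0.0, len(labels)
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        others = [j for j in range(n) if j != i]
        logsumexp = np.log(np.sum(np.exp(sim[i, others])))
        loss -= np.mean([sim[i, j] - logsumexp for j in pos])
    return loss / n
```

A configuration where same-class prototypes from different domains are already aligned should score a lower loss than one where they point in different directions, which is the behavior the contrastive term rewards.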
Breast cancer is the most diagnosed cancer and the predominant cause of cancer death in women worldwide. Imaging techniques such as breast cancer pathology help in the diagnosis and monitoring of the disease. However, identifying malignant cells can be challenging given the high heterogeneity of tissue absorption of staining agents. In this work, we present a novel approach for discriminating Invasive Ductal Carcinoma (IDC) cells in histopathology slides. We propose a model derived from the Inception architecture, introducing a multi-level batch normalization module between each convolutional step. This module is used as a base block for feature extraction in a CNN architecture. On the open IDC dataset we obtained a balanced accuracy of 0.89 and an F1 score of 0.90, surpassing recent state-of-the-art classification algorithms tested on this public dataset.
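The batch normalization operation underlying the module above can be shown in isolation: normalize each feature channel over the batch, then apply a learned scale and shift. This is the standard forward pass in numpy, not the paper's full Inception-derived block:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization forward pass.

    x: (batch, channels) activations. Each channel is normalized to
    ~zero mean and unit variance before the affine transform
    gamma * x_hat + beta (gamma, beta are the learned parameters).
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
out = batch_norm(rng.normal(loc=3.0, scale=2.0, size=(64, 8)))
```

Normalizing between convolutional steps keeps activation statistics stable, which is the property the multi-level module exploits for heterogeneous stain intensities.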
In medical imaging, radiological scans of different modalities serve to enhance different sets of features for clinical diagnosis and treatment planning. This variety enriches the source information that can be used for outcome prediction. Deep learning methods are particularly well-suited for feature extraction from high-dimensional inputs such as images. In this work, we apply a CNN classification network augmented with an FCN preprocessor sub-network to a public TCIA head and neck cancer dataset. The training goal is survival prediction for radiotherapy cases based on pre-treatment FDG PET-CT scans acquired across 4 different hospitals. We show that the preprocessor sub-network, in conjunction with aggregated residual connections, leads to improvements over state-of-the-art results when combining both CT and PET input images.
Cardiovascular diseases are the leading cause of death worldwide, accounting for 17.3 million deaths per year. The electrocardiogram (ECG) is a non-invasive technique widely used for the detection of cardiac diseases. To increase diagnostic sensitivity, the ECG is acquired during exercise stress tests or in ambulatory settings. Under these acquisition conditions, the ECG is strongly affected by several types of noise, mainly baseline wander (BLW). In this work, nine widely used methods for eliminating BLW were implemented: interpolation using cubic splines, finite impulse response (FIR) filtering, infinite impulse response (IIR) filtering, least-mean-square adaptive filtering, moving-average filtering, independent component analysis, interpolation and successive subtraction of median values in the RR interval, empirical mode decomposition, and wavelet filtering. For the quantitative evaluation, the following similarity metrics were used: absolute maximum distance, sum of squares of distances, and percentage root-mean-square difference. Several experiments were performed using synthetic ECG signals generated by the ECGSYM software, real ECG signals from the QT Database, artificial BLW generated by software, and real BLW from the Noise Stress Test Database. The best results were obtained by the method based on an FIR high-pass filter with a cut-off frequency of 0.67 Hz.
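The winning approach, an FIR high-pass filter with a 0.67 Hz cutoff, can be sketched with scipy; the sampling rate, filter length, and synthetic signals below are assumptions for illustration, not the study's configuration:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 360         # assumed sampling rate (Hz)
cutoff = 0.67    # cut-off frequency reported in the text (Hz)
numtaps = 3001   # assumed filter length (odd, as required for a high-pass FIR)

# High-pass FIR design (pass_zero=False rejects the DC/baseline band).
taps = firwin(numtaps, cutoff, fs=fs, pass_zero=False)

t = np.arange(0, 40, 1 / fs)
ecg_like = np.sin(2 * np.pi * 10 * t)          # stand-in for ECG content
baseline = 0.8 * np.sin(2 * np.pi * 0.2 * t)   # slow baseline wander
noisy = ecg_like + baseline

clean = filtfilt(taps, [1.0], noisy)           # zero-phase filtering
```

Away from the signal edges, the filtered trace should closely track the ECG-like component while the 0.2 Hz drift is suppressed, which is exactly the behavior the similarity metrics in the study quantify.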
Colorectal liver metastasis is one of the most aggressive liver malignancies. While the definition of lesion type from CT images determines the diagnosis and therapeutic strategy, discriminating between cancerous and non-cancerous lesions is critical and requires highly skilled expertise, experience and time. In the present work we introduce an end-to-end deep learning approach to assist in discriminating between liver metastases from colorectal cancer and benign cysts in abdominal CT images of the liver. Our approach incorporates the efficient feature extraction of InceptionV3 combined with residual connections and pre-trained weights from ImageNet. The architecture also includes fully connected classification layers to generate a probabilistic output of lesion type. We use an in-house clinical biobank with 230 liver lesions originating from 63 patients. With an accuracy of 0.96 and an F1-score of 0.92, the results obtained with the proposed approach surpass state-of-the-art methods. Our work provides the basis for incorporating machine learning tools in specialized radiology software to assist physicians in the early detection and treatment of liver lesions.
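The final fully connected layers producing a probabilistic lesion-type output amount to an affine map followed by a softmax. The following is a generic sketch with made-up weights and feature sizes, not the trained network:

```python
import numpy as np

def classification_head(features, w, b):
    """Fully connected layer plus softmax over lesion types.

    features: (d,) feature vector from the backbone; w: (d, k); b: (k,).
    Returns a probability vector over the k classes summing to 1.
    """
    logits = features @ w + b
    logits -= logits.max()        # numerical stability before exponentiation
    probs = np.exp(logits)
    return probs / probs.sum()

rng = np.random.default_rng(0)
feats = rng.normal(size=16)       # stand-in for backbone (e.g. InceptionV3) features
w, b = rng.normal(size=(16, 2)), np.zeros(2)
p = classification_head(feats, w, b)   # two classes: metastasis vs. benign cyst
```

Because the output is a proper probability distribution, a decision threshold on it can be tuned to trade sensitivity against specificity in the radiology workflow.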