Low-rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wise sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wise sparse but structurally sparse. Meanwhile, a robust analysis mechanism is required to handle background regions or foreground movements at varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as composed of two terms: a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, by virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging, well-known datasets demonstrate that the proposed approach outperforms state-of-the-art methods and works effectively on a wide range of complex videos.
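The low-rank-plus-structured-sparse decomposition described above can be sketched as a simple alternating scheme: singular value thresholding recovers the low-rank background, and a group soft-thresholding step (here an ℓ2,1 norm over fixed row blocks, a stand-in assumption; the paper's actual structured norm and solver may differ) recovers the structurally sparse foreground.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def group_soft_threshold(X, tau, groups):
    """Proximal operator of the l2,1 group norm: shrinks whole groups at once."""
    S = np.zeros_like(X)
    for g in groups:
        norm = np.linalg.norm(X[g])
        if norm > tau:
            S[g] = X[g] * (1.0 - tau / norm)
    return S

def structured_rpca(D, lam=0.5, tau=1.0, n_iter=50, group_rows=4):
    """Toy alternating minimization for D ~ L (low-rank) + S (structured sparse)."""
    groups = [slice(i, i + group_rows) for i in range(0, D.shape[0], group_rows)]
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, tau)                      # update background estimate
        S = group_soft_threshold(D - L, lam, groups)  # update foreground estimate
    return L, S
```

Because whole groups are shrunk together, isolated pixel noise is suppressed while spatially coherent foreground blocks survive, which is exactly the motivation for structured rather than pixel-wise sparsity.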
Image-based precision medicine techniques can be used to better treat cancer patients. However, the gigapixel resolution of Whole Slide Histopathological Images (WSIs) makes traditional survival models computationally infeasible. These models usually adopt manually labeled discriminative patches from regions of interest (ROIs) and are unable to learn discriminative patches directly from WSIs. We argue that a small set of patches cannot fully represent a patient's survival status due to tumor heterogeneity. Another challenge is that survival prediction usually comes with insufficient training patient samples. In this paper, we propose an effective Whole Slide Histopathological Images Survival Analysis framework (WSISA) to overcome the above challenges. To exploit survival-discriminative patterns from WSIs, we first extract hundreds of patches from each WSI by adaptive sampling and then group these patches into different clusters. We then train an aggregation model to make patient-level predictions based on cluster-level Deep Convolutional Survival (DeepConvSurv) prediction results. Unlike existing state-of-the-art image-based survival models, which extract features from a few patches in small regions of WSIs, the proposed framework can efficiently exploit all discriminative patterns in WSIs to predict patients' survival status. To the best of our knowledge, this has not been shown before. We apply our method to survival prediction of glioma and non-small-cell lung cancer using three datasets. Results demonstrate that the proposed framework significantly improves prediction performance compared with existing state-of-the-art survival methods.
Extracting precise stellar labels is crucial for large spectroscopic surveys like the Sloan Digital Sky Survey (SDSS) and APOGEE. In this paper, we report the newest implementation of StellarGAN, a data-driven method based on generative adversarial networks (GANs). Using 1D operators like convolution, the 2D GAN is modified into StellarGAN. This allows it to learn the relevant features of 1D stellar spectra without needing labels for specific stellar types. We test the performance of StellarGAN on different stellar spectra trained on SDSS and APOGEE data sets. Our results reveal that StellarGAN attains the highest overall F1-score on SDSS data sets (F1-score = 0.82, 0.77, 0.74, 0.53, 0.51, 0.61, and 0.55 for O-type, B-type, A-type, F-type, G-type, K-type, and M-type stars) when the signal-to-noise ratio (S/N) is low (90% of the spectra have an S/N < 50), with 1% of labeled spectra used for training. Using 50% of the labeled spectral data for training, StellarGAN consistently demonstrates performance that surpasses or is comparable to that of other data-driven models, as evidenced by F1-scores of 0.92, 0.77, 0.77, 0.84, 0.84, 0.80, and 0.67. In the case of APOGEE (90% of the spectra have an S/N < 500), our method is also superior in comprehensive performance (F1-score = 0.53, 0.60, 0.56, 0.56, and 0.78 for A-type, F-type, G-type, K-type, and M-type stars) with 1% of labeled spectra for training, demonstrating its ability to learn from a limited number of labeled spectra. Our proposed method is also applicable to other types of data that need to be classified (such as gravitational-wave signals, light curves, etc.).
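The core adaptation mentioned above, replacing 2D image operators with 1D ones so the network can consume spectra, boils down to applying convolution along a single axis. A minimal sketch of a valid-mode 1D convolution (the building block such a network would stack; layer sizes and strides here are illustrative, not taken from the paper):

```python
import numpy as np

def conv1d(x, k, stride=1):
    """Valid-mode 1D convolution (cross-correlation), the 1D analogue of a
    2D convolutional layer applied to a flux-vs-wavelength spectrum."""
    out_len = (len(x) - len(k)) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + len(k)], k)
                     for i in range(out_len)])

# A length-2 averaging-style kernel slid across a toy "spectrum":
features = conv1d(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 1.0]))
# features == [3.0, 5.0, 7.0]
```

Stacking such layers (with nonlinearities and pooling) lets the discriminator learn spectral line features directly from unlabeled 1D spectra.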
•A Whole Slide Images-based survival model is proposed which doesn't need ROI annotations.
•The proposed model is more adaptive and flexible than recent WSI-based survival learning approaches.
•The proposed approach has better interpretability in locating important patterns that contribute to accurate cancer survival predictions.
Traditional image-based survival prediction models rely on discriminative patch labeling, which makes those methods hard to scale to large datasets. Recent studies have shown that the Multiple Instance Learning (MIL) framework is useful for histopathological image classification when no annotations are available. Different from current image-based survival models that are limited to key patches or clusters derived from Whole Slide Images (WSIs), we propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL), introducing both a siamese MI-FCN and attention-based MIL pooling to efficiently learn imaging features from the WSI and then aggregate WSI-level information to the patient level. Attention-based aggregation is more flexible and adaptive than the aggregation techniques in recent survival models. We evaluated our methods on two large cancer whole slide image datasets, and our results suggest that the proposed approach is more effective and suitable for large datasets, with better interpretability in locating important patterns and features that contribute to accurate cancer survival predictions. The proposed framework can also be used to assess an individual patient's risk and thus assist in delivering personalized medicine.
Traditional Cox proportional hazards models for survival analysis are based on structured features such as a patient's sex, smoking years, BMI, etc. With the development of medical imaging technology, more and more unstructured medical images are available for diagnosis, treatment, and survival analysis. Traditional survival models utilize these unstructured images by extracting human-designed features from them. However, we argue that such hand-crafted features have limited ability to represent highly abstract information. In this paper, we develop, for the first time, a deep convolutional neural network for survival analysis (DeepConvSurv) with pathological images. The deep layers in our model can represent more abstract information than hand-crafted features, and hence improve survival prediction performance. Extensive experiments on the National Lung Screening Trial (NLST) lung cancer data show that the proposed DeepConvSurv model improves significantly over four state-of-the-art methods.
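Deep survival networks of this kind are typically trained with the negative log Cox partial likelihood as the loss, replacing the linear predictor of the classical Cox model with the network's output. A hedged sketch of that loss (the abstract does not spell out DeepConvSurv's exact objective; this is the standard Breslow-style form such models commonly use):

```python
import numpy as np

def neg_log_partial_likelihood(risk, time, event):
    """Negative log Cox partial likelihood (Breslow ties handling).
    risk:  model outputs, higher = greater hazard
    time:  observed survival or censoring times
    event: 1 if the death was observed, 0 if censored"""
    order = np.argsort(-time)                 # descending time: risk sets nest
    risk, event = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(risk)))  # log of risk-set sums
    # Only uncensored subjects contribute a term to the likelihood.
    return -np.sum((risk - log_cumsum) * event) / max(event.sum(), 1)
```

The loss is lowest when the network assigns the highest risk scores to the patients who die earliest, which is exactly the ranking behavior survival prediction needs; it also handles censored patients naturally, since they appear only in the risk sets.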
Objectives: To explore how people perceive different advice for rotator cuff disease in terms of words/feelings evoked by the advice and treatment needs. Setting: We performed a content analysis of qualitative data collected in a randomised experiment. Participants: 2028 people with shoulder pain read a vignette describing someone with rotator cuff disease and were randomised to: bursitis label plus guideline-based advice, bursitis label plus treatment recommendation, rotator cuff tear label plus guideline-based advice, or rotator cuff tear label plus treatment recommendation. Guideline-based advice included encouragement to stay active and positive prognostic information. Treatment recommendation emphasised that treatment is needed for recovery. Primary and secondary outcomes: Participants answered questions about: (1) words/feelings evoked by the advice; (2) treatments they feel are needed. Two researchers developed coding frameworks to analyse responses. Results: 1981 (97% of 2039 randomised) responses for each question were analysed. Guideline-based advice (vs treatment recommendation) more often elicited words/feelings of reassurance, having a minor issue, trust in expertise and feeling dismissed, and treatment needs of rest, activity modification, medication, wait and see, exercise and normal movements. Treatment recommendation (vs guideline-based advice) more often elicited words/feelings of needing treatment/investigation, psychological distress and having a serious issue, and treatment needs of injections, surgery, investigations, and to see a doctor. Conclusions: Words/feelings evoked by advice for rotator cuff disease and perceived treatment needs may explain why guideline-based advice reduces perceived need for unnecessary care compared to a treatment recommendation.
This study investigated the characteristics of activated carbon in removing ultrafine particulates (PM10 and PM2.5) from sintering flue gas, and discusses the potential mechanism. Experimental results show that activated carbon (AC) exhibited greater removal efficiency for PM10 than for PM2.5, and that increasing the AC bed thickness or reducing the AC grain size facilitated the removal of both. The removal ratios of PM10 and PM2.5 reached 67.3% and 58.7%, respectively, when the AC bed thickness was 200 mm and the grain size was 3–5 mm. Larger particles in PM10 were more susceptible to inertial effects, making them easier to remove and yielding a higher removal ratio. AC bed thickness and bulk porosity (negatively related to AC grain size) showed a positive relationship with the removal efficiency of PM10 and PM2.5, which therefore increased as bed thickness increased and grain size decreased. These findings benefit the effective control of ultrafine particulates in practical sintering plants.
•A fast algorithm for dynamic MRI reconstruction is proposed.
•The proposed algorithm has an explicit solution in each step which can be solved inexpensively.
•The proposed algorithm has a theoretically proved convergence rate.
•Extensive experiments on dynamic MR data demonstrate its superior performance over all previous methods in terms of both reconstruction accuracy and computational complexity.
In this paper, we propose an efficient algorithm for dynamic magnetic resonance (MR) image reconstruction. With the total variation (TV) and the nuclear norm (NN) regularization, the TVNNR model can utilize both spatial and temporal redundancy in dynamic MR images. Such prior knowledge can help model dynamic MRI data significantly better than a low-rank or a sparse model alone. However, it is very challenging to efficiently minimize the energy function due to the non-smoothness and non-separability of both TV and NN terms. To address this issue, we propose an efficient algorithm by solving a primal-dual form of the original problem. We theoretically prove that the proposed algorithm achieves a convergence rate of O(1/N) for N iterations. In comparison with state-of-the-art methods, extensive experiments on single-coil and multi-coil dynamic MR data demonstrate the superior performance of the proposed method in terms of both reconstruction accuracy and time complexity.
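The TVNNR energy described above combines a data-fidelity term with the two non-smooth regularizers. A plausible form of the model, with assumed notation (the operator \(\mathcal{A}\), data \(b\), and weights \(\alpha, \beta\) are not taken verbatim from the paper):

```latex
\min_{X} \; \tfrac{1}{2}\,\lVert \mathcal{A}(X) - b \rVert_2^2
\;+\; \alpha\,\mathrm{TV}(X)
\;+\; \beta\,\lVert X \rVert_*
```

Here \(X\) stacks the dynamic frames as columns, \(\mathcal{A}\) is the undersampled Fourier (sensing) operator, the total-variation term \(\mathrm{TV}(X)\) exploits spatial redundancy within frames, and the nuclear norm \(\lVert X \rVert_*\) exploits temporal redundancy across frames via low-rankness. Since neither regularizer is smooth or separable from the other, the paper's primal-dual reformulation is what makes each iteration cheap.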
Purpose: Computed tomography (CT) characteristics associated with critical outcomes of patients with coronavirus disease 2019 (COVID-19) have been reported. However, CT risk factors for mortality have not been directly reported. We aim to determine CT-based quantitative predictors of COVID-19 mortality. Methods: In this retrospective study, laboratory-confirmed COVID-19 patients at Wuhan Central Hospital between December 9, 2019, and March 19, 2020, were included. A novel prognostic biomarker, the V-HU score, capturing the volume (V) of total pneumonia infection and the average Hounsfield unit (HU) of consolidation areas, was automatically quantified from CT by an artificial intelligence (AI) system. Cox proportional hazards models were used to investigate risk factors for mortality. Results: The study included 238 patients (women 136/238, 57%; median age 65 years, IQR 51–74 years), 126 of whom were survivors. The V-HU score was an independent predictor (hazard ratio [HR] 2.78, 95% confidence interval [CI] 1.50–5.17; p = 0.001) after adjusting for several COVID-19 prognostic indicators significant in univariable analysis. The prognostic performance of the model containing clinical and outpatient laboratory factors was improved by integrating the V-HU score (c-index: 0.695 vs. 0.728; p < 0.001). Both older patients (age ≥ 65 years; HR 3.56, 95% CI 1.64–7.71; p < 0.001) and younger patients (age < 65 years; HR 4.60, 95% CI 1.92–10.99; p < 0.001) could be further risk-stratified by the V-HU score. Conclusions: The combination of an increased volume of total pneumonia infection and a high HU value of consolidation areas showed a strong association with COVID-19 mortality, as determined by AI-quantified CT.