Objectives
To compare the treatment success and safety of ultrasound- and MR-guided high-intensity focused ultrasound (HIFU) with surgery for treating symptomatic uterine fibroids.
Methods
We searched multiple databases from January 2000 to July 2020 for studies comparing HIFU with surgery for fibroids. The mean difference (MD) or relative risk (RR) with 95% confidence interval (CI) for each outcome parameter was synthesized.
Results
We included 10 studies involving 4450 women. Compared with the surgery group, the decrease in uterine fibroid severity score at 6- and 12-month follow-up was greater in the HIFU group (MD −4.16, 95% CI −7.39 to −0.94, and MD −2.44, 95% CI −3.67 to −1.20; p < 0.05). The increase in quality-of-life (QoL) score at 6- and 12-month follow-up was greater in the HIFU group (MD 2.13, 95% CI 0.86 to 3.14, and MD 2.34, 95% CI 0.82 to 3.85; p < 0.05). The duration of hospital stay and the time to return to work were shorter in the HIFU group (MD −3.41 days, 95% CI −5.11 to −1.70, and MD −11.61 days, 95% CI −19.73 to −3.50; p < 0.05). The incidence of significant complications was lower in the HIFU group (RR 0.33, 95% CI 0.13 to 0.81; p < 0.05). The differences in adverse events, symptom recurrence, re-intervention, and pregnancy outcomes were not statistically significant (p > 0.05).
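For readers who want to see how a pooled statistic of this form arises, a relative risk and its 95% CI can be derived from a 2×2 event table via the log-RR standard error. The sketch below uses purely illustrative counts, not data from the included studies.

```python
import math

def relative_risk(a, n1, c, n2):
    """Relative risk and 95% CI from event counts.
    a/n1: events/total in the treatment arm; c/n2: events/total in the control arm."""
    rr = (a / n1) / (c / n2)
    # Standard error of log(RR) (delta method)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 5/200 complications with HIFU vs 15/200 with surgery
rr, lo, hi = relative_risk(5, 200, 15, 200)
```

An RR below 1 with a CI that excludes 1 (as for significant complications above) indicates a statistically significant reduction in risk.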
Conclusions
HIFU is superior to surgery in terms of symptomatic relief, improvement in QoL, and recovery, with a lower incidence of significant complications. However, HIFU showed effects comparable to surgery regarding the incidence of adverse events, symptom recurrence, re-intervention, and pregnancy.
Key Points
• HIFU ablation is superior to surgery in terms of symptomatic relief, improvement in QoL, and recovery, with a lower incidence of significant complications.
• HIFU has effects comparable to surgery in terms of symptom recurrence, re-intervention, and pregnancy rates, indicating that HIFU is a promising non-invasive therapy that does not appear to raise the risk of recurrence or re-intervention, or to impair fertility, compared with surgical approaches in women with fibroids.
• There is still a lack of good-quality comparative data and further randomized studies are necessary to provide sufficient and reliable data, especially on re-intervention rate and pregnancy outcome.
In this paper, we present a novel framework for dermoscopy image recognition via both a deep learning method and a local descriptor encoding strategy. Specifically, deep representations of a rescaled dermoscopy image are first extracted via a very deep residual neural network pretrained on a large natural image dataset. These local deep descriptors are then aggregated into orderless visual statistic features based on Fisher vector (FV) encoding to build a global image representation. Finally, the FV-encoded representations are used to classify melanoma images using a support vector machine with a Chi-squared kernel. Our proposed method is capable of generating more discriminative features to deal with large variations within melanoma classes, as well as small variations between melanoma and nonmelanoma classes, with limited training data. Extensive experiments are performed to demonstrate the effectiveness of our proposed method, and comparisons with state-of-the-art methods on the publicly available ISBI 2016 Skin Lesion Challenge dataset show its superiority.
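As a rough illustration of the classification stage, the exponential chi-squared kernel used with the SVM can be sketched as below. The dimensions and values are hypothetical, and the kernel assumes non-negative feature vectors, so in practice the FV representation would first be normalised appropriately; this is a generic sketch, not the paper's exact implementation.

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0):
    """Exponential chi-squared kernel K(x, y) = exp(-gamma * sum((x - y)^2 / (x + y))).
    Assumes non-negative feature vectors (e.g. normalised histogram-like features)."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            denom = x + y
            safe = np.where(denom > 0, denom, 1.0)       # avoid division by zero
            d = np.where(denom > 0, (x - y) ** 2 / safe, 0.0)
            K[i, j] = np.exp(-gamma * d.sum())
    return K

rng = np.random.default_rng(0)
X = rng.random((4, 8))   # 4 hypothetical encoded images, 8-D features
K = chi2_kernel(X, X)    # Gram matrix to feed an SVM with a precomputed kernel
```

The resulting Gram matrix would be passed to an SVM trained with a precomputed kernel.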
Automatic delineation of skin lesion contours from dermoscopy images is a basic step in the diagnosis and treatment of skin lesions. However, it is a challenging task due to the high variation in the appearance and size of skin lesions. To deal with these challenges, we propose a new dense deconvolutional network (DDN) for skin lesion segmentation based on residual learning. Specifically, the proposed network consists of dense deconvolutional layers (DDLs), chained residual pooling (CRP), and hierarchical supervision (HS). First, unlike traditional deconvolutional layers, DDLs are adopted to keep the dimensions of the input and output images unchanged. The DDN is trained in an end-to-end manner without the need for prior knowledge or complicated postprocessing procedures. Second, the CRP aims to capture rich contextual background information and to fuse multilevel features. By combining local and global contextual information via multilevel feature fusion, a high-resolution prediction output is obtained. Third, HS is added to serve as an auxiliary loss and to refine the prediction mask. Extensive experiments on the public ISBI 2016 and 2017 skin lesion challenge datasets demonstrate the superior segmentation results of our proposed method over state-of-the-art methods.
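One way to see how a deconvolutional (transposed-convolution) layer can keep the input and output dimensions unchanged is through the standard output-size arithmetic; the sketch below is a generic illustration of that arithmetic, not the paper's exact DDL configuration.

```python
def deconv_output_size(i, k, s, p, output_padding=0):
    """Spatial output size of a transposed convolution:
    o = s * (i - 1) + k - 2 * p + output_padding."""
    return s * (i - 1) + k - 2 * p + output_padding

# With stride 1, kernel 3, padding 1, the spatial size is preserved —
# one simple configuration by which a layer keeps input/output dimensions fixed.
same = deconv_output_size(i=64, k=3, s=1, p=1)      # stays 64
# By contrast, stride 2 with kernel 4 and padding 1 doubles the resolution.
up = deconv_output_size(i=32, k=4, s=2, p=1)        # becomes 64
```

Preserving or exactly doubling spatial size at each layer is what lets such networks produce full-resolution prediction masks end to end.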
In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale-invariant features and then segment regions centered at each pixel. The coarse segmentation is refined by an automated graph-partitioning method based on the pretrained features. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundaries, which is also exploited to generate markers for splitting touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixels instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical nucleus segmentation method delivers promising results and outperforms existing methods.
Medical image fusion techniques can further improve the accuracy and time efficiency of clinical diagnosis by obtaining comprehensive salient features and detail information from medical images of different modalities. We propose a novel medical image fusion algorithm based on a deep convolutional generative adversarial network and dense block models, which is used to generate fusion images with rich information. Specifically, this network architecture integrates two modules: an image generator module based on dense blocks and an encoder–decoder, and a discriminator module. We use the encoder network to extract image features, process the features using a fusion rule based on the Lmax norm, and feed the result into the decoder network to obtain the final fusion image. This method can overcome the weaknesses of manually designed activity-level measurements in traditional methods and can process the information of the intermediate layers via the dense blocks to avoid loss of information. In addition, we construct the loss function from a detail loss and a structural similarity loss, which improves the extraction of target information and edge detail information. Experiments on a public clinical diagnostic medical image dataset show that the proposed algorithm not only has excellent detail-preservation characteristics but also suppresses artifacts. The experimental results are better than those of the comparison methods across different types of evaluation.
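The combined objective described above — a detail loss plus a structural-similarity loss — can be sketched roughly as follows. The weighting `alpha`, the simplified single-window SSIM, and the gradient-based detail term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Simplified single-window SSIM between two images with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def detail_loss(fused, source):
    """Gradient (detail) loss: mean absolute difference of image gradients."""
    gx = np.abs(np.diff(fused, axis=0) - np.diff(source, axis=0)).mean()
    gy = np.abs(np.diff(fused, axis=1) - np.diff(source, axis=1)).mean()
    return gx + gy

def fusion_loss(fused, source, alpha=0.5):
    """Weighted sum of detail loss and (1 - SSIM) as the structural loss term."""
    return alpha * detail_loss(fused, source) + (1 - alpha) * (1 - ssim_global(fused, source))

img = np.linspace(0, 1, 64).reshape(8, 8)
loss_identical = fusion_loss(img, img)   # zero when fused == source
```

In training, such a loss would be evaluated against each source modality and combined with the adversarial term from the discriminator.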
Recently, deep convolutional neural networks (CNNs) have provided an effective tool for automated polyp segmentation in colonoscopy images. However, most CNN-based methods do not fully consider the feature interactions among different layers and often cannot provide satisfactory segmentation performance. In this paper, a novel attention-guided pyramid context network (APCNet) is proposed for accurate and robust polyp segmentation in colonoscopy images. Specifically, considering that different network layers represent the polyp in different aspects, APCNet first extracts multi-layer features in a pyramid structure, then utilizes an attention-guided multi-layer aggregation strategy to refine the context features of each layer using the complementary information of the other layers. To obtain abundant context features, APCNet employs a context extraction module that explores the context information of each layer via local information retention and global information compaction. Through top-down deep supervision, APCNet implements coarse-to-fine polyp segmentation and precisely localizes the polyp region. Extensive experiments in two in-domain and four out-of-domain settings show that APCNet is comparable to 19 state-of-the-art methods while holding a more appropriate trade-off between effectiveness and computational complexity.
Accurate and efficient prediction of drug-target interaction (DTI) is critical to advancing drug development and reducing the cost of drug discovery. Recently, deep learning methods have enhanced DTI prediction precision and efficacy, but several challenges remain. The first lies in efficiently learning drug and protein feature representations alongside their interaction features to enhance DTI prediction. Another important challenge is improving the generalization capability of the DTI model in real-world scenarios. To address these challenges, we propose CAT-DTI, a model based on cross-attention and Transformer with domain adaptation capability. CAT-DTI effectively captures drug-target interactions while adapting to out-of-distribution data. Specifically, we use a convolutional neural network combined with a Transformer to encode the distance relationships between amino acids within protein sequences, and employ a cross-attention module to capture drug-target interaction features. Generalization to new DTI prediction scenarios is achieved by leveraging a conditional domain adversarial network, which aligns DTI representations under diverse distributions. Experimental results in in-domain and cross-domain scenarios demonstrate that CAT-DTI improves overall DTI prediction performance compared with previous methods.
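The cross-attention module, in which drug token representations attend over protein residue representations, can be sketched as a single-head attention step. Dimensions, weight initialisation, and variable names below are hypothetical; the paper's module may differ in detail.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(drug, protein, Wq, Wk, Wv):
    """Single-head cross-attention: drug tokens (queries) attend over
    protein tokens (keys/values); returns one context vector per drug token."""
    Q = drug @ Wq                              # (n_drug, d)
    K = protein @ Wk                           # (n_prot, d)
    V = protein @ Wv                           # (n_prot, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # scaled dot-product scores
    attn = softmax(scores, axis=-1)            # weights over protein tokens
    return attn @ V

rng = np.random.default_rng(1)
drug = rng.standard_normal((5, 16))       # 5 hypothetical drug atom embeddings
protein = rng.standard_normal((40, 16))   # 40 hypothetical residue embeddings
Wq, Wk, Wv = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
ctx = cross_attention(drug, protein, Wq, Wk, Wv)
```

Each output row summarises, for one drug token, the protein residues most relevant to the predicted interaction.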
Ultrasound (US) has become one of the most commonly performed imaging modalities in clinical practice. It is a rapidly evolving technology with certain advantages and with unique challenges that include low imaging quality and high variability. From the perspective of image analysis, it is essential to develop advanced automatic US image analysis methods to assist in US diagnosis and/or to make such assessments more objective and accurate. Deep learning has recently emerged as the leading machine learning tool in various research fields, especially in general image analysis and computer vision. Deep learning also shows huge potential for various automatic US image analysis tasks. This review first briefly introduces several popular deep learning architectures, then summarizes and thoroughly discusses their applications in specific tasks in US image analysis, such as classification, detection, and segmentation. Finally, the open challenges and potential trends of future applications of deep learning in medical US image analysis are discussed.
Alzheimer's disease (AD) is a neurodegenerative disease with an irreversible and progressive course. To understand brain function and identify biomarkers of AD and its early stage, known as mild cognitive impairment (MCI), it is crucial to build the brain functional connectivity network (BFCN) using resting-state functional magnetic resonance imaging (rs-fMRI). Existing methods have mainly been developed using only single-time-point rs-fMRI data for classification. In fact, data from multiple time points are more effective than data from a single time point for diagnosing brain diseases, because longitudinal analysis can monitor disease progression patterns. In this article, we utilize multiple rs-fMRI time points to identify early MCI (EMCI) and late MCI (LMCI) by integrating the fused sparse network (FSN) model with parameter-free centralized (PFC) learning. Specifically, we first construct the FSN framework by building multiple time-point BFCNs. Multitask learning via PFC is then leveraged for longitudinal analysis of EMCI and LMCI. Accordingly, we can jointly learn the multiple time-point features constructed from the BFCN model. The proposed PFC method can automatically balance the contributions of different time-point information via learned specific and common features. Finally, the selected multiple time-point features are fused by a similarity network fusion (SNF) method. Our proposed method is evaluated on the public Alzheimer's Disease Neuroimaging Initiative phase-2 (ADNI-2) database. The experimental results demonstrate that our method achieves promising performance and outperforms state-of-the-art methods.
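As background, a BFCN is commonly constructed as the Pearson correlation matrix over ROI time series; the sketch below shows this baseline construction with hypothetical dimensions. The paper's fused sparse network refines this baseline, so this is an illustration of the starting point rather than the proposed model.

```python
import numpy as np

def build_bfcn(timeseries):
    """Build a brain functional connectivity network as the Pearson correlation
    matrix of ROI time series (rows = time points, columns = ROIs)."""
    return np.corrcoef(timeseries, rowvar=False)

rng = np.random.default_rng(2)
ts = rng.standard_normal((130, 90))   # 130 hypothetical rs-fMRI volumes, 90 ROIs
bfcn = build_bfcn(ts)                 # 90 x 90 symmetric connectivity matrix
```

One such matrix would be built per scan time point, and the multiple time-point matrices then enter the longitudinal analysis.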
For significant memory concern (SMC) and mild cognitive impairment (MCI), classification performance is limited by confounding features, diverse imaging protocols, and limited sample sizes. To address these limitations, we introduce a dual-modality fused brain connectivity network combining resting-state functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), and propose three mechanisms within the graph convolutional network (GCN) framework to improve classifier performance. First, we introduce a DTI-strength penalty term for constructing functional connectivity networks. Stronger structural connectivity and greater structural-strength diversity between groups provide a higher opportunity for retaining connectivity information. Second, a multi-center attention graph, with each node representing a subject, is proposed to consider the influence of the data source, gender, acquisition equipment, and disease status of the training samples in the GCN. The attention mechanism captures their different impacts on edge weights. Third, we propose a multi-channel mechanism to improve filter performance, assigning different filters to features based on feature statistics. Because convolving over nodes with low-quality features would also degrade filter performance, we further propose a pooling mechanism that uses the disease status of the training samples to evaluate node quality. Finally, we obtain the classification results by feeding the multi-center attention graph into the multi-channel pooling GCN. The proposed method is tested on three datasets (i.e., an ADNI 2 dataset, an ADNI 3 dataset, and an in-house dataset). Experimental results indicate that the proposed method is effective and superior to related algorithms, with a mean classification accuracy of 93.05% in our binary classification tasks. Our code is available at: https://github.com/Xuegang-S.
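The GCN at the core of this pipeline propagates node (subject) features with the standard normalised-adjacency rule; the sketch below shows one such layer with hypothetical sizes. It illustrates the basic propagation step only, not the paper's multi-channel pooling variant.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    where A is the adjacency matrix, H the node features, W the layer weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(3)
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric graph of 6 subjects
H = rng.standard_normal((6, 4))                    # 4 features per subject node
W = rng.standard_normal((4, 2))                    # layer weights
H_next = gcn_layer(A, H, W)
```

In the paper's setting, the edge weights of `A` would come from the multi-center attention graph rather than a random adjacency.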