Due to the urgency caused by the worldwide COVID-19 pandemic, vaccine manufacturers have had to shorten and parallelize the development steps to accelerate COVID-19 vaccine production. Although all the usual safety and efficacy monitoring mechanisms remain in place, varied attitudes toward the new vaccines have arisen among different population groups.
This study aimed to discern the evolution and disparities of attitudes toward COVID-19 vaccines among various population groups through the study of large-scale tweets spanning over a whole year.
We collected over 1.4 billion tweets from June 2020 to July 2021, which cover some critical phases concerning the development and inoculation of COVID-19 vaccines worldwide. We first developed a data mining model that incorporates a series of deep learning algorithms for inferring a range of individual characteristics, both in reality and in cyberspace, as well as sentiments and emotions expressed in tweets. We further conducted an observational study, including an overall analysis, a longitudinal study, and a cross-sectional study, to collectively explore the attitudes of major population groups.
Our study derived 3 main findings. First, the whole population's attentiveness toward vaccines was strongly correlated (Pearson r=0.9512) with official COVID-19 statistics, including confirmed cases and deaths. Such attentiveness was also noticeably influenced by major vaccine-related events. Second, after the beginning of large-scale vaccine inoculation, the sentiments of all population groups stabilized, followed by a considerably pessimistic trend after June 2021. Third, attitude disparities toward vaccines existed among population groups defined by 8 different demographic characteristics. By crossing the 2 dimensions of attitude, we found that among population groups carrying low sentiments, some had high attentiveness ratios, such as males and individuals aged ≥40 years, while some had low attentiveness ratios, such as individuals aged ≤18 years, those with occupations of the 3rd category, those with account age <5 years, and those with follower number <500. These findings can be used as a guide in deciding who should be given more attention and what kinds of help to give to alleviate the concerns about vaccines.
This study tracked the year-long evolution of attitudes toward COVID-19 vaccines among various population groups defined by 8 demographic characteristics, through which significant disparities in attitudes along multiple dimensions were revealed. According to these findings, it is suggested that governments and public health organizations should provide targeted interventions to address different concerns, especially among males, older people, and other individuals with low levels of education, low awareness of news, low income, and light use of social media. Moreover, public health authorities may consider cooperating with Twitter users having high levels of social influence to promote the acceptance of COVID-19 vaccines among all population groups.
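The attentiveness–statistics correlation reported above is a standard Pearson coefficient; as a minimal illustration, it can be computed for any pair of aligned time series (the series below are invented for demonstration, not data from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Illustrative series: weekly vaccine-related tweet counts vs. confirmed cases
tweets = [120, 150, 180, 260, 300, 280, 340]
cases = [1000, 1300, 1600, 2400, 2900, 2600, 3300]
r = pearson_r(tweets, cases)
print(round(r, 4))
```

A value near 1 indicates that attentiveness tracks the official statistics closely, as the study reports with r=0.9512 on the real data.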
•A novel deep neural network, titled Multi-scale Residual Encoding and Decoding network (Ms RED), is proposed for skin lesion segmentation.•A pair of multi-scale residual feature fusion modules is further proposed to collectively augment the capability of the overall network in exploiting multi-scale perceptual context and characteristics.•A novel Multi-scale, Multi-channel Feature Fusion module is additionally proposed, which is able to further boost Ms RED's capability in hierarchically extracting and adaptively learning from perceptual characteristics latent in an input image.
Computer-Aided Diagnosis (CAD) for dermatological diseases offers one of the most notable showcases where deep learning technologies display their impressive performance in matching and even surpassing human experts. In such a CAD process, a critical step is segmenting skin lesions from dermoscopic images. Despite remarkable successes attained by recent deep learning efforts, much improvement is still anticipated for challenging cases, e.g., segmenting lesions that are irregularly shaped, bear low contrast, or possess blurry boundaries. To address these inadequacies, this study proposes a novel Multi-scale Residual Encoding and Decoding network (Ms RED) for skin lesion segmentation, which is able to accurately, reliably, and efficiently segment a variety of lesions. Specifically, a multi-scale residual encoding fusion module (MsR-EFM) is employed in the encoder, and a multi-scale residual decoding fusion module (MsR-DFM) is applied in the decoder to fuse multi-scale features adaptively. In addition, to enhance the representation learning capability of the newly proposed pipeline, we propose a novel multi-resolution, multi-channel feature fusion module (M2F2), which replaces conventional convolutional layers in the encoder and decoder networks. Furthermore, we introduce a novel pooling module (Soft-pool) to medical image segmentation for the first time, which retains more useful information during down-sampling and yields better segmentation performance. To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art methods on ISIC 2016, 2017, 2018, and PH2. Experimental results consistently demonstrate that the proposed Ms RED attains significantly superior segmentation performance across five widely used evaluation criteria.
Last but not least, the new model uses far fewer parameters than its peer approaches, greatly reducing the number of labeled samples required for training and, in turn, yielding a substantially faster-converging training process than its peers. The source code is available at https://github.com/duweidai/Ms-RED.
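Soft-pool, as described in the pooling literature, weights each activation in a window by its softmax before summing, so down-sampling keeps a contribution from every activation instead of only the maximum. A minimal NumPy sketch of this softmax-weighted pooling (an illustration of the idea, not the authors' implementation):

```python
import numpy as np

def soft_pool2d(x, k=2):
    """Softmax-weighted 2D pooling: each k x k window is reduced to
    sum(exp(v) * v) / sum(exp(v)), so larger activations dominate
    while smaller ones still contribute (unlike max pooling)."""
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            win = x[i:i + k, j:j + k]
            e = np.exp(win - win.max())  # numerically stabilized weights
            out[i // k, j // k] = (e * win).sum() / e.sum()
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(soft_pool2d(x))  # single 2x2 window
```

The pooled value lies between the window's average (2.5) and its maximum (4.0), which is exactly the "retain more information than max pooling" behavior the abstract describes.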
Medical image segmentation methods based on deep learning have made remarkable progress. However, existing methods are sensitive to the data distribution, so even slight domain shifts cause a decline in performance in practical applications. To relieve this problem, many domain adaptation methods learn domain-invariant representations via alignment or adversarial training while ignoring domain-specific representations. In response to this issue, this paper rethinks the traditional domain adaptation framework and proposes a novel orthogonal decomposition adversarial domain adaptation (ODADA) architecture for medical image segmentation. The main idea behind our proposed ODADA model is to decompose the input features into domain-invariant and domain-specific representations and then use a newly designed orthogonal loss function to encourage their independence. Furthermore, we propose a two-step optimization strategy that extracts domain-invariant representations by separating out domain-specific representations, combating the performance degradation caused by domain shifts. Encouragingly, the proposed ODADA framework is plug-and-play and can replace the traditional adversarial domain adaptation module. The proposed method has consistently demonstrated its effectiveness through comprehensive experiments on three publicly available datasets: a cross-site prostate segmentation dataset, a cross-site COVID-19 lesion segmentation dataset, and a cross-modality cardiac segmentation dataset. The source code is available at https://github.com/YonghengSun1997/ODADA.
•We propose an enhanced adversarial domain adaptation framework that separates domain-specific representations in order to extract domain-invariant ones.•An orthogonal loss is designed to encourage domain-invariant and domain-specific representations to be independent.•A two-step optimization strategy is proposed to effectively extract domain-invariant and domain-specific representations.
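One common way to realize the orthogonality idea behind ODADA is to penalize the inner product between each sample's paired domain-invariant and domain-specific feature vectors; the sketch below shows such a loss (the paper's exact formulation may differ):

```python
import numpy as np

def orthogonal_loss(f_inv, f_spec):
    """Mean squared inner product between paired rows of the
    domain-invariant and domain-specific feature matrices:
    zero exactly when each pair of vectors is orthogonal.
    f_inv, f_spec: (batch, dim) feature matrices."""
    dots = (f_inv * f_spec).sum(axis=1)  # per-sample inner products
    return float((dots ** 2).mean())

# Orthogonal features incur zero loss; identical features are penalized.
a = np.array([[1.0, 0.0]])
b = np.array([[0.0, 1.0]])
print(orthogonal_loss(a, b))  # 0.0
print(orthogonal_loss(a, a))  # 1.0
```

Minimizing this term alongside the segmentation and adversarial losses pushes the two representations toward independence, as the second highlight describes.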
Coronary artery segmentation is a crucial prerequisite for computer-aided diagnosis of coronary artery disease (CAD). However, this task remains challenging due to the intricate anatomical structures and morphologies of coronary arteries (CAs), which are characterized by tortuous and numerous slender branches, large inter-subject variations, and low contrast with adjacent tissues. To address these challenges, we propose a novel deep network with triangular-star spatial–spectral fusion encoding and an entropy-aware double decoding process to comprehensively explore CA features from diverse perspectives. Specifically, to enhance the encoder’s ability to exploit spectral characteristics, we incorporate a tri-stage attention-mediated Fourier (tri-AMF) structure. This structure dynamically modulates the global features of blood vessels in the frequency domain with superior resilience against spectral noise. Simultaneously, we introduce a triangular-star cross-domain feature fusion (▽-Star fusion) module that integrates features from a pair of closely intertwined encoders dedicated to the spatial and spectral domains, along with features transmitted to the decoder. This module, facilitated by richly connected pairwise interaction pathways, learns to segment coronary arteries through cross-domain deep analysis. Furthermore, our network’s decoder employs a novel local entropy-aware double decoding (LEAD2) process to adaptively fuse feature maps across all scales according to the local entropy associated with each scale, explicitly modeling the network’s derivation of the final segmentation outcome. Extensive experiments on two in-house and three publicly available datasets consistently demonstrate that the proposed method has superior performance and generalization ability, outperforming multiple state-of-the-art algorithms on various metrics. The code is available at https://github.com/Cassie-CV/CASeg.
•A network with ▽-star spatial–spectral fusion encoding and entropy-aware double decoding.•A ▽-star cross-domain feature fusion module with richly connected pairwise interaction pathways.•A tri-AMF structure that dynamically modulates global features of vessels in the frequency domain.•A local entropy-aware double decoding process that fuses features across all scales.
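Modulating global features in the frequency domain, as the tri-AMF structure does, is typically built from an FFT, an element-wise filter, and an inverse FFT. The sketch below illustrates the general mechanism with a fixed low-pass mask standing in for a learnable filter; it is not the tri-AMF structure itself:

```python
import numpy as np

def frequency_modulate(x, keep_ratio=0.5):
    """Illustrative frequency-domain modulation: transform a 2D feature
    map with the FFT, attenuate high frequencies (here: a fixed low-pass
    mask standing in for a learnable filter), and transform back."""
    f = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    mask = np.zeros_like(f)
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * keep_ratio / 2))
    rw = max(1, int(w * keep_ratio / 2))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0  # keep low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
y = frequency_modulate(x)
print(y.shape, y.std() < x.std())  # smoother map after low-pass
```

In the actual network the mask would be replaced by learned, attention-mediated weights, so the modulation can emphasize vessel-relevant frequency bands rather than simply low-pass filtering.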
•A novel deep learning framework, MSCA-Net, for skin lesion segmentation.•An MSB module to integrate multi-scale features of the encoder.•A GL-CSAM module to capture global contextual information with four attention mechanisms.•A SADS module to integrate multi-scale features of the decoder to improve the output.•Extensive experiments and analysis confirm the superiority of the proposed MSCA-Net.
Lesion segmentation algorithms automatically outline lesion areas in medical images, facilitating more effective identification and assessment of clinically relevant features and improving diagnostic efficacy and accuracy. However, most fully convolutional network-based segmentation methods suffer from spatial and contextual information loss as the image resolution decreases. To overcome this shortcoming, this paper proposes a skin lesion segmentation model, the Multi-Scale Contextual Attention Network (MSCA-Net), which can exploit the multi-scale contextual information in images. Inspired by the skip connection of U-Net, we design a multi-scale bridge (MSB) module that interacts with multi-scale features to effectively fuse the multi-scale contextual information of the encoder and decoder path features. We further propose a global-local channel spatial attention module (GL-CSAM), aiming to capture global contextual information. In addition, to take full advantage of the multi-scale features of the decoder, we propose a scale-aware deep supervision (SADS) module to achieve hierarchical iterative deep supervision. Comprehensive experimental results on the public ISIC 2017, ISIC 2018, and PH2 datasets show that our proposed method outperforms other state-of-the-art methods, demonstrating its efficacy in skin lesion segmentation. Our code is available at https://github.com/YonghengSun1997/MSCA-Net.
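Channel-spatial attention of the kind GL-CSAM builds on reweights a feature map first per channel, then per spatial position. A minimal NumPy sketch with the learnable projections omitted (an illustration of the general pattern, not the GL-CSAM module):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_spatial_attention(x):
    """Illustrative channel-then-spatial attention, with learnable
    layers omitted: channel weights come from global average pooling,
    spatial weights from the channel-wise mean of the reweighted map.
    x: (C, H, W) feature map."""
    c_w = sigmoid(x.mean(axis=(1, 2)))  # (C,) channel weights in (0, 1)
    x = x * c_w[:, None, None]
    s_w = sigmoid(x.mean(axis=0))       # (H, W) spatial weights in (0, 1)
    return x * s_w[None, :, :]

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))
out = channel_spatial_attention(feat)
print(out.shape)  # (8, 4, 4)
```

Because both weight maps lie in (0, 1), the operation attenuates less informative channels and positions while preserving the sign and shape of the feature map.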
•We propose a novel Transformer-based deep neural network, named TransHRNet, which connects different-resolution streams in parallel and repeatedly exchanges information across resolutions.•An Effective Transformer (EffTrans) is introduced to improve performance; it uses group linear transformations with an expand-reduce strategy and a spatial-reduction attention layer to further reduce the resource cost.•Our proposed method achieves higher performance than state-of-the-art efficient medical image segmentation methods with comparable computation cost.
Most recent 3D medical image segmentation methods adopt convolutional neural networks (CNNs) that rely on deep feature representations and achieve adequate performance. However, because convolutional architectures have limited receptive fields, they cannot explicitly model long-range dependencies in medical images. Recently, Transformers have been shown to capture global dependencies using self-attention mechanisms and to learn highly expressive representations. Several works have been designed based on Transformers, but existing Transformers suffer from extreme computational and memory costs and cannot take full advantage of the powerful feature representations in 3D medical image segmentation. In this paper, we aim to connect different-resolution streams in parallel and propose a novel network, named Transformer-based High Resolution Network (TransHRNet), with an Effective Transformer (EffTrans) block, which provides sufficient feature representation even at high feature resolutions. Given a 3D image, the encoder first utilizes a CNN to extract feature representations that capture local information; the different feature maps are then elaborately reshaped into tokens that are fed into each Transformer stream in parallel to learn global information and repeatedly exchange information across streams. Because a framework based on the standard Transformer requires a huge amount of computation, we introduce a deep and effective Transformer that delivers better performance with fewer parameters. The proposed TransHRNet is evaluated on the Multi-Atlas Labeling Beyond the Cranial Vault (BCV) dataset, which covers 11 major human organs, and on the Medical Segmentation Decathlon (MSD) brain tumor and spleen segmentation tasks. Experimental results show that it performs better than convolutional and other related Transformer-based methods on 3D multi-organ segmentation tasks.
Code is available at https://github.com/duweidai/TransHRNet.
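Spatial-reduction attention, mentioned as part of EffTrans, cuts the cost of self-attention by pooling the key/value tokens before computing attention, shrinking the attention matrix from N×N to N×(N/r). A minimal NumPy sketch with the learnable query/key/value projections omitted:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_reduction_attention(x, r=4):
    """Illustrative spatial-reduction attention (projections omitted):
    keys/values are average-pooled by a factor r along the token axis
    before scaled dot-product attention, so the attention matrix has
    shape (N, N // r) instead of (N, N).
    x: (N, d) token sequence."""
    n, d = x.shape
    kv = x[: n - n % r].reshape(n // r, r, d).mean(axis=1)  # pooled tokens
    attn = softmax(x @ kv.T / np.sqrt(d))                   # (N, N // r)
    return attn @ kv

tokens = np.random.default_rng(2).standard_normal((64, 32))
out = spatial_reduction_attention(tokens, r=4)
print(out.shape)  # (64, 32)
```

With r=4 the attention matrix is four times smaller, which is the kind of resource saving that makes Transformer streams affordable at high feature resolutions.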
Automatic segmentation of coronary arteries provides vital assistance for accurate and efficient diagnosis and evaluation of coronary artery disease (CAD). However, the task of coronary artery segmentation (CAS) remains highly challenging due to the large-scale variations exhibited by coronary arteries, their complicated anatomical structures and morphologies, as well as the low contrast between vessels and their background. To comprehensively tackle these challenges, we propose a novel multi-attention, multi-scale 3D deep network for CAS, which we call CAS-Net. Specifically, we first propose an attention-guided feature fusion (AGFF) module to efficiently fuse adjacent hierarchical features in the encoding and decoding stages and thereby more effectively capture latent semantic information. Then, we propose a scale-aware feature enhancement (SAFE) module, aiming to dynamically adjust the receptive fields to extract more expressive features, thereby enhancing the feature representation capability of the network. Furthermore, we employ a multi-scale feature aggregation (MSFA) module to learn a more distinctive semantic representation for refining the vessel maps. In addition, considering that the limited amount of training data with gold-standard annotations is also a significant factor restricting the development of CAS, we construct a new dataset containing 119 cases of coronary computed tomographic angiography (CCTA) volumes with annotated coronary arteries. Extensive experiments on our self-collected dataset and three publicly available datasets demonstrate that the proposed method has good segmentation performance and generalization ability, outperforming multiple state-of-the-art algorithms on various metrics. Compared with U-Net3D, the proposed method significantly improves the Dice similarity coefficient (DSC) by at least 4% on each dataset, owing to the synergistic effect among the three core modules, AGFF, SAFE, and MSFA.
Our implementation is released at https://github.com/Cassie-CV/CAS-Net.
•A novel multi-attention, multi-scale 3D deep network, abbreviated CAS-Net, is proposed to comprehensively tackle the challenges of coronary artery segmentation.•An attention-guided feature fusion (AGFF) module is designed to adaptively select useful semantic and spatial information to disentangle coronary arteries from veins and noise.•A novel scale-aware feature enhancement (SAFE) module is proposed to effectively extract concealed multi-scale context information and dynamically aggregate the multi-scale features.•A novel multi-scale feature aggregation (MSFA) module is designed to learn richer semantic representations to refine the vessel segmentation map.
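The Dice similarity coefficient (DSC) used in the comparison above measures the overlap between a predicted mask A and a ground-truth mask B as 2|A∩B|/(|A|+|B|); a minimal implementation:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|), with eps guarding
    against division by zero when both masks are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(float(dice(a, b)), 3))  # 2*2 / (3+3) ≈ 0.667
```

A 4% absolute DSC improvement, as reported against U-Net3D, is a substantial gain for thin, slender structures such as coronary arteries, where small boundary errors dominate the overlap measure.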
Although U-shape networks have achieved remarkable performance in many medical image segmentation tasks, they rarely model the sequential relationship of hierarchical layers. This weakness makes it difficult for the current layer to effectively utilize the historical information of the previous layer, leading to unsatisfactory segmentation results for lesions with blurred boundaries and irregular shapes. To solve this problem, we propose a novel dual-path U-Net, dubbed I2U-Net. The newly proposed network encourages historical information re-usage and re-exploration through rich information interaction between the dual paths, allowing deep layers to learn more comprehensive features that contain both low-level detail and high-level semantic abstraction. Specifically, we introduce a multi-functional information interaction module (MFII), which can model cross-path, cross-layer, and cross-path-and-layer information interactions via a unified design, making the proposed I2U-Net behave similarly to an unfolded RNN and enjoy its advantage of modeling sequential information. Besides, to further selectively and sensitively integrate the information extracted by the encoders of the dual paths, we propose a holistic information fusion and augmentation module (HIFA), which can efficiently bridge the encoder and the decoder. Extensive experiments on four challenging tasks, including skin lesion, polyp, brain tumor, and abdominal multi-organ segmentation, consistently show that the proposed I2U-Net has superior performance and generalization ability over other state-of-the-art methods. The code is available at https://github.com/duweidai/I2U-Net.
•We propose a dual-path U-Net, named I2U-Net, for medical image segmentation.•We propose a multi-functional information interaction module via a unified design.•We propose an information fusion and augmentation module to integrate information.•Extensive experiments demonstrate that I2U-Net has excellent segmentation performance.
Clinicians typically use semantic features to judge the malignant status of nodules, while artificial intelligence (AI) systems tend to extract unknown features to diagnose nodules. The former relies on clinical knowledge, while the latter explores AI knowledge. Although many studies indicate that fusing clinical and AI knowledge can help computer-aided diagnosis (CAD) systems improve diagnostic accuracy and gain clinician approval, how to effectively fuse them is still an open question. This paper proposes a simple and effective pipeline (abbreviated as CKAK) that fuses clinical and AI knowledge at both the feature and decision levels for accurate lung nodule malignancy classification and semantic attribute characterization. Feature-level fusion retains the rich information in high-dimensional features and improves the model’s accuracy; decision-level fusion provides some interpretability for the model’s decision-making process, which is expected in clinical applications. Specifically, the proposed CKAK consists of two sequential stages: (i) the initial prediction stage (IPS); and (ii) the prediction refine stage (PRS). The IPS predicts eight radiologist-interpreted semantic attributes and an initial malignancy diagnosis in parallel. These results are then fed to the subsequent PRS, which refines the diagnosis by fully fusing them at the feature and decision levels. Besides, to enhance feature learning, we propose a novel scale-aware feature extraction block (SAFE), which integrates multi-scale contextual features with a lightweight Transformer rather than naively adding or concatenating them. Extensive experiments on the LIDC-IDRI dataset show that the proposed CKAK achieves superior benign-malignant classification accuracy with small errors in the radiologist-interpreted semantic scores, meeting the need for a reliable CAD system.
•We fully fuse clinical and AI knowledge at the feature and decision-making levels.•We integrate multi-scale information to improve the model’s feature learning ability.•Experiments show our method performs reliably in lung nodule diagnosis.
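Decision-level fusion of the kind CKAK performs can be illustrated as blending the initial malignancy probability with a probability derived from the eight predicted attribute scores. In the sketch below, the attribute weights, the centering at a rating of 3, and the blend factor alpha are all hypothetical placeholders, not the paper's learned values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_decisions(p_initial, attr_scores, attr_weights, alpha=0.5):
    """Illustrative decision-level fusion (not the paper's exact rule):
    a malignancy probability derived from semantic attribute scores is
    blended with the network's initial prediction.
    attr_scores: radiologist-style attribute ratings in [1, 5]
    attr_weights: hypothetical per-attribute importance weights."""
    p_attr = sigmoid(np.dot(attr_weights, attr_scores - 3.0))  # centered
    return alpha * p_initial + (1 - alpha) * p_attr

# Hypothetical example with 8 attribute scores and made-up weights
scores = np.array([4.0, 3.5, 2.0, 3.0, 4.5, 3.0, 2.5, 4.0])
weights = np.array([0.4, 0.2, 0.1, 0.1, 0.5, 0.1, 0.1, 0.3])
p = fuse_decisions(p_initial=0.7, attr_scores=scores, attr_weights=weights)
print(0.0 <= p <= 1.0)  # True
```

Because the attribute-based term is an explicit, inspectable function of radiologist-interpreted scores, this style of fusion is what gives the decision level its interpretability, as the abstract notes.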