The immune system plays a critical role in protecting hosts from invading organisms. CD4 T cells, as a key component of the immune system, are central in orchestrating adaptive immune responses. After decades of investigation, five major CD4 T helper cell (Th) subsets have been identified: Th1, Th2, Th17, Treg (T regulatory), and Tfh (follicular T helper) cells. Th1 cells, defined by the expression of the lineage cytokine interferon (IFN)-γ and the master transcription factor T-bet, participate in type 1 immune responses to intracellular pathogens such as mycobacterial species and viruses; Th2 cells, defined by the expression of the lineage cytokines interleukin (IL)-4/IL-5/IL-13 and the master transcription factor GATA3, participate in type 2 immune responses to larger extracellular pathogens such as helminths; Th17 cells, defined by the expression of the lineage cytokines IL-17/IL-22 and the master transcription factor RORγt, participate in type 3 immune responses to extracellular pathogens including some bacteria and fungi; Tfh cells, by producing IL-21 and expressing Bcl6, help B cells produce corresponding antibodies; whereas Foxp3-expressing Treg cells, unlike the effector Th1/Th2/Th17/Tfh subsets, regulate immune responses to maintain immune cell homeostasis and prevent immunopathology. Interestingly, innate lymphoid cells (ILCs) have been found to mimic the functions of the three major effector CD4 T helper subsets (Th1, Th2, and Th17) and can likewise be divided into three major subsets: ILC1s, ILC2s, and ILC3s. In this review, we will discuss the differentiation and functions of each CD4 T helper cell subset in the context of ILCs and of human diseases associated with the dysregulation of these lymphocyte subsets, particularly those caused by monogenic mutations.
In view of the serious color and definition distortion in traditional image fusion, this study proposes a Haar-like multi-scale analysis model in which the Haar wavelet is modified and applied to medical image fusion to obtain better results. Firstly, the improved Haar wavelet basis function is translated, its inner product is taken with each band of the original image, and the result is down-sampled, decomposing each band into four sub-images: one low-frequency subdomain and three high-frequency subdomains. Secondly, different fusion rules are applied in the low-frequency domain and the high-frequency domains to obtain the low-frequency sub-image and the high-frequency sub-images for each band. The four new sub-frequency domains are then inverse-decomposed to reconstruct each new band, and the study configures and synthesizes these new bands to produce a fused image. Lastly, two groups of medical images are used for experimental simulation. The experimental results are analyzed and compared with those of other fusion methods. The fusion method proposed in this study achieves superior performance in spatial definition and color depth, especially on color criteria such as OP, SpD, CR, and SSIM, compared with the other methods.
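A minimal NumPy sketch of a one-level Haar-style decomposition into one low-frequency and three high-frequency sub-images, and of a simple fusion rule over two bands, is given below. It assumes the standard Haar basis, even image dimensions, and an averaging/maximum-magnitude fusion rule; the paper's improved basis function and its specific low- and high-frequency fusion rules are not reproduced.

```python
# Sketch only: standard Haar decomposition/reconstruction and a simple fusion rule.
import numpy as np

def haar_decompose(band):
    """Split one band into LL (low-frequency) and LH/HL/HH (high-frequency) sub-images."""
    a = band[0::2, 0::2].astype(float)
    b = band[0::2, 1::2].astype(float)
    c = band[1::2, 0::2].astype(float)
    d = band[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Inverse of haar_decompose (exact for the decomposition above)."""
    h, w = ll.shape
    band = np.empty((2 * h, 2 * w))
    band[0::2, 0::2] = ll + lh + hl + hh
    band[0::2, 1::2] = ll - lh + hl - hh
    band[1::2, 0::2] = ll + lh - hl - hh
    band[1::2, 1::2] = ll - lh - hl + hh
    return band

def fuse_bands(band1, band2):
    """Average the low-frequency subdomain, keep the larger-magnitude high-frequency coefficients."""
    subs1, subs2 = haar_decompose(band1), haar_decompose(band2)
    ll = (subs1[0] + subs2[0]) / 2.0
    highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
             for h1, h2 in zip(subs1[1:], subs2[1:])]
    return haar_reconstruct(ll, *highs)

# toy usage on two random 8x8 bands
rng = np.random.default_rng(0)
fused = fuse_bands(rng.random((8, 8)), rng.random((8, 8)))
print(fused.shape)  # (8, 8)
```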
Human emotions are complex psychological and physiological responses to external stimuli. Correctly identifying and providing feedback on emotions is an important goal in human–computer interaction research. Compared to facial expressions, speech, or other physiological signals, using electroencephalogram (EEG) signals for the task of emotion recognition has advantages in terms of authenticity, objectivity, and high reliability; thus, it is attracting increasing attention from researchers. However, current methods have significant room for improvement in combining information exchange between different brain regions with time–frequency feature extraction. Therefore, this paper proposes an EEG emotion recognition network, namely, self-organized graph pseudo-3D convolution (SOGPCN), based on attention and spatiotemporal convolution. Unlike previous methods that directly construct graph structures for brain channels, the proposed SOGPCN method considers that the spatial relationships between electrodes differ in each frequency band. First, a self-organizing map is constructed for each channel in each frequency band to obtain the 10 channels most relevant to the current channel, and graph convolution is employed to capture the spatial relationships between all channels in the self-organizing map constructed for each channel in each frequency band. Then, pseudo-3D convolution combined with partial dot-product attention is implemented to extract the temporal features of the EEG sequence. Finally, an LSTM is employed to learn the contextual information between adjacent time-series data. Subject-dependent and subject-independent experiments are conducted on the SEED dataset to evaluate the performance of the proposed SOGPCN method, which achieves recognition accuracies of 95.26% and 94.22%, respectively, indicating that the proposed method outperforms several baseline methods.
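As a rough illustration of the per-channel, per-band neighbourhood construction, the NumPy sketch below keeps, for every channel, the 10 channels whose signals correlate most strongly with it and aggregates features over that local graph. Correlation is used here only as a stand-in relevance measure; the self-organizing map, pseudo-3D convolution, partial dot-product attention, and LSTM stages of SOGPCN are not reproduced.

```python
# Sketch only: top-10 neighbour graph per channel and one simple graph-convolution step.
import numpy as np

def topk_adjacency(band_feats, k=10):
    """band_feats: (n_channels, n_samples) features for one frequency band."""
    corr = np.abs(np.corrcoef(band_feats))      # channel-by-channel relevance (assumption)
    np.fill_diagonal(corr, 0.0)
    adj = np.zeros_like(corr)
    for ch in range(corr.shape[0]):
        neighbours = np.argsort(corr[ch])[-k:]  # 10 most relevant channels
        adj[ch, neighbours] = corr[ch, neighbours]
    adj += np.eye(adj.shape[0])                 # self-loop
    deg = adj.sum(axis=1, keepdims=True)
    return adj / deg                            # row-normalised propagation matrix

def graph_conv(band_feats, weight, k=10):
    """Aggregate each channel over its local graph, then apply a linear transform."""
    a_hat = topk_adjacency(band_feats, k)
    return np.tanh(a_hat @ band_feats @ weight)

# toy usage: 62 channels, 200-sample window, projected to 32 features
rng = np.random.default_rng(0)
x = rng.standard_normal((62, 200))
w = rng.standard_normal((200, 32)) * 0.05
print(graph_conv(x, w).shape)  # (62, 32)
```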
How to improve performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. Then, the face images in each frame are aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the facial expressions. The spatial features are input to the hybrid attention module to obtain the fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (the Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves performance on the AFEW dataset by more than 2%, showing a clear advantage for facial expression recognition in natural environments.
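The PyTorch sketch below illustrates the overall cascade (per-frame spatial features from a residual backbone, frame-level attention fusion, a gated recurrent unit for temporal features, and a fully connected classifier). The backbone choice (resnet18), the single-linear-layer attention, and all layer sizes are placeholder assumptions rather than the authors' implementation.

```python
# Sketch only: residual backbone -> attention fusion -> GRU -> classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CascadeFER(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d spatial feature per frame
        self.backbone = backbone
        self.attn = nn.Linear(512, 1)        # frame-level attention scores (placeholder)
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clips):                # clips: (B, T, 3, H, W) aligned face images
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        weights = torch.softmax(self.attn(feats), dim=1)
        feats = feats * weights              # attention-weighted ("fused") spatial features
        out, _ = self.gru(feats)             # temporal features
        return self.classifier(out[:, -1])   # expression logits

logits = CascadeFER()(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 7])
```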
The performance of a facial expression recognition network degrades noticeably under uneven illumination or partial facial occlusion, because it is difficult to pinpoint attention hotspots on the dynamically changing regions (e.g., eyes, nose, and mouth) precisely. To address this issue, by hybridizing an attention mechanism with pyramid features, this paper proposes a cascade attention-based facial expression recognition network built on a combination of (i) local spatial features, (ii) multi-scale stereoscopic spatial context features (extracted from the 3-scale pyramid feature), and (iii) temporal features. Experiments on the CK+, Oulu-CASIA, and RAF-DB datasets yielded recognition accuracy rates of 99.23%, 89.29%, and 86.80%, respectively, demonstrating that the proposed method outperforms state-of-the-art methods in both experimental and natural environments.
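A minimal sketch of item (ii), the 3-scale pyramid context feature, is shown below: one spatial feature map is pooled at three scales and the pooled vectors are concatenated. The pooling sizes and feature dimensions are illustrative assumptions; the local spatial branch, the cascade attention, and the temporal branch are omitted.

```python
# Sketch only: 3-scale pyramid pooling of a CNN feature map into one context vector.
import torch
import torch.nn.functional as F

def pyramid_context(feature_map, scales=(1, 2, 4)):
    """feature_map: (B, C, H, W) local spatial features from a CNN backbone."""
    vectors = []
    for s in scales:
        pooled = F.adaptive_avg_pool2d(feature_map, output_size=s)  # (B, C, s, s)
        vectors.append(pooled.flatten(1))                           # (B, C*s*s)
    return torch.cat(vectors, dim=1)  # multi-scale spatial context feature

ctx = pyramid_context(torch.randn(2, 128, 14, 14))
print(ctx.shape)  # torch.Size([2, 2688]) = 128 * (1 + 4 + 16)
```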
Understanding learners' emotions can help optimize instruction and further support effective learning interventions. Most existing studies on student emotion recognition are based on multiple manifestations of external behavior and do not fully use physiological signals. In this context, on the one hand, a learning emotion EEG dataset (LE-EEG) is constructed, which captures physiological signals reflecting the emotions of boredom, neutrality, and engagement during learning; on the other hand, an EEG emotion classification network based on attention fusion (ECN-AF) is proposed. Specifically, on the basis of selecting key frequency bands and channels, multi-channel band features are first extracted (using a multi-channel backbone network) and then fused (using attention units). To verify its performance, the proposed model is tested on the open-access SEED dataset (N = 15) and the self-collected LE-EEG dataset (N = 45). The experimental results using five-fold cross-validation show the following: (i) on the SEED dataset, the proposed model achieves the highest accuracy of 96.45%, a 1.37% increase over the baseline models; and (ii) on the LE-EEG dataset, it achieves the highest accuracy of 95.87%, a 21.49% increase over the baseline models.
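The PyTorch sketch below illustrates an attention-fusion step of the kind described for ECN-AF: a shared backbone produces one feature vector per selected frequency band, and learned attention weights fuse the band features before classification. The band count, channel count, backbone, and layer sizes are placeholder assumptions, not the paper's exact architecture.

```python
# Sketch only: per-band features fused by attention weights, then classified.
import torch
import torch.nn as nn

class BandAttentionFusion(nn.Module):
    def __init__(self, n_bands=4, in_dim=62, feat_dim=64, n_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.attn = nn.Linear(feat_dim, 1)       # attention unit over bands
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                        # x: (B, n_bands, in_dim) band features
        h = self.backbone(x)                     # (B, n_bands, feat_dim)
        w = torch.softmax(self.attn(h), dim=1)   # per-band attention weights
        fused = (w * h).sum(dim=1)               # attention-weighted fusion
        return self.classifier(fused)            # boredom / neutral / engagement logits

print(BandAttentionFusion()(torch.randn(8, 4, 62)).shape)  # torch.Size([8, 3])
```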
In recent years, massive open online courses (MOOCs) have received widespread attention owing to their flexibility and free access, attracting millions of online learners to participate in courses. With the wide application of MOOCs in educational institutions, a large amount of learner log data exists on MOOC platforms, laying a solid data foundation for exploring learners' online learning behaviors. Using data mining techniques to process these log data and then analyze the relationship between learner behavior and academic performance has become a hot research topic. Firstly, this paper summarizes the predictive models commonly used in the relevant research fields. Based on the behavior log data of learners participating in 12 MOOC courses, an entropy-based indicator quantifying behavior change trends is proposed, and the relationships between behavior change trends and learners' academic performance are explored. Next, we build a set of behavioral features and further analyze the relationships between behaviors and academic performance. The results demonstrate that the entropy has a certain correlation with the corresponding behavior and can effectively represent its change trend. Finally, to verify the effectiveness and importance of the predictive features, we choose four benchmark models to predict learners' academic performance and compare them with previous relevant research results. The results show that the proposed feature-selection-based model can effectively identify the key features and obtain good prediction performance. Furthermore, our prediction results are better than those of related studies on performance prediction based on the same Xuetang MOOC platform, demonstrating that the combination of the selected learner-related features (behavioral features + behavior entropy) leads to much better prediction performance.
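A minimal sketch of an entropy-style behavior-change indicator is given below; it assumes a learner's activity for one behavior type (e.g., video views) has been binned into weekly counts and applies Shannon entropy to the resulting distribution. The paper's exact definition and normalization may differ.

```python
# Sketch only: Shannon entropy of one behaviour's distribution over course weeks.
import numpy as np

def behavior_entropy(weekly_counts):
    """Low entropy: activity concentrated in a few weeks; high entropy: steady activity."""
    counts = np.asarray(weekly_counts, dtype=float)
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts / total
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(behavior_entropy([40, 2, 1, 0, 0, 0]))  # bursty learner, low entropy (~0.43)
print(behavior_entropy([7, 8, 6, 7, 8, 7]))   # steady learner, high entropy (~2.58)
```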
Prosocial behavior has played an irreplaceable role during the COVID-19 pandemic, not only in infection prevention and control, but also in improving individual mental health. The current study was conducted after COVID-19 control in China had entered the stage of Ongoing Prevention and Control. Using the Interpersonal Response Scale, the Prosocial Tendencies Measure, and the Big Five Personality Questionnaire, data were collected from 898 college students (M_age = 19.50, SD_age = 1.05, age range = 16-24). The results showed that, against the background of the COVID-19 pandemic, college students' social responsibility partially mediated the relationship between empathy and prosocial behavior. This study provides new insights and inspiration for improving college students' mental health in the context of the pandemic.
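For illustration, the sketch below fits a simple mediation model of the kind reported (empathy → social responsibility → prosocial behavior) with ordinary least squares on synthetic data; the variable names, coefficients, and data are placeholders, and the authors' actual analysis (e.g., bootstrap tests of the indirect effect) may differ.

```python
# Sketch only: regression-based simple mediation on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 898
empathy = rng.normal(size=n)
responsibility = 0.5 * empathy + rng.normal(scale=0.8, size=n)                 # path a
prosocial = 0.3 * empathy + 0.4 * responsibility + rng.normal(scale=0.8, size=n)

# Path a: empathy -> social responsibility
a = sm.OLS(responsibility, sm.add_constant(empathy)).fit().params[1]
# Paths b and c': prosocial behaviour regressed on both mediator and predictor
m = sm.OLS(prosocial, sm.add_constant(np.column_stack([responsibility, empathy]))).fit()
b, c_prime = m.params[1], m.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
# A non-zero indirect effect alongside a non-zero direct effect indicates partial mediation.
```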
Because of its ability to objectively reflect people's emotional states, the electroencephalogram (EEG) has been attracting increasing research attention for emotion classification, and classification methods based on spatial-domain analysis are a research hotspot. However, most previous studies ignored the complementarity of information between different frequency bands, and the information in a single frequency band is not fully mined, which increases computational time and makes it difficult to improve classification accuracy. To address these problems, this study proposes an emotion classification method based on dynamic simplified graph convolution (SGC) networks and a style recalibration module (SRM) for channels, termed SGC-SRM, with multi-band EEG data as input. Specifically, first, the graph structure is constructed using the differential entropy features of each sub-band, and the internal relationships between different channels are dynamically learned through SGC networks. Second, a convolution layer based on the SRM is introduced to recalibrate channel features and extract more emotion-related features. Third, the extracted sub-band features are fused at the feature level and classified. In addition, to reduce redundant information between EEG channels and the computational time, (1) we adopt only 12 channels that are suitable for emotion classification to optimize the recognition algorithm, which saves approximately 90.5% of the time cost compared with using all channels; and (2) we adopt information in the θ, α, β, and γ bands, saving 23.3% of the time consumed compared with using the full bands while maintaining almost the same level of classification accuracy. Finally, a subject-independent experiment is conducted on the public SEED dataset using the leave-one-subject-out cross-validation strategy. According to the experimental results, SGC-SRM improves classification accuracy by 5.51-15.43% compared with existing methods.
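As an illustration of the differential entropy (DE) features that serve as input to the graph construction, the sketch below band-filters each channel into the θ, α, β, and γ bands and computes DE under a Gaussian assumption, DE = 0.5·log(2πeσ²). The sampling rate, filter order, and band edges are assumptions; the dynamic SGC layers and the SRM are not reproduced.

```python
# Sketch only: per-channel, per-band differential entropy features for EEG.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(band_signal):
    """DE of a Gaussian signal: 0.5 * log(2 * pi * e * variance), per channel."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal, axis=-1))

def de_features(eeg, fs=200):
    """eeg: (n_channels, n_samples) -> (n_channels, n_bands) DE feature matrix."""
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        feats.append(differential_entropy(filtfilt(b, a, eeg, axis=-1)))
    return np.stack(feats, axis=-1)

print(de_features(np.random.randn(12, 1000)).shape)  # (12, 4): 12 channels, 4 bands
```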