A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high-quality databases with sufficient training samples, which are currently not available. We reviewed previously developed micro-expression databases and built an improved one (CASME II), with higher temporal resolution (200 fps) and spatial resolution (about 280×340 pixels on the facial area). We elicited participants' facial expressions in a well-controlled laboratory environment under proper illumination (e.g., with light flickering removed). From nearly 3000 facial movements, 247 micro-expressions were selected for the database, with action units (AUs) and emotions labeled. For baseline evaluation, LBP-TOP and SVM were employed for feature extraction and classification, respectively, with leave-one-subject-out cross-validation. The best performance is 63.41% for 5-class classification.
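The leave-one-subject-out (LOSO) protocol used for the baseline evaluation can be sketched as follows. This is an illustrative implementation of the generic protocol, not the authors' code, and the subject IDs in the toy example are placeholders.

```python
import numpy as np

def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs for leave-one-subject-out
    cross-validation: each fold holds out every sample of one subject,
    so a classifier is never tested on a subject it was trained on."""
    subject_ids = np.asarray(subject_ids)
    for subject in np.unique(subject_ids):
        test_mask = subject_ids == subject
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# Toy example: 6 samples from 3 subjects -> 3 folds, one per subject.
subjects = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = list(loso_splits(subjects))
```

Per-fold predictions are then pooled (or fold accuracies averaged) to produce the reported recognition rate.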
Micro-expression spotting is a fundamental step in micro-expression analysis. This paper proposes a novel convolutional neural network (CNN) based framework for spotting multi-scale spontaneous micro-expression intervals in long videos. We name the network the Micro-Expression Spotting Network (MESNet). It is composed of three modules. The first module is a (2+1)D Spatiotemporal Convolutional Network, which uses 2D convolution to extract spatial features and 1D convolution to extract temporal features. The second module is a Clip Proposal Network, which generates proposed micro-expression clips. The last module is a Classification Regression Network, which classifies each proposed clip as micro-expression or not, and further regresses its temporal boundaries. We also propose a novel evaluation metric for micro-expression spotting. Extensive experiments have been conducted on two long-video datasets, CAS(ME)² and SAMM, using leave-one-subject-out cross-validation to evaluate spotting performance. Results show that the proposed MESNet effectively improves the F1-score, and comparative results show that it outperforms other state-of-the-art methods, especially on the SAMM dataset.
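Interval-level F1 evaluation for spotting is commonly computed by matching predicted intervals to ground-truth intervals via temporal IoU. The sketch below is a generic IoU-based matching scheme for illustration only; the paper's own metric may use a different matching rule, and the 0.5 threshold is an assumption.

```python
def interval_iou(a, b):
    """Temporal IoU of two (onset, offset) intervals in frame indices."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def spotting_f1(predicted, ground_truth, iou_thr=0.5):
    """Greedy matching: a predicted clip is a true positive if it overlaps
    a not-yet-matched ground-truth interval with IoU >= iou_thr."""
    matched, tp = set(), 0
    for p in predicted:
        for i, g in enumerate(ground_truth):
            if i not in matched and interval_iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(predicted) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction of frames (10, 30) against a ground truth of (12, 32) overlaps with IoU ≈ 0.82 and counts as a true positive at the 0.5 threshold.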
Micro-expressions are brief facial movements characterized by short duration, involuntariness, and low intensity. Recognizing spontaneous facial micro-expressions is a great challenge. In this paper, we propose a simple yet effective Main Directional Mean Optical-flow (MDMO) feature for micro-expression recognition. We apply a robust optical flow method to micro-expression video clips and partition the facial area into regions of interest (ROIs) based partially on action units. MDMO is an ROI-based, normalized statistical feature that considers both local motion information and its spatial location. One significant characteristic of MDMO is its small feature dimension: the length of an MDMO feature vector is 36 × 2 = 72, where 36 is the number of ROIs. Furthermore, to reduce the influence of noise due to head movements, we propose an optical-flow-driven method to align all frames of a micro-expression video clip. Finally, an SVM classifier with the proposed MDMO feature is adopted for micro-expression recognition. Experimental results on three spontaneous micro-expression databases, namely SMIC, CASME and CASME II, show that MDMO achieves better performance than two state-of-the-art baseline features, LBP-TOP and HOOF.
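The per-ROI building block of such a feature can be sketched as below: bin the optical-flow vectors of one ROI by direction, take the most populated (main) direction, and average the vectors falling in it. This is a simplified illustration under assumed conventions (8 angle bins, no normalization across frames); the paper defines the exact MDMO normalization.

```python
import numpy as np

def main_directional_mean_flow(flow_u, flow_v, n_bins=8):
    """For one ROI: histogram flow vectors into n_bins angular bins,
    pick the main (most populated) bin, and return the mean magnitude
    and mean angle of the vectors in that bin."""
    mag = np.hypot(flow_u, flow_v)
    ang = np.mod(np.arctan2(flow_v, flow_u), 2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi / n_bins)).astype(int), n_bins - 1)
    main = np.bincount(bins.ravel(), minlength=n_bins).argmax()
    sel = bins == main
    return mag[sel].mean(), ang[sel].mean()
```

Concatenating the (magnitude, angle) pair over 36 ROIs yields the 36 × 2 = 72-dimensional vector described in the abstract.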
Recently, there has been increasing interest in inferring micro-expressions from facial image sequences. Because of the subtle facial movements of micro-expressions, feature extraction has become a critical issue for spontaneous facial micro-expression recognition. Recent works used spatiotemporal local binary patterns (STLBP) for micro-expression recognition, considering dynamic texture information to represent face images. However, they miss the shape attribute of face images, and they extract spatiotemporal features from global face regions while ignoring the discriminative information between micro-expression classes. These problems seriously limit the application of STLBP to micro-expression recognition. In this paper, we propose a discriminative spatiotemporal local binary pattern based on an integral projection to resolve these problems. First, we revisit integral projection, preserving the shape attribute of micro-expressions by using robust principal component analysis. The revisited integral projection is then incorporated with local binary patterns across the spatial and temporal domains; specifically, we extract novel spatiotemporal features that incorporate shape attributes into spatiotemporal texture features. To increase the discrimination of micro-expressions, we propose a new Laplacian-based feature selection method to extract discriminative information for facial micro-expression recognition. Intensive experiments are conducted on three publicly available micro-expression databases: CASME, CASME2 and SMIC. We compare our method with state-of-the-art algorithms, and experimental results demonstrate that it achieves promising performance for micro-expression recognition.
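The integral projection itself is a simple shape-preserving operation: the row and column sums of an image. The sketch below shows only this projection step; in the paper it is applied to the subtle-motion component recovered by robust PCA before the LBP stage.

```python
import numpy as np

def integral_projections(img):
    """Horizontal and vertical integral projections of a 2-D image:
    the vector of row sums and the vector of column sums, which retain
    coarse 1-D shape structure of the face."""
    img = np.asarray(img, dtype=float)
    horizontal = img.sum(axis=1)  # one value per row
    vertical = img.sum(axis=0)    # one value per column
    return horizontal, vertical
```

1-D LBP codes are then computed over these projection signals across frames, combining shape with spatiotemporal texture.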
Alcohol use disorder (AUD) is an important brain disease that alters brain structure. Recently, scholars have tended to use computer-vision-based techniques to detect AUD. We collected images from 235 subjects: 114 alcoholic and 121 non-alcoholic. Of the 235 images, 100 were used as the training set, with data augmentation applied; the remaining 135 images were used as the test set. We chose a powerful recent technique, the convolutional neural network (CNN), built from convolutional, rectified linear unit, pooling, fully connected, and softmax layers. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%, outperforming three state-of-the-art approaches. Stochastic pooling performed better than max pooling and average pooling. We validated that a CNN with five convolutional layers and two fully connected layers performed best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
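Stochastic pooling, which performed best here, replaces the max/mean of each pooling window with a sample drawn in proportion to the activations. The sketch below is a minimal single-channel numpy illustration assuming non-negative (post-ReLU) inputs and non-overlapping 2×2 windows; it is not the authors' implementation.

```python
import numpy as np

def stochastic_pool_2x2(x, rng):
    """Stochastic pooling over non-overlapping 2x2 windows: within each
    window, sample one activation with probability proportional to its
    value. Degenerates to uniform sampling if the window is all zeros."""
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = x[i:i + 2, j:j + 2].ravel()
            s = win.sum()
            p = win / s if s > 0 else np.full(4, 0.25)
            out[i // 2, j // 2] = rng.choice(win, p=p)
    return out
```

At test time, implementations typically use the probability-weighted average of each window instead of sampling, so inference stays deterministic.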
Micro-Expression Recognition Using Color Spaces
Wang, Su-Jing; Yan, Wen-Jing; Li, Xiaobai, et al.
IEEE Transactions on Image Processing, Vol. 24, No. 12, December 2015. Journal article, peer-reviewed.
Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we proposed a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimensional array: the first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves higher accuracy than RGB alone. In addition, we define a set of regions of interest (ROIs) based on the Facial Action Coding System and calculate dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performance for TICS, CIELab, and CIELuv is better than that for RGB or grayscale.
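Changing the color space of the whole clip amounts to a tensor-matrix product along the color (fourth) mode. The sketch below illustrates this mode product with numpy; the 3×3 matrix `M` is a placeholder, since TICS itself learns a transform that makes the color components as independent as possible.

```python
import numpy as np

def transform_color_mode(clip, M):
    """Apply a 3x3 color-space transform M along the color mode of a
    video tensor of shape (H, W, T, 3): out[..., d] = sum_c M[d, c] * clip[..., c].
    This is the mode-4 tensor-matrix product for a 4th-order tensor."""
    return np.einsum('hwtc,dc->hwtd', clip, M)
```

With `M` set to the identity the clip is unchanged; substituting a learned or perceptual transform (e.g., toward CIELab-like opponent channels) yields the decorrelated color components on which the dynamic texture features are computed.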
Unlike conventional facial expressions, micro-expressions are involuntary and transient facial expressions capable of revealing the genuine emotions that people attempt to hide. Therefore, they can provide important information in a broad range of applications such as lie detection, criminal detection, etc. Since micro-expressions are transient and of low intensity, however, their detection and recognition is difficult and relies heavily on expert experience. Due to its intrinsic particularity and complexity, video-based micro-expression analysis is attractive but challenging, and has recently become an active area of research. Although there have been numerous developments in this area, thus far there has been no comprehensive survey that provides researchers with a systematic overview of these developments with a unified evaluation. Accordingly, in this survey paper, we first highlight the key differences between macro- and micro-expressions, then use these differences to guide our survey of video-based micro-expression analysis in a cascaded structure, encompassing the neuropsychological basis, datasets, features, spotting algorithms, recognition algorithms, applications and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments and major challenges are addressed and discussed. Furthermore, after considering the limitations of existing micro-expression datasets, we present and release a new dataset, the micro-and-macro expression warehouse (MMEW), containing more video samples and more labeled emotion types. We then perform a unified comparison of representative methods on CAS(ME)² for spotting, and on MMEW and SAMM for recognition, respectively.
Finally, some potential future research directions are explored and outlined.
This paper proposes a portable wireless transmission system for the multi-channel acquisition of surface electromyography (EMG) signals. Because EMG signals have great application value in psychotherapy and human–computer interaction, this system is designed to acquire reliable, real-time facial-muscle-movement signals. Electrodes placed directly on the surface of a facial-muscle source can inhibit facial-muscle movement due to their weight, size, etc.; we address this problem by placing the electrodes at the periphery of the face. The multi-channel approach allows the system to detect muscle activity in 16 regions simultaneously, and wireless transmission (Wi-Fi) increases the flexibility of portable applications. The sampling rate is 1 kHz and the resolution is 24-bit. To verify the reliability and practicality of the system, we carried out a comparison with a commercial device and achieved a correlation coefficient of more than 70% on the comparison metrics. Next, to test the system's utility, we placed 16 electrodes around the face for the recognition of five facial movements. Three classifiers, random forest, support vector machine (SVM) and backpropagation neural network (BPNN), were used to recognize the five movements, of which random forest performed best, achieving a classification accuracy of 91.79%. This demonstrates that electrodes placed around the face can still achieve good recognition of facial movements, making wearable EMG signal-acquisition devices more feasible.
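A classifier such as those above needs compact features from the raw 16-channel stream. The abstract does not specify the features used, so the sketch below shows a commonly used time-domain EMG feature, windowed RMS per channel, under the stated 1 kHz sampling rate; the 200 ms window is an assumption.

```python
import numpy as np

def rms_features(emg, fs=1000, win_ms=200):
    """Windowed RMS per channel for a (channels, samples) EMG array.
    Splits each channel into non-overlapping windows of win_ms and
    returns the RMS of each window: shape (channels, n_windows)."""
    win = int(fs * win_ms / 1000)
    n = emg.shape[1] // win
    x = emg[:, :n * win].reshape(emg.shape[0], n, win)
    return np.sqrt((x ** 2).mean(axis=2))
```

The resulting (channels × windows) matrix can be flattened into the feature vector fed to the random forest, SVM, or BPNN.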
Flexible optoelectronic devices attract considerable attention due to their prominent role in creating novel wearable apparatus for bionics, robotics, health care, and so forth. Although bulk single-crystalline perovskite-based materials are well recognized for higher photoelectric conversion efficiency than their polycrystalline counterparts, their stiff and brittle nature unfortunately prohibits their application in flexible devices. Here, we introduce an ultrathin single-crystalline perovskite film as the active layer and demonstrate a high-performance flexible photodetector with excellent bending reliability. With a much-reduced thickness of 20 nm, the photodetector made of this ultrathin film achieves a significantly increased responsivity of 5600 A/W, two orders of magnitude higher than that of recently reported flexible perovskite photodetectors. The demonstrated 0.2 MHz 3 dB bandwidth further paves the way for high-speed photodetection. Notably, all its optoelectronic characteristics recover after the device is bent thousands of times. These results manifest the great potential of single-crystalline perovskite ultrathin films for developing wearable and flexible optoelectronic devices.