To perform unconstrained face recognition robust to variations in illumination, pose, and expression, this paper presents a new scheme to extract Multi-Directional Multi-Level Dual-Cross Patterns (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of the Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient, only doubling the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance on the challenging LFW and FRGC 2.0 databases is achieved by deploying MDML-DCPs in a simple recognition scheme.
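As a rough illustration of why DCP only doubles the cost of LBP, the sketch below computes a classic 8-neighbour LBP code and a DCP-style code that adds one extra comparison per direction, grouped into the two "cross" subsets. This is an assumption-laden simplification (integer offsets instead of exact radial sampling), not the paper's implementation:

```python
import numpy as np

def lbp_code(img, y, x):
    """Classic 8-neighbour LBP code for the pixel at (y, x)."""
    c = img[y, x]
    offs = [(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1)]
    bits = [1 if img[y+dy, x+dx] >= c else 0 for dy, dx in offs]
    return sum(b << i for i, b in enumerate(bits))

def dcp_codes(img, y, x, r_in=1, r_ex=2):
    """DCP-style codes: two comparisons per direction (centre vs inner
    sample, inner vs outer sample) -- hence roughly twice the LBP cost --
    split into the two 'cross' subsets of alternating directions."""
    dirs = [(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1),(-1,-1)]
    c = img[y, x]
    codes = []
    for dy, dx in dirs:
        a = img[y + dy*r_in, x + dx*r_in]   # inner sample
        b = img[y + dy*r_ex, x + dx*r_ex]   # outer sample
        codes.append((1 if a >= c else 0) * 2 + (1 if b >= a else 0))
    # two cross subsets -> two compact 8-bit codes instead of one 16-bit code
    dcp1 = sum(codes[i] << (2*k) for k, i in enumerate([0, 2, 4, 6]))
    dcp2 = sum(codes[i] << (2*k) for k, i in enumerate([1, 3, 5, 7]))
    return dcp1, dcp2
```

Splitting the 16 comparisons into two 8-bit codes keeps the resulting histograms compact (2 × 256 bins rather than 65,536).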
The performance of face analysis and recognition systems depends on the quality of the acquired face data, which is influenced by numerous factors. Automatically assessing the quality of face data in terms of biometric utility can thus be useful for detecting low-quality data and making decisions accordingly. This survey provides an overview of the face image quality assessment literature, which predominantly focuses on visible-wavelength face image input. A trend towards deep learning-based methods is observed, including notable conceptual differences among the recent approaches, such as the integration of quality assessment into face recognition models. Besides image selection, face image quality assessment can also be used in a variety of other application scenarios, which are discussed herein. Open issues and challenges are pointed out, among them the importance of comparability in algorithm evaluations and the challenge for future work to create deep learning approaches that are interpretable in addition to providing accurate utility predictions.
Digitally altering, or retouching, face images is a common practice for images on social media, photo sharing websites, and even identification cards when the standards are not strictly enforced. This research demonstrates the effect of digital alterations on the performance of automatic face recognition, and also introduces an algorithm to classify face images as original or retouched with high accuracy. We first introduce two face image databases with unaltered and retouched images. Face recognition experiments performed on these databases show that when a retouched image is matched with its original image or an unaltered gallery image, the identification performance is considerably degraded, with a drop in matching accuracy of up to 25%. However, when images are retouched with the same style, the matching accuracy can be misleadingly high in comparison with matching original images. To detect retouching in face images, a novel supervised deep Boltzmann machine algorithm is proposed. It uses facial parts to learn discriminative features to classify face images as original or retouched. The proposed approach for classifying images as original or retouched yields an accuracy of over 87% on the data sets introduced in this paper and over 99% on three other makeup data sets used by previous researchers. This is a substantial increase in accuracy over the previous state-of-the-art algorithm, which has shown less than 50% accuracy in classifying original and retouched images from the ND-IIITD retouched faces database.
► We investigate gender recognition on real-life faces. ► We use the Labeled Faces in the Wild database in our study. ► Discriminative LBP features are learned to describe faces. ► A performance of 94.81% is obtained by applying SVM with the learned features.
Gender recognition is one of the fundamental face analysis tasks. Most of the existing studies have focused on face images acquired under controlled conditions. However, real-world applications require gender classification on real-life faces, which is much more challenging due to significant appearance variations in unconstrained scenarios. In this paper, we investigate gender recognition on real-life faces using the recently built Labeled Faces in the Wild (LFW) database. Local Binary Patterns (LBP) are employed to describe faces, and AdaBoost is used to select the discriminative LBP features. We obtain a performance of 94.81% by applying a Support Vector Machine (SVM) with the boosted LBP features. The public database used in this study makes future benchmarking and evaluation possible.
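The described pipeline, LBP description followed by discriminative feature selection and classification, can be sketched roughly as below. The Fisher-score ranking is a simple stand-in for the AdaBoost stump selection used in the paper, and the 4 × 4 block grid is an illustrative choice:

```python
import numpy as np

def lbp_image(img):
    """Per-pixel 8-neighbour LBP codes for the interior of a grayscale image."""
    c = img[1:-1, 1:-1]
    offs = [(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1)]
    out = np.zeros_like(c, dtype=np.int32)
    for i, (dy, dx) in enumerate(offs):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        out |= (nb >= c).astype(np.int32) << i
    return out

def block_histograms(img, grid=4):
    """Concatenated 256-bin LBP histograms over a grid of non-overlapping
    blocks (uniform-pattern 59-bin histograms are common in practice)."""
    codes = lbp_image(img)
    h, w = codes.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            blk = codes[by*h//grid:(by+1)*h//grid, bx*w//grid:(bx+1)*w//grid]
            feats.append(np.bincount(blk.ravel(), minlength=256))
    return np.concatenate(feats).astype(float)

def fisher_select(X, y, k):
    """Rank features by a Fisher-style score -- a simple stand-in for
    AdaBoost feature selection; the selected features would then feed
    the SVM classifier."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    score = (m0 - m1) ** 2 / (v0 + v1 + 1e-9)
    return np.argsort(score)[::-1][:k]
```

Selecting a few hundred of the thousands of histogram bins keeps the final SVM fast while retaining the bins that actually separate the two classes.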
The disclosure of face image features can seriously threaten the security of user information, which limits the application of face recognition technology in the Internet of Vehicles. This paper proposes a new method of generating and restoring private face images based on semantic features and adversarial examples. The SegNet network first segments the face image semantically; then a generative adversarial network generates adversarial examples and perturbs the semantic features of the face image. The perturbation positions are accurately controlled through a coefficient matrix, while the identity tag of the face image is concealed steganographically. A restoration network, trained as a discriminator against the generation network, extracts the real identity tag from the private face image and restores it to its original state. Compared to other state-of-the-art methods, private face images generated by the proposed method experimentally show high detection resistance, better quality, and a stronger median filtering defense.
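The core perturbation step, restricting an adversarial perturbation to semantically segmented regions via a coefficient matrix, can be sketched as follows. The function name and the L-infinity budget are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def apply_masked_perturbation(img, delta, mask, eps=8/255):
    """Add an adversarial perturbation only where the coefficient matrix
    `mask` is non-zero (e.g. on semantically segmented face regions).
    `delta` is clipped to an L_inf budget `eps`; pixel values (assumed
    normalized to [0, 1]) are kept in range."""
    delta = np.clip(delta, -eps, eps)
    return np.clip(img + mask * delta, 0.0, 1.0)
```

Because the mask zeroes the perturbation outside the selected semantic regions, the rest of the image is bit-identical to the original, which also simplifies later restoration.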
With the proliferation of face image manipulation (FIM) techniques such as Face2Face and Deepfake, more fake face images are spreading over the internet, which brings serious challenges to public confidence. Face image forgery detection has made considerable progress in exposing specific FIM, but there is still a scarcity of robust fake face detectors that expose face image forgeries under complex scenarios such as further compression, blurring, and scaling. Due to its relatively fixed structure, a convolutional neural network (CNN) tends to learn image content representations; for image forensics tasks, however, it should learn subtle manipulation traces. Thus, we propose an adaptive manipulation traces extraction network (AMTEN), which serves as pre-processing to suppress image content and highlight manipulation traces. AMTEN exploits an adaptive convolution layer to predict manipulation traces in the image, which are reused in subsequent layers to maximize manipulation artifacts by updating weights during the back-propagation pass. A fake face detector, namely AMTENnet, is constructed by integrating AMTEN with a CNN. Experimental results show that the proposed AMTEN achieves the desired pre-processing effect. When detecting fake face images generated by various FIM techniques, AMTENnet achieves an average accuracy of up to 98.52%, which outperforms the state-of-the-art works. When detecting face images with unknown post-processing operations, the detector still achieves an average accuracy of 95.17%.
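The idea behind AMTEN, suppressing image content so that only high-frequency manipulation traces remain, can be illustrated with a fixed prediction-error filter. Note the key difference: AMTEN learns its kernel adaptively during back-propagation, whereas this sketch uses a hand-fixed four-neighbour predictor:

```python
import numpy as np

# Fixed prediction kernel: each pixel is predicted as the mean of its four
# neighbours, so the residual cancels smooth image content and keeps
# high-frequency traces. AMTEN instead learns this kernel end-to-end.
KERNEL = np.array([[0.  , 0.25, 0.  ],
                   [0.25, 0.  , 0.25],
                   [0.  , 0.25, 0.  ]])

def manipulation_trace(img):
    """Residual map: pixel minus predicted pixel (content-suppressed)."""
    p = np.pad(img, 1, mode='edge')
    pred = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            pred += KERNEL[dy, dx] * p[dy:dy+img.shape[0], dx:dx+img.shape[1]]
    return img - pred
```

On smooth regions the residual is near zero, so whatever the detector's CNN sees afterwards is dominated by the high-frequency artifacts that manipulation leaves behind.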
• A pre-processing module, namely AMTEN, is designed to learn manipulation traces for face image forensics. • By integrating AMTEN with CNN, a robust fake face detector, namely AMTENnet, is presented to expose face image manipulations under complex scenarios. • A series of experiments proves that AMTENnet achieves better detection accuracy than existing works, especially under complex scenarios with typical post-processing operations. • The generalization ability of the proposed AMTENnet is also explored.
Recently, researchers have focused on face image manipulation detection and localization techniques because of their importance in image security applications. Previous research has not addressed the recovery of the face region after manipulation detection. This paper presents a new face region recovery algorithm (FRRA) to be included in face image manipulation detection (FIMD) algorithms. The proposed FRRA consists of two main algorithms: a face data generation algorithm and a face region restoration algorithm. Both start by detecting the face region using a Multi-task Cascaded Neural Network, followed by a face window selection process. In the face data generation algorithm, the recovery information is generated from the shrunk face window using the bicubic interpolation technique. In the face region restoration algorithm, the face region is zoomed using the bicubic interpolation technique. The proposed FRRA has been tested and compared with previous recovery methods on different color face images, and the results show that the FRRA recovers the face region with better visual quality at the same data length than previous methods. The main contributions of this research are a) the suggestion to include a face region recovery algorithm in FIMD, b) the study of previous recovery data generation algorithms for color face images, and c) the introduction of a new algorithm for generating the recovery data based on bicubic interpolation. In the future, the proposed algorithm can be included in recent FIMD algorithms to recover the face region, which can be very useful in practical applications, especially those used in data forensics systems.
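The shrink-then-zoom recovery pipeline can be sketched with a minimal separable bicubic (Keys) resampler. The interface and the Keys a = -0.5 kernel are illustrative choices, not the paper's implementation:

```python
import numpy as np

def _keys(x, a=-0.5):
    """Keys cubic convolution kernel (the usual 'bicubic' weight)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0

def _resize_axis(img, n_out, axis):
    """Resample one axis with 4-tap cubic convolution (edge-clamped)."""
    img = np.moveaxis(img, axis, 0)
    n_in = img.shape[0]
    scale = n_in / n_out
    out = np.zeros((n_out,) + img.shape[1:])
    for i in range(n_out):
        src = (i + 0.5) * scale - 0.5       # centre-aligned source coord
        base = int(np.floor(src))
        acc, wsum = 0.0, 0.0
        for t in range(base - 1, base + 3):
            w = _keys(src - t)
            acc = acc + w * img[np.clip(t, 0, n_in - 1)]
            wsum += w
        out[i] = acc / wsum
    return np.moveaxis(out, 0, axis)

def shrink(img, factor):
    """Generate compact recovery data by bicubic down-sampling."""
    h, w = img.shape[:2]
    return _resize_axis(_resize_axis(img, h // factor, 0), w // factor, 1)

def restore(small, shape):
    """Restore the face window by bicubic up-sampling back to `shape`."""
    return _resize_axis(_resize_axis(small, shape[0], 0), shape[1], 1)
```

The down-sampled window is what gets embedded as recovery data; after tampering is detected, up-sampling it back yields an approximation of the original face region.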
Recent years have witnessed significant advancements in face image generation using generative adversarial networks (GANs), leading to a high demand for GAN-generated face image quality assessment (GFIQA). However, the intrinsic distortion caused by the generation process poses a significant challenge for existing image quality assessment (IQA) models, which are typically designed for natural images. In addition, the image distortion usually varies across different GAN models, so a GFIQA model should possess high generalization capability. To account for this, we first establish a large GFIQA database by collecting various GAN-generated face images (GFIs) from existing popular GAN models. Subsequently, we propose a causal representation learning (CRL) scheme for a generalized GFIQA model (CRL-GFIQA), under the assumption that the causal knowledge of human quality assessment is shareable across different scenarios. In particular, we disentangle the learned features into causal and non-causal components with an invertible neural network, endowing the proposed CRL-GFIQA model with high generalization on unseen domains. Extensive experimental results demonstrate the effectiveness of our CRL-GFIQA model. The code and the constructed dataset will be made publicly available.
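The disentanglement relies on the invertibility of the network: splitting the feature vector into two halves and transforming only one of them, as in an additive coupling layer, gives a map whose inverse is exact, so no information is lost when separating the two components. A toy sketch (the dimensions and coupling function are arbitrary assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4)) * 0.1  # weights of a toy coupling function

def coupling_forward(z):
    """Additive coupling: z1 (say, the 'causal' half) passes through
    unchanged and parameterizes the shift applied to z2, so the whole
    map is trivially invertible."""
    z1, z2 = z[:4], z[4:]
    return np.concatenate([z1, z2 + np.tanh(W1 @ z1)])

def coupling_inverse(y):
    """Exact inverse: subtract the same shift computed from y1."""
    y1, y2 = y[:4], y[4:]
    return np.concatenate([y1, y2 - np.tanh(W1 @ y1)])
```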
Due to various factors that cause visual alterations in the collected facial images, gender classification based on image processing remains a performance challenge for classifier models. This study proposes a technique based on the Vision Transformer model for identifying a person’s gender from face images. It investigates how well a facial image-based model can distinguish between male and female genders, as well as the rarely discussed performance on the variation and complexity of data caused by differences in racial and age groups. We trained on the AFAD dataset and then carried out same-dataset and cross-dataset evaluations, the latter of which uses the UTKFace dataset. In the same-dataset evaluation, the highest validation accuracy of happens for the image of size pixels with eight patches, while the highest testing accuracy of occurs for the image of size pixels with patches. Moreover, the cross-dataset evaluation shows that the model works optimally for the image of size pixels with patches, with the model’s accuracy, precision, recall, and F1-score being , , , and , respectively. Furthermore, the misclassification analysis shows that the model works optimally in classifying the gender of people between 21 and 70 years old. The findings of this study can serve as a baseline for further analysis of the effectiveness of gender classifier models considering various physical factors.
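The patch size this study varies is the first design choice of a Vision Transformer: the input image is cut into non-overlapping patches that become the model's token sequence. A minimal sketch of that patching step (patch size and image size below are illustrative, not the study's settings):

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an H x W image into flattened non-overlapping patches, the
    tokenization step of a Vision Transformer. H and W must be divisible
    by `patch`; a real ViT would then linearly project each patch vector."""
    h, w = img.shape
    assert h % patch == 0 and w % patch == 0
    p = img.reshape(h // patch, patch, w // patch, patch)
    p = p.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return p  # (num_patches, patch*patch), row-major patch order
```

Smaller patches mean a longer token sequence (finer detail, more compute), which is exactly the trade-off the study's size/patch-count experiments probe.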