•Prior familiarity enhances face recognition when it includes conceptual information.•Perceptual familiarity does not improve face recognition and may even harm it.•Prior familiarity leads to a bias towards positive identifications.
Prior familiarity with a face seems to substantively change the way we encode and recognize later instances of that face. We report five experiments that examine the effects of varying levels of prior familiarity and conceptual knowledge on face recognition memory. All experiments employed a three-phase procedure in which faces were familiarized in varying ways and to varying extents prior to study and test. Across experiments, increased prior familiarity led to a simultaneous increase in both correct and false identification rates, whether familiarity was gained through passive exposure or through conceptual processing. Discriminability, on the other hand, was enhanced by prior familiarity only when the level of familiarity was high and involved conceptual processing (Experiments 1–3). Familiarity engendered by passive exposure affected response bias to the same extent as more active orienting tasks, but it reduced discriminability in a standard Old/New recognition test (Experiment 4) and did not enhance discriminability in a lineup identification task (Experiment 5). Familiarity engendered by trait evaluations (Experiments 1–3) or name learning (Experiments 2–5) increased discriminability and yielded a more liberal response bias. These results suggest that the benefits of prior familiarity for discriminability in recognition memory depend on the presence of prior conceptual knowledge. The implications of this work for eyewitness identification situations in which the suspect is known or familiar to the witness are discussed.
Deep Learning for Face Anti-Spoofing: A Survey. Yu, Zitong; Qin, Yunxiao; Li, Xiaobai; et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 05/2023, Volume 45, Issue 5
Journal Article · Peer-reviewed · Open Access
Face anti-spoofing (FAS) has lately attracted increasing attention due to its vital role in securing face recognition systems from presentation attacks (PAs). As more and more realistic PAs of novel types spring up, early-stage FAS methods based on handcrafted features become unreliable due to their limited representation capacity. With the emergence of large-scale academic datasets in the recent decade, deep learning based FAS has achieved remarkable performance and dominates this area. However, existing reviews in this field mainly focus on handcrafted features, which are outdated and uninspiring for the progress of the FAS community. In this paper, to stimulate future research, we present the first comprehensive review of recent advances in deep learning based FAS. It covers several novel and insightful components: 1) besides supervision with binary labels (e.g., '0' for bonafide vs. '1' for PAs), we also investigate recent methods with pixel-wise supervision (e.g., pseudo depth maps); 2) in addition to traditional intra-dataset evaluation, we collect and analyze the latest methods specially designed for domain generalization and open-set FAS; and 3) besides commercial RGB cameras, we summarize the deep learning applications under multi-modal (e.g., depth and infrared) or specialized (e.g., light field and flash) sensors. We conclude this survey by emphasizing current open issues and highlighting potential prospects.
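The contrast this survey draws between binary and pixel-wise supervision can be sketched minimally as follows. This is an illustration, not from the paper: the loss functions, the 2x2 "pseudo depth map", and all values are assumptions.

```python
# Sketch: binary supervision scores the whole image with one spoof label,
# while pixel-wise supervision regresses a per-pixel target such as a
# pseudo depth map (flat attacks regress to an all-zero map).
import math

def binary_loss(score, label):
    """Binary cross-entropy on a single spoof score (0 = bonafide, 1 = PA)."""
    eps = 1e-7
    score = min(max(score, eps), 1 - eps)
    return -(label * math.log(score) + (1 - label) * math.log(1 - score))

def pixelwise_loss(pred_map, target_map):
    """Mean squared error against a per-pixel target map."""
    n = len(pred_map) * len(pred_map[0])
    return sum((p - t) ** 2
               for pr, tr in zip(pred_map, target_map)
               for p, t in zip(pr, tr)) / n

# A flat printed-photo attack should regress to a zero depth map.
pred   = [[0.1, 0.0], [0.2, 0.1]]
target = [[0.0, 0.0], [0.0, 0.0]]
print(binary_loss(0.9, 1))         # low loss: confident PA prediction
print(pixelwise_loss(pred, target))
```

The pixel-wise variant gives the network a spatially dense training signal, which is the motivation the survey highlights for pseudo-depth supervision.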
In Mobile Edge Computing (MEC), many tasks require specific service support for execution and, in addition, have a dependent order of execution among the tasks. However, previous works often ignore the impact of having limited services cached at the edge nodes on (dependent) task offloading, which may lead to an infeasible offloading decision or a longer completion time. To bridge the gap, this article studies how to efficiently offload dependent tasks to edge nodes with limited (and predetermined) service caching. We formally define the problem of offloading dependent tasks with service caching (ODT-SC), and prove that there exists no algorithm with constant approximation for this hard problem. Then, we design an efficient convex programming based algorithm (CP) to solve this problem. Moreover, we study a special case with a homogeneous MEC and propose a favorite successor based algorithm (FS) to solve this special case with a competitive ratio of O(1). Extensive simulation results using Google data traces show that our proposed algorithms can significantly reduce applications' completion time by about 21-47 percent compared with other alternatives.
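The core constraint of ODT-SC can be illustrated with a toy greedy placement: a task may only be offloaded to a node that already caches its required service, and its start time is gated by its predecessors. This is a sketch of the problem setting, not the paper's CP or FS algorithms; the task graph, services, and timings are hypothetical.

```python
# Sketch: place dependent tasks on edge nodes whose fixed service caches
# can serve them, in topological order, minimizing each task's finish time.
def offload(tasks, deps, cache, exec_time):
    """tasks: {task: required_service}; deps: {task: [predecessors]};
    cache: {node: set(services)}; exec_time: {(task, node): seconds}.
    Returns {task: (node, finish_time)}."""
    placed = {}
    remaining = dict(tasks)
    while remaining:
        # pick any task whose predecessors are all placed (topological order)
        ready = next(t for t in remaining
                     if all(p in placed for p in deps.get(t, [])))
        service = remaining.pop(ready)
        start = max((placed[p][1] for p in deps.get(ready, [])), default=0.0)
        # only nodes that cache the required service are feasible
        feasible = [n for n in cache if service in cache[n]]
        node = min(feasible, key=lambda n: start + exec_time[(ready, n)])
        placed[ready] = (node, start + exec_time[(ready, node)])
    return placed

plan = offload(
    tasks={"A": "s1", "B": "s2"},
    deps={"B": ["A"]},
    cache={"n1": {"s1"}, "n2": {"s1", "s2"}},
    exec_time={("A", "n1"): 1.0, ("A", "n2"): 2.0, ("B", "n2"): 1.5},
)
print(plan)  # A runs on n1 (finish 1.0); B must wait for A and run on n2 (finish 2.5)
```

Note how B is forced onto n2 even though n1 is otherwise attractive: n1 does not cache service s2, which is exactly the feasibility gap the paper identifies in cache-oblivious offloading.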
•A set of six LBP-like features derived from local intensities and differences.•A labeled dominant pattern scheme is proposed to learn salient information.•Utilizing whitened PCA to produce more compact, robust and discriminative features.•Fused WPCA features improve the accuracy and robustness of face recognition.•Proposed face recognition system is highly robust to illumination variations.
This paper presents a simple and novel, yet highly effective approach for robust face recognition. Using LBP-like descriptors based on local accumulated pixel differences (Angular Differences and Radial Differences), we decomposed the local differences into complementary components of signs and magnitudes. Based on these descriptors, we developed labeled dominant patterns in which the most frequently occurring patterns and their labels were learned to capture discriminative textural information. Six histogram features were obtained from each given face image by concatenating spatial histograms extracted from non-overlapping subregions. A whitened PCA technique was used for dimensionality reduction to produce more compact, robust and discriminative features, which were then fused using the nearest neighbor classifier, with Euclidean distance as the similarity measure.
We evaluated the effectiveness of the proposed method on the Extended Yale B, the large-scale FERET, and CAS-PEAL-R1 databases, and found that the proposed method impressively outperforms other well-known systems, with a recognition rate of 74.6% on the CAS-PEAL-R1 lighting probe set.
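The sign/magnitude decomposition at the heart of such LBP-style descriptors can be sketched on a single 3x3 neighborhood. This is an illustration in the spirit of sign/magnitude LBP variants, not the paper's exact Angular/Radial Difference descriptors; the pixel values are hypothetical.

```python
# Sketch: decompose the eight local differences around a center pixel into
# a packed sign code (the LBP-like pattern) and a list of magnitudes.
def sign_magnitude(patch):
    """patch: 3x3 list of intensities. Returns (sign_code, magnitudes)."""
    c = patch[1][1]
    # clockwise neighbor order starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = [patch[r][col] - c for r, col in coords]
    # sign component: one bit per neighbor, set when neighbor >= center
    sign_code = sum((1 << i) for i, d in enumerate(diffs) if d >= 0)
    # magnitude component: absolute local differences
    magnitudes = [abs(d) for d in diffs]
    return sign_code, magnitudes

patch = [[52, 60, 55],
         [58, 57, 49],
         [61, 70, 48]]
code, mags = sign_magnitude(patch)
print(code, mags)  # 226 [5, 3, 2, 8, 9, 13, 4, 1]
```

In a full pipeline, codes like this would be accumulated into spatial histograms over non-overlapping subregions and then compressed with whitened PCA, as the abstract describes.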
We propose a method designed to push the frontiers of unconstrained face recognition in the wild with an emphasis on extreme out-of-plane pose variations. Existing methods either expect a single model to learn pose invariance by training on massive amounts of data or else normalize images by aligning faces to a single frontal pose. Contrary to these, our method is designed to explicitly tackle pose variations. Our proposed Pose-Aware Models (PAM) process a face image using several pose-specific, deep convolutional neural networks (CNN). 3D rendering is used to synthesize multiple face poses from input images, both to train these models and to provide additional robustness to pose variations at test time. Our paper presents an extensive analysis of the IARPA Janus Benchmark A (IJB-A), evaluating the effects that landmark detection accuracy, CNN layer selection, and pose model selection all have on the performance of the recognition pipeline. It further provides comparative evaluations on IJB-A and the PIPA dataset. These tests show that our approach outperforms existing methods, even matching the accuracy of methods that were specifically fine-tuned to the target dataset. Parts of this work previously appeared in [1] and [2].
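The routing idea behind pose-specific models can be sketched as a yaw-bucket dispatch. This is an illustration only: the bucket boundaries and the stand-in "models" (plain functions) are assumptions, not the PAM implementation.

```python
# Sketch: send a face to one of several pose-specific recognizers based on
# its estimated yaw angle, instead of forcing one model to be pose-invariant.
def pick_pose_model(yaw_degrees, models):
    """models maps a pose-bucket name to its recognizer."""
    if abs(yaw_degrees) <= 30:
        return models["frontal"]
    if abs(yaw_degrees) <= 75:
        return models["half-profile"]
    return models["profile"]

models = {
    "frontal": lambda img: "frontal-embedding",
    "half-profile": lambda img: "half-profile-embedding",
    "profile": lambda img: "profile-embedding",
}
print(pick_pose_model(10, models)("face"))   # frontal-embedding
print(pick_pose_model(-60, models)("face"))  # half-profile-embedding
```

Each bucket's recognizer only ever sees faces near its pose, which is the specialization PAM exploits.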
The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per-component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.
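The final fusion step of a component-based pipeline like this can be sketched as averaging per-component similarities. This is an illustration only: the component names, toy feature vectors, and equal-weight sum-rule fusion are assumptions, not the paper's exact scheme.

```python
# Sketch: score each facial component separately (e.g., MLBP histograms for
# eyes and mouth), then fuse the per-component similarities into one score.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fused_score(sketch_feats, photo_feats):
    """Average the cosine similarities computed per facial component."""
    comps = sketch_feats.keys()
    return sum(cosine(sketch_feats[c], photo_feats[c]) for c in comps) / len(comps)

sketch = {"eyes": [1.0, 0.0], "mouth": [0.0, 1.0]}
photo  = {"eyes": [1.0, 0.0], "mouth": [1.0, 0.0]}
print(fused_score(sketch, photo))  # (1.0 + 0.0) / 2 = 0.5
```

Fusing at the component level lets a strong match on one region (here, the eyes) survive a weak match elsewhere, which is harder with a single holistic descriptor.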
Developing an efficient long-range face recognition (FR) system involves multiple challenges that impact image quality and, as a result, FR performance. Such challenges include camera quality and settings, atmospheric conditions, non-cooperative subjects, face pose, and unfavorable lighting. To improve face image quality under such conditions, face restoration models have been proposed, which require using large-scale face datasets and augmentation methods to create low-quality, high-quality face image pairs for the restoration models to train on. However, choosing these augmentation methods is complex and can result in significant fluctuations in face recognition (verification or identification) performance. In this work, we explore the utilization of various image augmentation methods to generate pairs of low-quality images. These pairs are intended for training deep face restoration models, which will be integrated into an end-to-end long-range FR system. We assess our method's performance against benchmarks, achieving significant improvements, namely a 5% increase in Rank-1 accuracy, a 9% increase in Rank-5 accuracy, a 5% increase in AVC, and a 50% reduction in EER. These enhancements are achieved by employing Defocus Blur as the primary augmentation method for GAN Prior Embedded Network (GPEN). The dataset used for this work is a subset of the original MILAB-VTF(B) dataset, which includes indoor, high-quality face images of enrolled subjects that are matched against their outdoor 300-meter (~984ft) face image low-quality counterparts. This subset of faces simulates scenarios for long-range FR applications, such as perimeter security at airstrips, security at open-desert military bases, and similar environments. In these scenarios, subjects may be enrolled in the database during indoor sessions and subsequently matched against long-distance data.
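The pair-generation step can be sketched with a simple blur: degrade a high-quality image to synthesize its low-quality training counterpart. This is an illustration only, not the paper's GPEN setup; a real pipeline would apply a disk point-spread function to full images, while the 4x4 grid and 3x3 box kernel here are assumptions.

```python
# Sketch: produce a (low-quality, high-quality) training pair by applying a
# uniform defocus-like blur to a small 2D grayscale grid.
def box_blur(img):
    """3x3 mean filter with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

hq = [[0, 0, 0, 0],
      [0, 9, 9, 0],
      [0, 9, 9, 0],
      [0, 0, 0, 0]]
lq = box_blur(hq)   # (lq, hq) becomes one restoration training pair
print(lq[1][1])     # the sharp center has been smeared outward: 4.0
```

The restoration model is then trained to map `lq` back to `hq`, so choosing an augmentation whose degradation resembles real long-range capture (here, defocus) is what the paper's results turn on.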
Face recognition is a method for recognizing human faces using a camera. There are various applications of facial recognition technology, one of which is Face Unlock technology on smartphones, which functions as a security feature to unlock the smartphone through the user's face. In this study, facial recognition technology is used to automatically open the door of a room for registered faces. The research method used is the waterfall method, which has five stages: requirements analysis, design, implementation & unit testing, integration & system testing, and operation & maintenance. This study uses a Raspberry Pi 4 to run the automation system. The face detection process is based on the YuNet detection model, and the face recognition process uses the SFace facial recognition model.
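The decision step of such a door system can be sketched as an embedding comparison against the registered faces. This is an illustration only: the 0.5 cosine threshold and the toy embeddings are assumptions; in the described system, the embeddings would come from the SFace recognizer running on the Raspberry Pi.

```python
# Sketch: open the door when the probe face's embedding is close enough to
# any enrolled embedding under cosine similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def should_open(probe, enrolled, threshold=0.5):
    """True if the probe embedding matches any registered face."""
    return any(cosine(probe, emb) >= threshold for emb in enrolled.values())

enrolled = {"alice": [0.9, 0.1, 0.1]}
print(should_open([0.88, 0.12, 0.09], enrolled))  # True: same person
print(should_open([0.0, 1.0, 0.0], enrolled))     # False: stranger stays out
```

In practice the threshold would be tuned on held-out data, since it directly trades false unlocks against lockouts of legitimate users.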
Near-infrared-visual (NIR-VIS) heterogeneous face recognition (HFR) aims to match NIR face images with the corresponding VIS ones. It is a challenging task due to the sensing gaps among different modalities. Occlusions in the input face images make the task extremely complex. To tackle these problems, we present a Saliency Search Network (SSN) to extract domain-invariant identity features. We propose to automatically search the efficient parts of face images in a modality-aware manner, and remove redundant information. Moreover, the searching process is guided by an information bottleneck network, which mitigates the overfitting problems caused by small datasets. Extensive experiments on both complete and partial NIR-VIS HFR on multiple datasets demonstrate the effectiveness and robustness of the proposed method to modality discrepancy and occlusions.
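The idea of keeping only the informative parts of a face and discarding redundant or occluded regions can be sketched as a top-k patch selection. This is an illustration only, not the SSN architecture: the patch names, saliency scores, and keep-count are assumptions.

```python
# Sketch: rank face patches by a saliency score and keep only the best ones,
# so an occluded region (e.g., a scarf over the mouth) is dropped.
def keep_salient(patches, scores, keep=2):
    """Return the `keep` patches with the highest saliency scores."""
    ranked = sorted(zip(scores, patches), key=lambda p: p[0], reverse=True)
    return [patch for _, patch in ranked[:keep]]

patches = ["left-eye", "scarf-occluded-mouth", "nose", "right-eye"]
scores  = [0.9, 0.1, 0.7, 0.85]
print(keep_salient(patches, scores))  # ['left-eye', 'right-eye']
```

In SSN the "scores" are learned in a modality-aware way and regularized by an information bottleneck, rather than supplied by hand as here.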
Due to the strong analytical ability of big data, deep learning has been widely applied to model the collected data in the industrial Internet of Things (IoT). However, for privacy reasons, traditional data-gathering centralized learning is not applicable to industrial scenarios with sensitive training sets, such as face recognition and medical systems. Recently, federated learning has received widespread attention, since it trains a model by only sharing gradients without accessing training sets. But existing research reveals that the shared gradients still retain sensitive information about the training set. Even worse, a malicious aggregation server may return forged aggregated gradients. In this article, we propose VFL, a verifiable federated learning scheme with privacy preservation for big data in industrial IoT. Specifically, we use Lagrange interpolation to elaborately set interpolation points for verifying the correctness of the aggregated gradients. Compared with existing schemes, the verification overhead of VFL remains constant regardless of the number of participants. Moreover, we employ blinding technology to protect the privacy of the gradients. If no more than n-2 of n participants collude with the aggregation server, VFL guarantees that the encrypted gradients of other participants cannot be inverted. Experimental evaluations corroborate the practical performance of the presented VFL with high accuracy and efficiency.
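The role Lagrange interpolation plays in verification can be sketched in isolation: values pinned at fixed interpolation points determine a polynomial, and a redundant check point lets clients detect a server that reports a forged value. This is an illustration of the mathematical tool only, not the VFL protocol; the points and gradient values are hypothetical.

```python
# Sketch: evaluate the unique polynomial through given (x, y) points and use
# an extra check point to verify a reported aggregate.
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the interpolating polynomial through `points` at `x`."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Three participants' values are pinned at x = 1, 2, 3.
points = [(1, 4), (2, 7), (3, 1)]
# The server must also report the polynomial's value at a check point x = 4;
# clients recompute it locally and compare.
expected = lagrange_eval(points, 4)
print(expected)                      # -14
print(lagrange_eval(points, 4) == expected)  # True: an honest report verifies
```

A forged report at the check point would disagree with the clients' recomputation, and the cost of this check does not grow with the number of participants, mirroring the constant verification overhead claimed for VFL.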