In the past few years, there has been a leap from traditional palmprint recognition methodologies, which use handcrafted features, to deep-learning approaches that automatically learn feature representations from the input data. However, the information extracted by such deep-learning models typically corresponds to the global image appearance, where only the most discriminative cues from the input image are considered. This characteristic is especially problematic when data are acquired in unconstrained settings, as in contactless palmprint recognition systems, where visual artifacts caused by elastic deformations of the palmar surface typically appear in spatially local parts of the captured images. In this study, we address the problem of elastic deformations by introducing a new approach to contactless palmprint recognition
based on a novel CNN model, designed as a two-path architecture, where one path processes the input holistically, while the second path extracts local information from smaller image patches sampled from the input image. Since elastic deformations can be assumed to affect the global appearance most significantly, while having a lesser impact on spatially local image areas, the local processing path addresses the issues related to elastic deformations, thereby supplementing the information from the global processing path. The model is trained with a learning objective that combines the Additive Angular Margin (ArcFace) loss and the well-known center loss. With the proposed model design, the discriminative power of the learned image representation is significantly enhanced compared to standard holistic models, which, as we show in the experimental section, leads to state-of-the-art performance for contactless palmprint recognition. Our approach is tested on two publicly available contactless palmprint datasets, IITD and CASIA, and is shown to perform favorably against state-of-the-art methods from the literature. The source code for the proposed model is made publicly available.
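To make the combined learning objective concrete, the following is a minimal NumPy sketch of an ArcFace-plus-center-loss computation on a batch of embeddings. The function name, scale `s`, margin `m`, and weighting factor `lam` are illustrative assumptions, not the exact configuration used in the paper, and a real training setup would of course compute this inside an automatic-differentiation framework.

```python
import numpy as np

def arcface_center_loss(feats, labels, W, centers, s=30.0, m=0.5, lam=0.01):
    """Illustrative combined ArcFace + center loss (assumed hyperparameters).

    feats:   (B, D) unnormalized embeddings
    labels:  (B,) integer class ids
    W:       (D, C) classifier weight matrix
    centers: (C, D) per-class feature centers
    """
    # Normalize embeddings and class weights so the logits are cosines.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = W / np.linalg.norm(W, axis=0, keepdims=True)
    cos = f @ w                                   # (B, C) cosine similarities
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # ArcFace: add the angular margin m to the target-class angle only.
    target = cos.copy()
    rows = np.arange(len(labels))
    target[rows, labels] = np.cos(theta[rows, labels] + m)
    logits = s * target
    # Softmax cross-entropy over the margin-adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_probs[rows, labels].mean()
    # Center loss: pull each normalized embedding toward its class center.
    center = 0.5 * ((f - centers[labels]) ** 2).sum(axis=1).mean()
    return ce + lam * center
```

The ArcFace term enforces an angular margin between classes on the hypersphere, while the center term additionally compacts each class around its feature center, which is the intuition behind combining the two.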
Contactless palmprint recognition has recently begun to draw the attention of researchers. Unlike conventional palmprint images, contactless palmprint images are captured under free conditions and usually exhibit significant variations in translation, rotation, illumination and even background. Conventional, otherwise powerful palmprint recognition methods are not very effective for contactless palmprint recognition. Low-rank representation (LRR) is known to be a promising scheme for subspace clustering, owing to its success in exploring the multiple subspace structures of data. In this paper, we integrate LRR with an adaptive principal line distance for contactless palmprint recognition. The principal lines are the most distinctive features of the palmprint and can be correctly extracted in most cases; thereby, principal line distances can be used to determine the neighbors of a palmprint image. With the principal line distance penalty, the proposed method effectively improves the clustering results of LRR by increasing the weights of the affinities among nearby samples with small principal line distances. Therefore, the weighted affinity graph identified by the proposed method is more discriminative. Extensive experiments show that the proposed method achieves higher accuracy than both conventional powerful palmprint recognition methods and subspace clustering-based methods in contactless palmprint recognition. The proposed method also shows promising robustness to noisy palmprint images. The effectiveness of the proposed method indicates that using LRR for contactless palmprint recognition is feasible.
• LRR is used for contactless palmprint recognition for the first time.
• The proposed LRRIPLD can capture both the global and local structure of the whole data.
• The proposed LRRIPLD shows good robustness to noise.
• The proposed method performs better than state-of-the-art methods.
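The affinity re-weighting step described in the abstract can be sketched as follows. This is a simplified illustration under assumed inputs, namely an already-computed LRR coefficient matrix `Z` and a precomputed symmetric matrix of principal line distances `pld`; the function name, neighborhood size `k`, and `boost` factor are hypothetical and do not reproduce the authors' exact formulation.

```python
import numpy as np

def weighted_affinity(Z, pld, k=3, boost=2.0):
    """Hypothetical sketch: boost LRR affinities between samples that are
    close in principal line distance.

    Z:   (N, N) LRR coefficient matrix (each column represents a sample
         as a combination of the others)
    pld: (N, N) symmetric matrix of principal line distances
    k:   number of nearest neighbors (under pld) whose affinities get boosted
    """
    # Symmetric affinity graph from the LRR coefficients.
    A = 0.5 * (np.abs(Z) + np.abs(Z).T)
    W = np.ones_like(A)
    for i in range(len(A)):
        # Indices of the k smallest principal line distances, excluding self.
        order = np.argsort(pld[i])
        nbrs = [j for j in order if j != i][:k]
        W[i, nbrs] = boost
    W = np.maximum(W, W.T)                # keep the re-weighted graph symmetric
    return A * W
```

The resulting weighted affinity graph would then be fed to a standard spectral clustering step, with the boosted edges making neighborhoods defined by the principal lines more influential in the final partition.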