E-resources
Full text
Peer-reviewed
  • CAN-GAN: Conditioned-attention normalization GAN for face age synthesis
    Shi, Chenglong; Zhang, Jiachao; Yao, Yazhou; Sun, Yunlian; Rao, Huaming; Shu, Xiangbo

    Pattern Recognition Letters, October 2020, Volume 138
    Journal Article

    • We build a novel network architecture, the conditioned-attention normalization GAN (CAN-GAN), for age synthesis.
    • We design a new conditioned-attention normalization (CAN) layer that enhances aging-relevant information via an attention map.
    • We extend a contribution-aware age classifier (CAAC) to improve the ability of the discriminator.

    This work aims to freely translate an input face into an aged face with robust identity preservation, a convincing aging effect, and an authentic visual appearance. Witnessing the success of GANs in image synthesis, researchers have employed GANs to address the problem of face aging synthesis. However, most GAN-based methods assume that all facial regions age equally, ignoring the fact that different facial regions have distinct aging speeds and aging patterns. To this end, we propose a novel Conditioned-Attention Normalization GAN (CAN-GAN) for age synthesis that leverages the aging difference between two age groups to capture facial aging regions with different attention factors. In particular, a new Conditioned-Attention Normalization (CAN) layer is designed to enhance the aging-relevant information of the face while smoothing the aging-irrelevant information via an attention map. Since different facial attributes contribute to the discrimination of age groups to different degrees, we further present a Contribution-Aware Age Classifier (CAAC) that finely measures the importance of each element of the face vector for age classification. Qualitative and quantitative experiments on several commonly used datasets show the advantage of CAN-GAN over other competitive methods.
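
    The abstract describes the CAN mechanism only at a high level. As a rough illustration, the sketch below shows one plausible way an attention-conditioned normalization layer could be wired up in PyTorch: features are instance-normalized, rescaled and shifted by parameters derived from the age condition, and the result is blended with the original features through a learned per-pixel attention map. The class and parameter names, the choice of instance normalization, and the blending rule are assumptions made for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a conditioned-attention normalization (CAN) layer.
# The abstract only states that an attention map enhances aging-relevant
# regions while smoothing aging-irrelevant ones; everything below is an
# assumed, illustrative implementation, not the authors' formulation.
import torch
import torch.nn as nn


class CANLayer(nn.Module):
    def __init__(self, num_channels: int, cond_dim: int):
        super().__init__()
        # Parameter-free instance norm; scale/shift come from the age condition.
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(cond_dim, num_channels)
        self.to_beta = nn.Linear(cond_dim, num_channels)
        # Attention map predicted from the features (one value per pixel).
        self.to_attn = nn.Sequential(
            nn.Conv2d(num_channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, age_cond: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; age_cond: (B, cond_dim) encoding of the
        # source-to-target age-group transition.
        gamma = self.to_gamma(age_cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_beta(age_cond).unsqueeze(-1).unsqueeze(-1)
        modulated = gamma * self.norm(x) + beta   # condition-driven renormalization
        attn = self.to_attn(x)                    # (B, 1, H, W), values in [0, 1]
        # Emphasize aging-relevant regions; pass others through largely unchanged.
        return attn * modulated + (1.0 - attn) * x


if __name__ == "__main__":
    layer = CANLayer(num_channels=64, cond_dim=10)
    feats = torch.randn(2, 64, 32, 32)
    cond = torch.randn(2, 10)  # e.g. an encoding of source/target age groups
    print(layer(feats, cond).shape)  # torch.Size([2, 64, 32, 32])
```

    The blend `attn * modulated + (1 - attn) * x` is one simple way to realize "enhance aging-relevant, smooth aging-irrelevant" behavior; the paper's actual attention factors and normalization details may differ.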