Full text
Peer reviewed
  • Image generation via latent...
    Chen, Yanxiang; Wu, Guang; Zhou, Jie; Qi, Guojun

    Neurocomputing (Amsterdam), 05/2019, Volume: 340
    Journal Article

    Much research has made progress in learning good generative models by combining the advantages of GANs and VAEs, where latent-space learning is crucial for generating high-quality images. However, these existing works mainly seek to impose a given distribution on the latent space in advance, or to make it obey a Gaussian distribution via a KL-divergence penalty, which makes it difficult to choose a suitable prior distribution for different datasets. In this paper we therefore develop a two-stage method combining an AE and a GAN under unsupervised and supervised conditions respectively, with each stage designed to improve the modeling of the latent distribution. In the first stage, an adversarial procedure matches the latent distribution with the real data distribution determined by an arbitrary dataset, without access to a pre-set prior distribution. In the second stage, besides one adversarial procedure trained to output images, another adversarial procedure is designed to optimize the latent distribution of the first stage via back-propagation. Loop optimization of the network parameters during training thus ultimately allows the framework to map input noise to a high-quality image. Extensive experiments verify the performance of latent-space representation and image generation on several datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CelebA. The code and tutorials are released at https://github.com/TwistedW/CAE-CGAN.
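    The two-stage structure described above can be sketched schematically. The snippet below is a minimal illustrative sketch, not the authors' implementation: the tiny linear maps standing in for the encoder E and generator G, the batch sizes, and the loss proxies are all hypothetical, and the adversarial critics are only indicated by comments.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    d_data, d_latent, batch = 8, 2, 64

    # Hypothetical linear stand-ins for the encoder E and generator G.
    E = rng.normal(size=(d_data, d_latent)) * 0.1
    G = rng.normal(size=(d_latent, d_data)) * 0.1

    x = rng.normal(size=(batch, d_data))  # a batch of "real" data

    # Stage 1 (unsupervised): autoencode the data. In the full method an
    # adversarial critic on the codes pushes the latent distribution E(x)
    # to match the data-induced distribution, rather than a fixed prior
    # such as N(0, I) enforced by a KL penalty.
    z = x @ E
    x_rec = z @ G
    rec_loss = float(np.mean((x - x_rec) ** 2))

    # Stage 2 (supervised): map sampled noise through G. Here one
    # adversarial procedure would score the output images, while a second
    # back-propagates into the stage-1 latent distribution to refine it.
    noise = rng.normal(size=(batch, d_latent))
    x_fake = noise @ G
    ```

    In the paper's framework these two stages are optimized in a loop, so that the latent distribution learned in stage 1 and the image generator of stage 2 improve each other during training.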