  • Data-efficient image captio...
    Lu, Yue; Guo, Chao; Dai, Xingyuan; Wang, Fei-Yue

    Neurocomputing (Amsterdam), 06/2022, Volume: 490
    Journal Article

    Image captioning of fine art paintings aims to generate a sentence that describes the content of a painting. Compared with photographic images, annotated painting data with clear and precise content descriptions are scarce. Moreover, paintings often rely on abstract forms of expression, which makes it hard to extract representative features. In this paper, we propose a virtual-real semantic alignment training process to address these challenges in painting captioning. To provide sufficient training data, we generate a virtual painting captioning dataset by applying style transfer to a large-scale photographic image captioning dataset while retaining its annotations. To cope with abstract expression, we employ a semantic alignment loss between photographic image features and virtual painting features to guide the training of the painting feature extractor. We evaluate our method in two data-hungry scenarios in which few or no annotated painting data are available for training. On a public painting captioning dataset and our own annotated painting captioning dataset, our model achieves significant improvements and higher data efficiency than the baselines in both scenarios.
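
    For illustration only, the following is a minimal Python (PyTorch-style) sketch of how the virtual-real semantic alignment described above might be combined with a captioning loss during training. The encoder and decoder names, the use of a mean-squared alignment distance, and the loss weighting are assumptions for the sketch, not details given in the abstract.

        import torch.nn.functional as F

        def alignment_step(photo_encoder, paint_encoder, caption_decoder,
                           photo_batch, virtual_paint_batch, captions,
                           align_weight=1.0):
            """One hypothetical training step: caption the virtual paintings while
            pulling their features toward the corresponding photographic features."""
            # Reference features from the photographic images (kept fixed here).
            photo_feats = photo_encoder(photo_batch).detach()
            # Features from the style-transferred (virtual) paintings; this is the
            # painting feature extractor being trained.
            paint_feats = paint_encoder(virtual_paint_batch)

            # Semantic alignment loss between photographic and virtual-painting
            # features (assumed here to be an L2 distance).
            align_loss = F.mse_loss(paint_feats, photo_feats)

            # Captioning loss on the virtual paintings, which retain the original
            # photographic annotations (teacher forcing, token-level cross-entropy).
            logits = caption_decoder(paint_feats, captions[:, :-1])
            caption_loss = F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), captions[:, 1:].reshape(-1)
            )

            return caption_loss + align_weight * align_loss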