  • Training DCNN by Combining ...
    Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Zheng, Nanning

    IEEE Transactions on Neural Networks and Learning Systems, 07/2018, Volume: 29, Issue: 7
    Journal Article

    In this paper, we build a multilabel image classifier using a general deep convolutional neural network (DCNN). We propose a novel objective function that consists of three parts, i.e., a max-margin objective, a max-correlation objective, and a correntropy loss. The max-margin objective explicitly enforces that the minimum score of the positive labels must exceed the maximum score of the negative labels by a predefined margin, which not only improves the accuracy of the multilabel classifier but also eases threshold determination. The max-correlation objective makes the DCNN model learn a latent semantic space that maximizes the correlations between the feature vectors of the training samples and their corresponding ground-truth label vectors projected into this space. Instead of using the traditional softmax loss, we adopt the correntropy loss from the information theory field to minimize the training errors of the DCNN model. The proposed framework can be trained end to end. Comprehensive experimental evaluations on the Pascal VOC 2007 and MIR Flickr 25K multilabel benchmark data sets with four DCNN models, i.e., AlexNet, VGG-16, GoogLeNet, and ResNet, demonstrate that the proposed objective function can remarkably improve the classification accuracy of a DCNN model on the task of multilabel image classification.
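    A minimal sketch of two of the loss terms described in the abstract, assuming a per-sample formulation with raw scores and 0/1 label vectors. The function names, the hinge form of the max-margin term, and the Gaussian kernel width `sigma` are illustrative assumptions, not the paper's exact formulation.

    ```python
    import math

    def max_margin_loss(scores, labels, margin=1.0):
        # Hinge penalty when the lowest positive-label score fails to
        # exceed the highest negative-label score by `margin`.
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        return max(0.0, margin - (min(pos) - max(neg)))

    def correntropy_loss(preds, targets, sigma=1.0):
        # Correntropy-induced loss: 1 minus a Gaussian kernel of the
        # prediction error, averaged over outputs; a robust alternative
        # to squared error that downweights large outliers.
        errs = (p - t for p, t in zip(preds, targets))
        return sum(1.0 - math.exp(-e * e / (2 * sigma * sigma)) for e in errs) / len(preds)
    ```

    With scores `[2.0, 0.5, -1.0]` and labels `[1, 0, 0]`, the positive score already clears the best negative by more than the margin, so the max-margin term is zero; shrinking the positive score below the best negative makes the penalty grow linearly.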