Full text
Peer-reviewed
  • Perturbation-Seeking Generative Adversarial Networks
    Cheng, Gong; Sun, Xuxiang; Li, Ke; Guo, Lei; Han, Junwei

    IEEE Transactions on Geoscience and Remote Sensing, 2022, Volume 60
    Journal Article

    Methods for remote sensing image (RSI) scene classification based on deep convolutional neural networks (DCNNs) have achieved prominent success. However, DCNNs are highly vulnerable to adversarial examples, obtained by adding imperceptible perturbations to clean images, which makes exploring effective defense methods worthwhile. To date, numerous countermeasures against adversarial examples have been proposed, but how to improve defensive ability against unknown attacks remains an open question. To address this issue, in this article, we propose an effective defense framework for RSI scene classification, named perturbation-seeking generative adversarial networks (PSGANs). In brief, a new training framework is designed to train the classifier on examples generated during the image reconstruction process, in addition to clean and adversarial examples. These generated examples can emulate arbitrary kinds of unknown attacks during training and are thus used to eliminate the blind spots of a classifier. To support the proposed training framework, a reconstruction method is developed. First, instead of modeling the distribution of clean examples, we model the distribution of the perturbations added to adversarial examples. Second, to trade off the diversity of the reconstructed examples against the optimization of PSGAN, a scale factor named the seeking radius is introduced to scale the generated perturbations before they are subtracted from the given adversarial examples. Comprehensive and extensive experimental results on three widely used benchmarks for RSI scene classification demonstrate the great effectiveness of PSGAN against both known and unknown attacks. Our source code is available at https://github.com/xuxiangsun/PSGAN .
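
    The abstract's reconstruction step — a generator models the perturbation distribution, and a seeking radius scales the generated perturbation before it is subtracted from the adversarial example — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the `generator` stand-in, function name `reconstruct`, array shapes, and the clipping to `[0, 1]` are all assumptions for the sake of a runnable toy example.

    ```python
    import numpy as np

    def reconstruct(x_adv, generator, seeking_radius=0.5):
        """Sketch of the seeking-radius reconstruction idea.

        x_adv          : adversarial image, values assumed in [0, 1]
        generator      : model estimating the added perturbation (not the clean image)
        seeking_radius : scale factor trading off diversity vs. optimization
        """
        perturbation = generator(x_adv)  # estimated perturbation, same shape as x_adv
        # scale the generated perturbation, then subtract it from the adversarial input
        return np.clip(x_adv - seeking_radius * perturbation, 0.0, 1.0)

    # toy generator: pretend the estimated perturbation is a constant 0.1 field
    toy_generator = lambda x: np.full_like(x, 0.1)

    x_adv = np.full((2, 2), 0.6)  # stand-in for an adversarial image patch
    x_rec = reconstruct(x_adv, toy_generator, seeking_radius=0.5)
    # every pixel: 0.6 - 0.5 * 0.1 = 0.55
    ```

    A smaller seeking radius removes less of the estimated perturbation, yielding more diverse intermediate examples; a radius near 1 subtracts the full estimate.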