Full text | Peer reviewed | Open access
  • Multiscale Dual-Branch Residual Spectral-Spatial Network With Attention for Hyperspectral Image Classification
    Ghaderizadeh, Saeed; Abbasi-Moghadam, Dariush; Sharifi, Alireza; Tariq, Aqil; Qin, Shujing

    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, Volume 15
    Journal Article

    Advances in remote sensing imagery in recent years have made it possible to identify materials in inaccessible environments and to study natural materials on a large scale. Hyperspectral images (HSIs), with their unique characteristics, are a particularly rich source of information for a wide range of applications. However, several problems reduce the accuracy of HSI classification: ineffective extracted features, noise, the correlation between bands, and, most importantly, the limited number of labeled samples. To improve accuracy when training samples are limited, this article proposes a multiscale dual-branch residual spectral-spatial network with attention for HSI classification, named MDBRSSN. First, because of the correlation and redundancy between HSI bands, a principal component analysis (PCA) operation is applied to preprocess the raw HSI data. Then, in MDBRSSN, a dual-branch structure is designed to extract useful spectral-spatial features of the HSI. The multiscale abstract information extracted by the convolutional neural network improves classification accuracy on complex hyperspectral data. In addition, attention mechanisms applied separately to each branch enable MDBRSSN to optimize and refine the extracted feature maps. Such a framework can learn and fuse deeper hierarchical spectral-spatial features with fewer training samples. MDBRSSN is designed to achieve high classification accuracy compared to state-of-the-art methods when training samples are limited, which is demonstrated by experiments on four datasets. On Salinas, Pavia University, Indian Pines, and Houston 2013, the proposed model obtained overall accuracies of 99.64%, 98.93%, 98.17%, and 96.57% using only 1%, 1%, 5%, and 5% of the labeled data for training, respectively, which is much better than the compared state-of-the-art methods.
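
    To make the pipeline described in the abstract concrete, below is a minimal sketch, not the authors' MDBRSSN implementation: it shows PCA band reduction of an HSI cube followed by a dual-branch CNN in which each branch is refined by a squeeze-and-excitation-style attention block (used here as a stand-in for the paper's attention mechanism) before the two feature maps are fused for classification. The patch size, channel counts, kernel sizes, and number of retained principal components are illustrative assumptions, not values taken from the paper.

    # Minimal sketch (not the authors' code) of the pipeline the abstract describes:
    # PCA band reduction, a dual-branch spectral/spatial CNN with a simple
    # channel-attention block per branch, and feature fusion for classification.
    # All hyperparameters below are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn


    def pca_reduce(hsi_cube: np.ndarray, n_components: int = 30) -> np.ndarray:
        """Project an (H, W, B) hyperspectral cube onto its top principal components."""
        h, w, b = hsi_cube.shape
        flat = hsi_cube.reshape(-1, b).astype(np.float64)
        flat -= flat.mean(axis=0)
        cov = np.cov(flat, rowvar=False)               # B x B band covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
        return (flat @ top).reshape(h, w, n_components)


    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style channel attention, a stand-in for the
        attention applied to each branch in the abstract."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):
            w = self.fc(x).unsqueeze(-1).unsqueeze(-1)
            return x * w


    class DualBranchHSIClassifier(nn.Module):
        """Two convolutional branches with different kernel sizes (a rough proxy
        for multiscale spectral/spatial feature extraction), each refined by
        attention, then fused for classification."""
        def __init__(self, in_channels: int = 30, num_classes: int = 16):
            super().__init__()
            def branch(kernel):
                return nn.Sequential(
                    nn.Conv2d(in_channels, 64, kernel, padding=kernel // 2),
                    nn.BatchNorm2d(64), nn.ReLU(),
                    nn.Conv2d(64, 64, kernel, padding=kernel // 2),
                    nn.BatchNorm2d(64), nn.ReLU(),
                )
            self.spectral = branch(1)   # 1x1 convolutions: per-pixel spectral mixing
            self.spatial = branch(3)    # 3x3 convolutions: local spatial context
            self.att_spec = ChannelAttention(64)
            self.att_spat = ChannelAttention(64)
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, num_classes),
            )

        def forward(self, patch):
            f1 = self.att_spec(self.spectral(patch))
            f2 = self.att_spat(self.spatial(patch))
            return self.head(torch.cat([f1, f2], dim=1))


    if __name__ == "__main__":
        cube = np.random.rand(145, 145, 200)          # stand-in for a raw HSI scene
        reduced = pca_reduce(cube, n_components=30)   # (145, 145, 30)
        # One 11x11 patch around a labelled pixel, channels-first for PyTorch.
        patch = torch.from_numpy(
            reduced[60:71, 60:71, :].transpose(2, 0, 1)
        ).float().unsqueeze(0)
        logits = DualBranchHSIClassifier()(patch)
        print(logits.shape)                           # torch.Size([1, 16])

    The sketch uses plain 2-D convolutions with two kernel sizes and a single fusion point; the paper's residual connections and its specific multiscale and attention designs are not reproduced here.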