E-resources
Full text
  • CASNet: A Cross-Attention Siamese Network for Video Salient Object Detection
    Ji, Yuzhu; Zhang, Haijun; Jie, Zequn; Ma, Lin; Jonathan Wu, Q. M.

    IEEE Transactions on Neural Networks and Learning Systems, June 2021, Volume: 32, Issue: 6
    Journal Article

    Recent works on video salient object detection have demonstrated that directly transferring the generalization ability of image-based models to video data without modeling spatial-temporal information remains nontrivial and challenging. Considering both intraframe accuracy and interframe consistency of saliency detection, this article presents a novel cross-attention-based encoder-decoder model under the Siamese framework (CASNet) for video salient object detection. A baseline encoder-decoder model trained with the Lovász softmax loss function is adopted as the backbone network to guarantee the accuracy of intraframe salient object detection. Self- and cross-attention modules are incorporated into our model to preserve the saliency correlation between frames and improve interframe salient object detection consistency. Extensive experimental results obtained by ablation analysis and cross-data-set validation demonstrate the effectiveness of our proposed method. Quantitative results indicate that our CASNet model outperforms 19 state-of-the-art image- and video-based methods on six benchmark data sets.
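
    To make the cross-attention idea in the abstract concrete, the sketch below shows a minimal, generic cross-attention block operating on the encoder features of two video frames. It is not the authors' CASNet implementation: the module name CrossAttention2d, the 1x1-convolution projections, the projection width, and the residual connection are illustrative assumptions, and PyTorch is assumed as the framework.

    # Hedged sketch: generic cross-attention between feature maps of two frames,
    # in the spirit of the Siamese cross-attention idea described in the abstract.
    # Not the authors' CASNet code; all names and sizes are illustrative.
    import torch
    import torch.nn as nn

    class CrossAttention2d(nn.Module):
        """Attends features of a query frame to features of a reference frame."""

        def __init__(self, channels: int, reduced: int = 64):
            super().__init__()
            # 1x1 convolutions project features into query/key/value spaces.
            self.to_q = nn.Conv2d(channels, reduced, kernel_size=1)
            self.to_k = nn.Conv2d(channels, reduced, kernel_size=1)
            self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
            self.scale = reduced ** -0.5

        def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
            # feat_a, feat_b: (B, C, H, W) encoder features of two frames.
            b, c, h, w = feat_a.shape
            q = self.to_q(feat_a).flatten(2).transpose(1, 2)     # (B, HW, reduced)
            k = self.to_k(feat_b).flatten(2)                     # (B, reduced, HW)
            v = self.to_v(feat_b).flatten(2).transpose(1, 2)     # (B, HW, C)
            attn = torch.softmax(q @ k * self.scale, dim=-1)     # (B, HW, HW)
            out = (attn @ v).transpose(1, 2).reshape(b, c, h, w) # back to (B, C, H, W)
            # Residual connection keeps the per-frame (intraframe) features intact
            # while injecting correlated saliency cues from the other frame.
            return feat_a + out

    if __name__ == "__main__":
        f1 = torch.randn(1, 256, 28, 28)  # features of frame t
        f2 = torch.randn(1, 256, 28, 28)  # features of frame t+1
        print(CrossAttention2d(256)(f1, f2).shape)  # torch.Size([1, 256, 28, 28])

    In a Siamese setup along the lines described above, the same block would be applied symmetrically (frame A attending to frame B and vice versa) so that both decoder branches receive temporally correlated features; the exact placement of the self- and cross-attention modules in CASNet is detailed in the article itself.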