Full text
Peer-reviewed Open Access
  • Detecting Camouflaged Objec...
    Wang, Yuye; Chen, Tianyou; Hu, Xiaoguang; Shi, Jiaqi; Jia, Zichong

    IEEE Access, 01/2024, Volume: 12
    Journal Article

    Camouflaged objects blend into their surroundings. Compared with generic object detection and segmentation, camouflaged object detection is therefore considerably harder: boundaries are indistinct, and foreground targets share strong intrinsic similarities with the surrounding environment. Although many algorithms have been proposed and perform well in various scenarios, they can still struggle with blurred boundaries and miss camouflaged targets in challenging scenes. In this paper, we introduce a multi-stage framework that segments camouflaged objects through coarse-to-fine refinement. Specifically, our network comprises three decoders, each with a distinct role. The first decoder introduces the Bi-directional Locating Module, which mines foreground and background cues to improve target localization. The second decoder exploits boundary information, using the Multi-level Feature Fusion Module to generate prediction maps with finer boundaries. The third decoder introduces the Mask-guided Fusion Module, which processes high-resolution features under the guidance of the second decoder's results, preserving structural details and producing fine-grained prediction maps. By integrating the three decoders, our model effectively identifies and segments camouflaged targets. Extensive experiments on three commonly used benchmark datasets show that, even without pre-processing or post-processing, our model outperforms 14 state-of-the-art algorithms.
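    The coarse-to-fine, three-decoder pipeline described in the abstract can be sketched at a toy level. The module names below come from the abstract, but every function body is a hypothetical NumPy stand-in chosen to illustrate the data flow (coarse localization → boundary refinement → mask-guided fusion of high-resolution features), not the authors' implementation.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def coarse_locate(feat):
        """Stage 1 (stand-in for the Bi-directional Locating Module):
        combine foreground and background cues into a coarse location map."""
        fg = sigmoid(feat)        # foreground evidence
        bg = sigmoid(-feat)       # background evidence
        return fg * (1.0 - bg)    # toy fusion of the two cues

    def boundary_refine(coarse):
        """Stage 2 (stand-in for the Multi-level Feature Fusion Module):
        emphasize boundary regions via a gradient-magnitude cue."""
        gy, gx = np.gradient(coarse)
        boundary = np.sqrt(gx ** 2 + gy ** 2)
        return np.clip(coarse + boundary, 0.0, 1.0)

    def mask_guided_fuse(high_res_feat, refined):
        """Stage 3 (stand-in for the Mask-guided Fusion Module):
        gate high-resolution features with the stage-2 prediction map."""
        up = np.kron(refined, np.ones((2, 2)))  # naive 2x upsampling to match
        return sigmoid(high_res_feat) * up

    # Toy inputs: a low-res feature map and a 2x-resolution counterpart.
    rng = np.random.default_rng(0)
    feat = rng.normal(size=(8, 8))
    high_res_feat = rng.normal(size=(16, 16))

    coarse = coarse_locate(feat)            # coarse localization
    refined = boundary_refine(coarse)       # boundary-aware refinement
    final = mask_guided_fuse(high_res_feat, refined)  # fine-grained map
    print(final.shape)
    ```

    The point of the sketch is the staging: each stage consumes the previous stage's prediction as guidance, so errors in localization can be corrected before the expensive high-resolution fusion.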