Full text
Peer-reviewed Open access
  • Object representation enhan...
    Li, Huifang; Li, Yidong; Jin, Yi; Wang, Tao

    International journal of intelligent systems, November 2022, Volume 37, Issue 11
    Journal Article

    Self-supervised colocalization aims to localize common objects in a data set containing only a single superclass, without using human-annotated labels. Existing methods achieve impressive results by employing self-supervised pretext learning, but a common limitation remains: they either overextend activations to the background or activate only the most discriminative part of the object. To alleviate this problem, we propose an object representation enhancement model that weakens background distraction and mines complementary object regions during object representation learning. Specifically, we first propose an Object-aware Representation Enhancement (ORE) module that estimates an object mask for each input image, guiding the model to disregard background content and focus on the foreground object. The ORE module and the subsequent self-supervised learning mutually reinforce each other. We then propose a Masked Self-supervised Learning branch and design a masked attention consistency objective that induces the model to activate complementary parts of the object effectively. Extensive experiments on four fine-grained data sets demonstrate the superiority of the proposed model.
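    The mask-then-compare idea described in the abstract (estimate a foreground mask, then enforce consistency between attention maps over the object region) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the thresholding heuristic for the object mask, and the mean-squared form of the consistency term are all assumptions.

    ```python
    import numpy as np

    def estimate_object_mask(feature_map, threshold=0.5):
        """Estimate a binary foreground mask from a CHW feature map by
        thresholding the channel-averaged activation, normalized to [0, 1].
        (Illustrative heuristic; the paper's ORE module is learned.)"""
        act = feature_map.mean(axis=0)                              # H x W activation map
        act = (act - act.min()) / (act.max() - act.min() + 1e-8)    # normalize to [0, 1]
        return (act >= threshold).astype(np.float32)

    def masked_attention_consistency(attn_full, attn_masked, mask):
        """Mean squared difference between two attention maps, computed
        only over the estimated foreground region (assumed loss form)."""
        diff = (attn_full - attn_masked) ** 2
        return float((diff * mask).sum() / (mask.sum() + 1e-8))

    rng = np.random.default_rng(0)
    feat = rng.random((8, 4, 4)).astype(np.float32)  # toy 8-channel, 4x4 feature map
    mask = estimate_object_mask(feat)
    # Identical attention maps incur zero consistency penalty.
    loss = masked_attention_consistency(feat.mean(axis=0), feat.mean(axis=0), mask)
    ```

    In the actual model the two attention maps would come from the original image and a masked view of it, so a nonzero loss pushes the network to activate complementary object parts rather than only the most discriminative one.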