Full text
Peer reviewed
  • Class Enhancement Losses Wi...
    Dao, Son Duy; Shi, Hengcan; Phung, Dinh; Cai, Jianfei

    IEEE Transactions on Multimedia, 01/2024, Volume 26
    Journal Article

    Recent mask proposal models have significantly improved the performance of open-vocabulary semantic segmentation. However, the use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels. Furthermore, they ignore the semantic relationships among text embeddings, which arguably can be highly informative for open-vocabulary prediction, as some classes are closely related to others. To this end, this article proposes novel class enhancement losses that bypass the use of the 'background' embedding during training and simultaneously exploit the semantic relationship between text embeddings and mask proposals by ranking their similarity scores. To further capture the relationship between base and novel classes, we propose an effective pseudo label generation pipeline using a pretrained vision-language model. Extensive experiments on several benchmark datasets show that our method achieves the best overall performance for open-vocabulary semantic segmentation. Our method is flexible and can also be applied to the zero-shot semantic segmentation problem.
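
    The ranking idea described in the abstract can be illustrated with a short sketch. This is not the authors' released code: the function name ranking_similarity_loss, the tensor shapes, and the margin parameter are assumptions, and a simple margin-based ranking term stands in for the paper's class enhancement losses, which score mask proposals directly against class text embeddings without a 'background' embedding.

    # A minimal sketch, assuming PyTorch and frozen (e.g. CLIP-style) class text embeddings.
    import torch
    import torch.nn.functional as F

    def ranking_similarity_loss(proposal_embeds: torch.Tensor,
                                text_embeds: torch.Tensor,
                                target_class: torch.Tensor,
                                margin: float = 0.1) -> torch.Tensor:
        """Rank each proposal's similarity to its matched class text embedding
        above its similarity to every other class, with no 'background' class.

        proposal_embeds: (P, D) embeddings of P mask proposals
        text_embeds:     (C, D) embeddings of C class names
        target_class:    (P,)   index of the matched class per proposal
        """
        # Cosine similarity between every proposal and every class text embedding.
        sims = F.normalize(proposal_embeds, dim=-1) @ F.normalize(text_embeds, dim=-1).T  # (P, C)

        # Similarity to the matched class for each proposal.
        pos = sims.gather(1, target_class.unsqueeze(1))  # (P, 1)

        # Margin ranking: every other class should score at least `margin` below the match.
        mask = torch.ones_like(sims).scatter_(1, target_class.unsqueeze(1), 0.0)
        loss = F.relu(margin + sims - pos) * mask
        return loss.sum() / mask.sum()

    # Toy usage with hypothetical sizes: 4 proposals, 8 base classes, 512-dim embeddings.
    props = torch.randn(4, 512, requires_grad=True)
    texts = torch.randn(8, 512)                      # e.g. frozen text-encoder outputs
    labels = torch.tensor([0, 3, 3, 7])
    print(ranking_similarity_loss(props, texts, labels))

    A pseudo-label pipeline in the spirit of the abstract would, analogously, score unlabeled regions against novel-class text embeddings obtained from a pretrained vision-language model and keep the top-ranked class as the training target; the details above are illustrative only.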