Full text
Peer-reviewed Open access
  • Latent relation shared learning...
    Li, Jiaqi; Liao, Lejian; Jia, Meihuizi; Chen, Zhendong; Liu, Xin

    iScience, 08/2024, Volume: 27, Issue: 8
    Journal Article

    Magnetic resonance imaging (MRI), ultrasound (US), and contrast-enhanced ultrasound (CEUS) provide complementary image data about the uterus and are used in the preoperative assessment of endometrial cancer. In practice, not all patients have complete multi-modality medical images, owing to high cost or long examination periods. Most existing methods must perform data cleansing or discard samples with missing modalities, which degrades model performance. In this work, we propose an incomplete multi-modality image data fusion method based on latent relation sharing to overcome this limitation. The shared space contains a common latent feature representation and modality-specific latent feature representations learned from both complete and incomplete multi-modality data, jointly exploiting the consistent and complementary information among the multiple images. The experimental results show that our method outperforms current representative approaches in terms of classification accuracy, sensitivity, specificity, and area under the curve (AUC). Furthermore, our method performs well under varying rates of missing imaging modalities.

    Highlights
    • An incomplete multi-modality medical image classification method is introduced
    • Latent relation shared learning boosts sensitivity in incomplete multi-modality models
    • The fusion of MRI, US, and CEUS enhances diagnostic performance

    Keywords: Cancer; Image-guided intervention; Machine learning
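    To make the fusion idea in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name LatentSharedFusion and the parameters in_dims, latent_dim, and mask are all hypothetical. It shows one common way to combine a shared latent projection (consistent information) with modality-specific encoders (complementary information) while masking out missing modalities, so samples with incomplete imaging need not be discarded.

    ```python
    import torch
    import torch.nn as nn

    class LatentSharedFusion(nn.Module):
        """Hypothetical sketch: shared + modality-specific latent encoders
        with masking for missing modalities (e.g., MRI, US, CEUS features)."""

        def __init__(self, in_dims=(256, 256, 256), latent_dim=64, num_classes=2):
            super().__init__()
            # Modality-specific encoders: capture complementary information.
            self.specific = nn.ModuleList(
                [nn.Sequential(nn.Linear(d, latent_dim), nn.ReLU()) for d in in_dims]
            )
            # Shared encoders: project every modality into one common latent
            # space, capturing information consistent across modalities.
            self.shared = nn.ModuleList(
                [nn.Sequential(nn.Linear(d, latent_dim), nn.ReLU()) for d in in_dims]
            )
            self.classifier = nn.Linear(2 * latent_dim, num_classes)

        def forward(self, xs, mask):
            # xs: list of per-modality feature tensors, each (batch, d_m)
            # mask: (batch, num_modalities), 1.0 if the modality is present
            shared_codes, specific_codes = [], []
            for m, x in enumerate(xs):
                present = mask[:, m:m + 1]  # zeroes codes of missing modalities
                shared_codes.append(self.shared[m](x) * present)
                specific_codes.append(self.specific[m](x) * present)
            # Average shared codes over *available* modalities only.
            denom = mask.sum(dim=1, keepdim=True).clamp(min=1.0)
            z_shared = torch.stack(shared_codes).sum(dim=0) / denom
            # Sum the modality-specific codes (missing ones are already zero).
            z_specific = torch.stack(specific_codes).sum(dim=0)
            return self.classifier(torch.cat([z_shared, z_specific], dim=1))

    # Toy usage: three modalities, with the second missing for the first sample.
    model = LatentSharedFusion()
    xs = [torch.randn(2, 256) for _ in range(3)]
    mask = torch.tensor([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
    logits = model(xs, mask)  # shape: (2, 2)
    ```

    In this sketch a presence mask zeroes the latent codes of absent modalities and the shared codes are averaged over available modalities only, so a patient with, say, only MRI still yields a valid fused representation rather than being dropped from training.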