Full text available
Peer reviewed
  • Residual 3D convolutional neural network…
    Hernández, Leandro José Rodríguez; Domínguez, Humberto de Jesús Ochoa; Villegas, Osslan Osiris Vergara; Sánchez, Vianey Guadalupe Cruz; Azuela, Juan Humberto Sossa; González, Javier Polanco

    Pattern Recognition Letters, August 2023, Volume 172
    Journal Article

    Highlights
    • 3D processing of PET sinograms performs better than 2D processing.
    • Synthetic PET data are useful for training convolutional networks that will be tested on real data.
    • A shallow network processing 3D PET sinograms can achieve better results than a deep network processing 2D sinograms.
    • For 3D PET sinograms, a residual architecture reduces processing time and increases reconstruction quality.
    • A synthetic PET sinogram database was collected to train deep learning methods for preclinical PET studies.

    Positron emission tomography (PET) is widely used in nuclear medicine to diagnose cancer. PET images suffer from degradation because of the scanner's physical limitations, the reduced dose of the radiotracer, and the acquisition time. In this work, we propose a residual three-dimensional (3D) convolutional neural network (CNN) to enhance sinograms acquired from a small-animal PET scanner. The network comprises three convolutional layers built with 3D filters of sizes 9, 5, and 5, respectively. For training, we extracted 15,250 3D patches from low- and high-count sinograms to build the low- and high-resolution pairs. After training and prediction, the image was reconstructed from the enhanced sinogram using the ordered subset expectation maximization (OSEM) algorithm. The NEMA phantom data were obtained in a simulation environment; on these data, the proposed network improves the spillover ratio by up to 4.5% and the uniformity by 55% compared to the U-Net. The network was also tested on real data acquired from a mouse, where the reconstructed images and the maximum intensity projection profiles show that the proposed method yields visually sharper images. © 2023 Elsevier Ltd. All rights reserved.
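
    The abstract fixes only the layer count (three), the 3D kernel sizes (9, 5, 5), and the residual design. A minimal PyTorch sketch of such a network is shown below; the channel widths, padding, and activation choices are assumptions for illustration, not the authors' specification.

    # Hypothetical sketch of a three-layer residual 3D CNN for sinogram
    # enhancement. Kernel sizes (9, 5, 5) come from the abstract; channel
    # widths, padding, and activations are assumed, not from the paper.
    import torch
    import torch.nn as nn

    class ResidualSinogramCNN3D(nn.Module):
        def __init__(self, channels: int = 64):
            super().__init__()
            # Three 3D convolutional layers with kernel sizes 9, 5, 5;
            # "same"-style padding preserves the patch dimensions so the
            # residual addition below is shape-compatible.
            self.conv1 = nn.Conv3d(1, channels, kernel_size=9, padding=4)
            self.conv2 = nn.Conv3d(channels, channels // 2, kernel_size=5, padding=2)
            self.conv3 = nn.Conv3d(channels // 2, 1, kernel_size=5, padding=2)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: low-count sinogram patch, shape (N, 1, D, H, W).
            y = self.relu(self.conv1(x))
            y = self.relu(self.conv2(y))
            y = self.conv3(y)
            # Residual connection: the network predicts a correction that
            # is added back to the input, which the abstract credits with
            # faster processing and better reconstruction quality.
            return x + y

    # Example: enhance a batch of 3D sinogram patches (sizes illustrative).
    model = ResidualSinogramCNN3D()
    low_count = torch.randn(4, 1, 16, 64, 64)
    enhanced = model(low_count)
    print(enhanced.shape)  # torch.Size([4, 1, 16, 64, 64])

    In such a setup, the low-/high-count patch pairs described in the abstract would serve as inputs and regression targets, and the enhanced sinogram would then be passed to an OSEM reconstruction.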