Peer-reviewed Open access
  • SHREC 2020 Track: 3D Point ...
    Ku, Tao; Veltkamp, Remco C; Boom, Bas; Duque-Arias, David; Velasco-Forero, Santiago; Deschaud, Jean-Emmanuel; Goulette, Francois; Marcotegui, Beatriz; Ortega, Sebastián; Trujillo, Agustín; Pablo Suárez, José; Santana, José Miguel; Ramírez, Cristian; Akadas, Kiran; Gangisetty, Shankar

    Computers & Graphics, 12/2020, Volume: 93
    Journal Article

    Scene understanding of large-scale 3D point clouds of outdoor scenes is still a challenging task. Compared with simulated 3D point clouds, raw data from LiDAR scanners consist of enormous numbers of points returned from all possible reflective objects, and these points are usually non-uniformly distributed. It is therefore cost-effective to develop solutions that learn directly from raw large-scale 3D point clouds. In this track, we provide large-scale 3D point clouds of street scenes for the semantic segmentation task. The data set consists of 80 samples, with 60 for training and 20 for testing. Each sample contains over 2 million points, represents a street scene, and includes a number of objects. There are five meaningful classes: building, car, ground, pole and vegetation. The goal is to localize and segment semantic objects in these large-scale 3D point clouds. Four groups contributed results obtained with different methods. The results show that learning-based methods are the trend, and one of them achieves the best performance on both Overall Accuracy and mean Intersection over Union. Next to the learning-based methods, a combination of hand-crafted detectors is also reliable and ranks second among the compared algorithms.
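
    The two metrics named above can be sketched as follows. This is a hypothetical illustration, not the track's official evaluation code; it assumes per-point integer labels 0..4 for the five classes (building, car, ground, pole, vegetation).

    ```python
    def evaluate(gt, pred, num_classes=5):
        """Compute Overall Accuracy and mean IoU from per-point labels.

        gt, pred: equal-length sequences of integer class ids in [0, num_classes).
        """
        # Confusion counts: conf[g][p] = number of points with ground truth g predicted as p
        conf = [[0] * num_classes for _ in range(num_classes)]
        for g, p in zip(gt, pred):
            conf[g][p] += 1

        # Overall Accuracy: correctly labeled points over all points
        overall_accuracy = sum(conf[c][c] for c in range(num_classes)) / len(gt)

        # Per-class IoU = TP / (TP + FP + FN); classes absent from both
        # ground truth and prediction are skipped when averaging
        ious = []
        for c in range(num_classes):
            tp = conf[c][c]
            fn = sum(conf[c]) - tp                                 # missed points of class c
            fp = sum(conf[r][c] for r in range(num_classes)) - tp  # points wrongly labeled c
            denom = tp + fp + fn
            if denom:
                ious.append(tp / denom)
        mean_iou = sum(ious) / len(ious)

        return overall_accuracy, mean_iou
    ```

    For example, `evaluate([0, 0, 1, 2], [0, 1, 1, 2])` gives an Overall Accuracy of 0.75 (3 of 4 points correct) and a mean IoU of 2/3, averaging IoUs of 0.5, 0.5 and 1.0 over the three classes that occur.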