Road extraction in remote sensing data: A survey Chen, Ziyi; Deng, Liai; Luo, Yuhua ...
International journal of applied earth observation and geoinformation,
August 2022, Volume:
112
Journal Article
Peer-reviewed
Open access
•This review covers a wider perspective in terms of both 2D remote sensing images and 3D point clouds.•This review provides a detailed survey of the 2D and 3D remote sensing datasets used for road extraction.•This review presents a detailed analysis of the challenges and future trends of road extraction from remote sensing data.
Automated extraction of roads from remotely sensed data serves a variety of applications, ranging from digital twins for smart cities, intelligent transportation, and urban planning to autonomous driving and emergency management. Many studies have focused on advancing methods for automated road extraction from aerial and satellite optical images, synthetic aperture radar (SAR) images, and LiDAR point clouds. However, no comprehensive survey of this topic has appeared in the literature in the past ten years. This paper attempts to provide a comprehensive survey of road extraction methods that use 2D earth observation images and 3D LiDAR point clouds. In this review, we first present a tree structure that separates the literature into 2D and 3D branches; a further methodology-level classification is then given within each branch. For both 2D and 3D, we introduce and analyze the literature published in the last ten years. Beyond the methodologies, we also review the datasets commonly used. Finally, this paper explores existing challenges and future trends.
We propose a high-performance 3D feature extraction deep learning network based on point clouds and shifted voxels, named Point and Shifted Voxel MLP (PSVMLP). The main component of PSVMLP is a simple Multi-Layer Perceptron (MLP) structure. PSVMLP achieves effective extraction of multi-scale features from 3D data. Specifically, we combine point-cloud- and voxel-based feature extraction methods. In voxel representation learning, we propose a wide-range geometric feature extraction method based on axial shifting operations and a simple MLP structure. The axial shifting operations shift voxels in the depth, height, and width directions, capturing more geometric information. In point cloud representation learning, we use a simple MLP structure to extract local features, and we also extract global features by incorporating a transformer structure. By combining point cloud and voxel feature extraction, we obtain rich feature representations at different scales, enhancing the model's expressive power and generalization performance. Applying our designed model to basic geometric feature learning tasks, we achieve excellent results. Despite being built primarily on a simple MLP framework, our model demonstrates remarkable performance on both shape classification and shape part segmentation tasks. Our code is available at https://github.com/hitxraz/psvmlp.
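The axial shifting idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the channel grouping, shift size, and function names are our assumptions. Channel groups of a voxel feature grid are rolled along different spatial axes, so that a subsequent per-voxel MLP would mix features from neighbouring voxels in depth, height, and width.

```python
import numpy as np

def axial_shift(voxels, shift=1):
    """Shift groups of feature channels along the three spatial axes.

    voxels: array of shape (C, D, H, W). The channels are split into
    four groups: one shifted along depth, one along height, one along
    width, and one left in place (grouping is an assumption).
    """
    c = voxels.shape[0]
    g = c // 4
    out = voxels.copy()
    out[0 * g:1 * g] = np.roll(voxels[0 * g:1 * g], shift, axis=1)  # depth
    out[1 * g:2 * g] = np.roll(voxels[1 * g:2 * g], shift, axis=2)  # height
    out[2 * g:3 * g] = np.roll(voxels[2 * g:3 * g], shift, axis=3)  # width
    return out                       # last group stays unshifted

# Toy check: an 8-channel 4x4x4 feature grid keeps its shape.
feats = np.arange(8 * 4 * 4 * 4, dtype=float).reshape(8, 4, 4, 4)
shifted = axial_shift(feats)
print(shifted.shape)  # (8, 4, 4, 4)
```

After the shift, each voxel's channel vector contains features from three different neighbours plus its own, so even a channel-wise MLP sees a wider spatial context.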
•Novel voxel shift and simple MLP for geometric feature extraction.•Multi-scale feature extraction model, combining point cloud and voxel learning.•Model validated on public datasets with excellent results.
The extraction of ground points and breaklines is a crucial step in the generation of high-quality digital elevation models (DEMs) from airborne LiDAR point clouds. In this study, we propose a novel automated method for this task. To overcome the disadvantages of applying a single filtering method in areas with various types of terrain, the proposed method first classifies the points into a set of segments and a set of individual points, which are filtered by segment-based filtering and multi-scale morphological filtering, respectively. In the process of multi-scale morphological filtering, the proposed method removes amorphous objects from the set of individual points to decrease the effect of the maximum scale on the filtering result. The proposed method then extracts the breaklines from the ground points, which provide a good foundation for the generation of a high-quality DEM. Finally, the experimental results demonstrate that the proposed method extracts ground points robustly while preserving the breaklines.
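The general idea behind multi-scale morphological filtering can be sketched as follows. This is a generic illustration of the classical progressive technique on a rasterized minimum-elevation grid, not the authors' code; window sizes, thresholds, and names are assumptions. Cells whose elevation drops sharply after a grayscale opening are flagged as non-ground (buildings, vegetation), with the window grown over several scales.

```python
import numpy as np

def morphological_open(z, w):
    """Grayscale opening (erosion then dilation) of an elevation grid
    with a square window of half-width w."""
    def scan(a, op):
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = op(a[max(0, i - w):i + w + 1,
                                 max(0, j - w):j + w + 1])
        return out
    return scan(scan(z, np.min), np.max)

def progressive_filter(z, windows, thresholds):
    """Keep a cell as ground only if the elevation drop after opening
    stays below the threshold at every window scale."""
    ground = np.ones(z.shape, dtype=bool)
    surface = z.copy()
    for w, t in zip(windows, thresholds):
        opened = morphological_open(surface, w)
        ground &= (surface - opened) <= t
        surface = opened
    return ground

# Toy scene: flat ground with a 2x2 "building" of height 5 m.
z = np.zeros((8, 8))
z[3:5, 3:5] = 5.0
ground = progressive_filter(z, windows=[1, 2], thresholds=[0.5, 1.0])
print(ground.sum())  # 60 of 64 cells kept as ground
```

The small window already removes the compact building; larger windows exist to catch objects wider than the smallest structuring element, which is exactly why a single fixed scale struggles on mixed terrain.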
Deep Learning for 3D Point Clouds: A Survey Guo, Yulan; Wang, Hanyun; Hu, Qingyong ...
IEEE transactions on pattern analysis and machine intelligence,
1 December 2021, Volume:
43, Issue:
12
Journal Article
Peer-reviewed
Open access
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominant technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges of processing point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks: 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Surveying techniques such as terrestrial laser scanning have recently been used to measure surface changes via 3D point cloud (PC) comparison. Two types of approaches have been pursued: 3D tracking of homologous parts of the surface to compute a displacement field, and distance calculation between two point clouds when homologous parts cannot be defined. This study deals with the second approach, typical of natural surfaces altered by erosion, sedimentation or vegetation between surveys. Current comparison methods are based on a closest point distance or require at least one of the PCs to be meshed, with severe limitations when surfaces present roughness elements at all scales. To solve these issues, we introduce a new algorithm performing a direct comparison of point clouds in 3D. The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local surface roughness; (2) measurement of the mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing methods demonstrates the higher accuracy of our approach, as well as an easier workflow due to the absence of surface meshing or Digital Elevation Model (DEM) generation. Application of the method in a rapidly eroding, meandering bedrock river (Rangitikei River canyon) illustrates its ability to handle 3D differences in complex situations (flat and vertical surfaces in the same scene), to reduce uncertainty related to point cloud roughness by local averaging, and to generate 3D maps of uncertainty levels. We also demonstrate that for high-precision survey scanners, the total error budget on change detection is dominated by the point cloud registration error and the surface roughness. Combined with mm-range local georeferencing of the point clouds, levels of detection down to 6 mm (defined at 95% confidence) can be routinely attained in situ over ranges of 50 m.
We provide evidence for the self-affine behaviour of different surfaces. We show how this impacts the calculation of normal vectors and demonstrate the scaling behaviour of the level of change detection. The algorithm has been implemented in a freely available open source software package. It operates in complex 3D cases and can also be used as a simpler and more robust alternative to DEM differencing for the 2D cases.
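The two-step procedure described above, normal estimation followed by averaging the surface change along the normal within a cylinder, can be sketched in a simplified form. This is an illustration only; parameter names are assumptions, and the published algorithm treats scale selection, normal orientation, and the error budget far more carefully.

```python
import numpy as np

def local_normal(points, center, radius):
    """Step 1: estimate the surface normal at `center` by PCA of the
    neighbours within `radius` (smallest-eigenvalue eigenvector)."""
    nb = points[np.linalg.norm(points - center, axis=1) < radius]
    w, v = np.linalg.eigh(np.cov((nb - nb.mean(axis=0)).T))
    return v[:, 0]

def m3c2_like_distance(pc1, pc2, core, d=1.0, r=1.0):
    """Step 2: mean surface change at a core point, measured along the
    local normal inside a cylinder of radius d, with a 95% confidence
    interval derived from the local roughness of each cloud."""
    n = local_normal(pc1, core, r)
    def project(pc):
        rel = pc - core
        along = rel @ n                                   # signed offset
        radial = np.linalg.norm(rel - np.outer(along, n), axis=1)
        return along[radial < d]                          # inside cylinder
    a1, a2 = project(pc1), project(pc2)
    dist = a2.mean() - a1.mean()
    ci = 1.96 * np.sqrt(a1.var() / len(a1) + a2.var() / len(a2))
    return dist, ci

# Toy check: two parallel horizontal planes 0.1 m apart.
xy = np.stack(np.meshgrid(np.linspace(-2, 2, 9),
                          np.linspace(-2, 2, 9)), -1).reshape(-1, 2)
pc1 = np.column_stack([xy, np.zeros(len(xy))])
pc2 = np.column_stack([xy, np.full(len(xy), 0.1)])
dist, ci = m3c2_like_distance(pc1, pc2, core=np.zeros(3))
print(round(abs(dist), 3))  # 0.1
```

Because the change is averaged over all points in the cylinder, roughness-induced scatter shrinks with the number of samples, which is the mechanism behind the uncertainty reduction the abstract describes.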
•The definition of 3D keypoints is analyzed with deep learning.•Four kinds of 3D keypoint definitions are discussed on large-scale point clouds.•The MLP-based definition achieves the best performance on indoor and outdoor datasets.
The main solution for large-scale point cloud registration is to first obtain a set of matched 3D keypoint pairs and then accomplish the registration task based on these matched pairs. However, at present, many methods study the feature descriptors in the point cloud registration task, but few discuss the 3D keypoint detection issue. The commonly used 3D keypoint detection strategy is voxel-grid-based downsampling, whose detected 3D keypoints are usually very numerous and have no explicit geometric properties, which finally leads to a low inlier ratio. In this study, we rethink the 3D keypoint detection problem for large-scale point clouds with deep learning. Specifically, we discuss four kinds of 3D keypoint detection methods based on the joint keypoint detection and description learning framework D3Feat, and carry out extensive analyses on both the indoor large-scale point cloud dataset 3DMatch and the outdoor large-scale point cloud dataset KITTI Odometry. Experimental results demonstrate that the Multi-layer Perceptron (MLP) based method achieves the best inlier ratios under different numbers of extracted 3D keypoints on both the indoor and outdoor large-scale point clouds. Further, we test these four kinds of keypoint detection methods in the application of large-scale point cloud registration, and the MLP-based method also achieves state-of-the-art registration performance.
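The voxel-grid downsampling baseline criticized above is easy to reproduce, which also makes its weakness visible: it keeps one point per occupied voxel regardless of geometric saliency. A minimal NumPy sketch (centroid-per-voxel variant; names are ours):

```python
import numpy as np

def voxel_grid_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel.

    This selects points uniformly over space, with no notion of
    geometric saliency, which is why the resulting "keypoints" tend
    to be numerous and weakly repeatable across scans.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, points.shape[1]))
    counts = np.zeros(inv.max() + 1)
    np.add.at(sums, inv, points)       # accumulate per-voxel sums
    np.add.at(counts, inv, 1)          # and per-voxel point counts
    return sums / counts[:, None]

# Toy check: four points falling into two 1 m voxels -> two centroids.
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],
                [1.1, 0.0, 0.0],
                [1.3, 0.0, 0.0]])
centroids = voxel_grid_downsample(pts, voxel_size=1.0)
print(len(centroids))  # 2
```

A learned detector such as the ones analyzed in the paper replaces this uniform selection with a saliency score, so that fewer, more distinctive points survive.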
•Noticing that no recent literature exists to collect the growing knowledge concerning 3D object detection, we fill this gap by starting with several basic concepts, providing a glimpse of the evolution of 3D object detection, and presenting comprehensive comparisons on publicly available datasets, with pros and cons judiciously discussed.•Witnessing the absence of a universal consensus on a taxonomy for 3D object detection, we contribute to the maturity of the taxonomy, maintaining good continuity with existing efforts while adapting new branches to recent dynamics.•We present a close case study on fifteen models selected from the surveyed works, covering runtime analysis, error analysis, and robustness analysis. Based on our findings, we argue that 3D localization error is what mainly restricts detection performance.
Autonomous driving is regarded as one of the most promising remedies to shield human beings from severe crashes. To this end, 3D object detection serves as the core basis of the perception stack, especially for the sake of path planning, motion prediction, and collision avoidance. Taking a quick glance at the progress made so far, we attribute the challenges to visual appearance recovery in the absence of depth information from images, representation learning from partially occluded, unstructured point clouds, and semantic alignment over heterogeneous features from cross modalities. Despite existing efforts, 3D object detection for autonomous driving is still in its infancy. Recently, a large body of literature has investigated this 3D vision task. Nevertheless, few investigations have looked into collecting and structuring this growing knowledge. We therefore aim to fill this gap with a comprehensive survey, encompassing all the main concerns, including sensors, datasets, performance metrics, and recent state-of-the-art detection methods, together with their pros and cons. Furthermore, we provide quantitative comparisons with the state of the art. A case study on fifteen selected representative methods is presented, involving runtime analysis, error analysis, and robustness analysis. Finally, we provide concluding remarks after an in-depth analysis of the surveyed works and identify promising directions for future work.
•A deep learning approach is proposed to generate point cloud unit cells for 3D lattice structures with anticipated properties.•The point clouds, surfaces, and mechanical properties of the designed 3D unit cells show high similarity to the target structures.•The applications of the proposed approach are demonstrated in orthopedic implants, new hybrid unit cells, and functionally gradient structures.
Lattice structures have been a hot topic recently owing to their superior mechanical properties, which are significantly influenced by the unit cell structure. By leveraging the power of deep learning, inverse design can be conducted on the unit cell structure based on the mechanical properties of its lattice structure. Assisted by deep learning, this study introduces a novel data-driven approach to design three-dimensional (3D) unit cells for lattice structures with anticipated properties. The approach can be efficiently and accurately applied to various unit cell structures. An auto-encoder is trained to extract the geometric features from unit cell point clouds. The effective mechanical properties of the lattice structures are calculated by combining the homogenization method and the finite element method. Subsequently, a mapping relationship between mechanical properties and geometric features is established through a multi-layer perceptron neural network. The models are ultimately employed to design 3D unit cells given anticipated properties of lattice structures. The results show that the mechanical properties of the generated unit cells satisfy the anticipated values. The applications of the proposed method are demonstrated in orthopedic implants, new hybrid unit cells, and functionally gradient structures. Furthermore, the method can be extended to unit cell design across diverse domains.
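The inverse-design flow described in the abstract (target properties → latent geometric features via an MLP → decoded unit-cell point cloud via the auto-encoder's decoder) can be sketched with untrained placeholder networks. Every dimension, weight, and name below is an assumption made purely to show the data flow, not the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Forward pass of a small fully connected network with tanh
    hidden activations and a linear output layer."""
    for w, b in weights[:-1]:
        x = np.tanh(x @ w + b)
    w, b = weights[-1]
    return x @ w + b

def layers(sizes):
    """Random placeholder weights; in the paper these are trained."""
    return [(rng.normal(size=(a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

# Assumed dimensions: 6 target properties (e.g. homogenized stiffness
# entries), a 32-d latent geometric feature, a 256-point unit cell.
props_to_latent = layers([6, 64, 32])          # property -> feature MLP
latent_to_cloud = layers([32, 128, 3 * 256])   # auto-encoder decoder

target_props = rng.normal(size=(1, 6))
latent = mlp(target_props, props_to_latent)
cloud = mlp(latent, latent_to_cloud).reshape(256, 3)
print(cloud.shape)  # (256, 3)
```

The point of the design is that the expensive physics (homogenization + FEM) is only needed to build the training set; once the two networks are trained, generating a candidate unit cell for new target properties is two cheap forward passes.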