Recent advances in remote sensing technologies have provided the research community with unprecedented geospatial data characterized by high geometric, radiometric, spectral, and temporal resolution …
Three-dimensional building models are important for various applications, such as disaster management and urban planning. The development of laser scanning sensor technologies has resulted in many different approaches for efficient building model generation using LiDAR data. Despite these efforts, the generation of such models still lacks economical and reliable techniques that fully exploit the advantages of LiDAR data. Therefore, this research aims to develop a framework for fully automated building model generation by integrating data-driven and model-driven methods using LiDAR datasets.
The building model generation starts by employing LiDAR data for building detection and approximate boundary determination. The generated building boundaries are then integrated into a model-based processing strategy, because LiDAR-derived planes show irregular boundaries owing to the nature of LiDAR point acquisition. The research focuses on generating models for buildings with right-angled corners, which can be described as collections of rectangles, under the assumption that the majority of buildings in urban areas belong to this category. By applying the Minimum Bounding Rectangle (MBR) algorithm recursively, the LiDAR boundaries are decomposed into sets of rectangles for further processing. At the same time, the quality of the MBRs is examined to verify that the buildings from which the boundaries are generated indeed have right-angled corners. The parameters that define the model primitives are adjusted through a model-based boundary fitting procedure using the LiDAR boundaries. The level of detail in the final Digital Building Model depends on the number of recursions during the MBR processing, which in turn is determined by the LiDAR point density. The model-based boundary fitting improves the quality of the generated boundaries and, as the experimental results show, the achievable quality depends on the average LiDAR point spacing. This research thus develops an approach that not only automates building model generation but also maximizes model accuracy while utilizing only LiDAR data.
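The recursive MBR idea can be illustrated with a minimal sketch. The paper applies the MBR algorithm to LiDAR-derived boundary points; the version below is a simplified, axis-aligned variant that operates on a rasterized footprint mask (the function names and the fill-ratio threshold are illustrative assumptions, not the authors' implementation). A rectangle is accepted when the footprint fills it sufficiently; otherwise the mask is split where the row/column occupancy changes most abruptly, and the two parts are processed recursively.

```python
import numpy as np

def bounding_rect(mask):
    """Axis-aligned bounding rectangle (r0, r1, c0, c1) of the True cells."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

def decompose(mask, fill_thresh=0.95):
    """Recursively decompose a rectilinear footprint mask into rectangles."""
    if not mask.any():
        return []
    r0, r1, c0, c1 = bounding_rect(mask)
    sub = mask[r0:r1, c0:c1]
    if sub.sum() / sub.size >= fill_thresh:
        return [(r0, r1, c0, c1)]          # rectangle fits well: accept it
    # split where the per-row / per-column occupancy jumps the most
    dr = np.abs(np.diff(sub.mean(axis=1)))
    dc = np.abs(np.diff(sub.mean(axis=0)))
    if dr.size and (not dc.size or dr.max() >= dc.max()):
        s = int(np.argmax(dr)) + 1
        parts, offs = [sub[:s, :], sub[s:, :]], [(r0, c0), (r0 + s, c0)]
    else:
        s = int(np.argmax(dc)) + 1
        parts, offs = [sub[:, :s], sub[:, s:]], [(r0, c0), (r0, c0 + s)]
    rects = []
    for part, (ro, co) in zip(parts, offs):
        for pr0, pr1, pc0, pc1 in decompose(part, fill_thresh):
            rects.append((pr0 + ro, pr1 + ro, pc0 + co, pc1 + co))
    return rects
```

Decomposing an L-shaped footprint this way yields two rectangles; the recursion depth plays the role of the level-of-detail control described above.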
Road markings play a critical role in road traffic safety and are one of the most important elements for guiding autonomous vehicles (AVs). High-Definition (HD) maps with accurate road marking information are very useful for many applications, ranging from road maintenance and improved navigation to the prediction of upcoming road situations within AVs. This paper presents a deep learning-based framework for road marking extraction, classification, and completion from three-dimensional (3D) mobile laser scanning (MLS) point clouds. Compared with existing road marking extraction methods, which are mostly based on intensity thresholds, our method is less sensitive to data quality. We added a road marking completion step to further optimize the results. At the extraction stage, a modified U-net model was used to segment road marking pixels, overcoming intensity variation, low contrast, and other issues. At the classification stage, a hierarchical classification method integrating multi-scale clustering with Convolutional Neural Networks (CNNs) was developed to classify road markings whose types differ considerably. At the completion stage, a Generative Adversarial Network (GAN)-based method was developed to complete small road markings first, followed by completing broken lane lines and adding missing markings using a context-based method. In addition, we built a point cloud road marking dataset to train the deep network model and evaluate our method. The dataset contains urban road and highway MLS data as well as underground parking lot data acquired by our own assembled backpack laser scanning system. Our experimental results, obtained using point clouds of different scenes, demonstrate that our method is very promising for road marking extraction, classification, and completion.
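For context, the intensity-threshold baseline that such deep-learning pipelines improve upon can be sketched in a few lines. The snippet below applies Otsu's method to LiDAR intensity values to flag high-reflectance road-marking candidates; the function names are illustrative, and this is the simple baseline rather than the U-net/CNN/GAN pipeline described above.

```python
import numpy as np

def otsu_threshold(intensity, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(intensity, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # probability of the low class
    mu = np.cumsum(p * centers)        # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)        # between-class variance per cut
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def extract_markings(intensity):
    """Flag points above the Otsu threshold as road-marking candidates."""
    return intensity > otsu_threshold(intensity)
```

This baseline is exactly what breaks down under intensity variation and low contrast, which motivates the learned segmentation stage.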
Uncrewed aerial vehicles (UAVs) carrying sensors, such as light detection and ranging (LiDAR) and multiband cameras georeferenced by an onboard global navigation satellite system/inertial navigation system (GNSS/INS), have become a popular means to quickly acquire near-proximal agricultural remote sensing data. These platforms have bridged the gap between high-altitude airborne and ground-based measurements. UAV data acquisitions also allow for surveying remote sites that are logistically difficult to access from the ground. That said, deriving well-georeferenced mapping products from these mobile mapping systems is contingent on accurate determination of the platform trajectory along with the intersensor positional and rotational relationships, that is, the mounting parameters of the various sensors with respect to the GNSS/INS unit. Conventional techniques for estimating LiDAR mounting parameters (also referred to as LiDAR system calibration) require carefully planned trajectory and target configurations. Such techniques are time-consuming and, in certain cases, not feasible to accomplish. In this article, an in-situ system calibration and trajectory enhancement strategy for UAV LiDAR is proposed. The strategy exploits the planting geometry of mechanized agricultural fields through an automated procedure that extracts and matches features and uses them to enhance the quality of LiDAR-derived point clouds. The proposed approach is qualitatively and quantitatively evaluated using calibration datasets as well as separately acquired validation datasets to demonstrate the performance of the developed procedure. Quantitatively, the accuracy of the resulting UAV point clouds after system calibration and the accompanying trajectory enhancement improved from as much as 43 cm to 4 cm.
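The mounting parameters mentioned above enter the standard LiDAR georeferencing equation: a LiDAR-frame point is rotated by the boresight matrix, offset by the lever arm, and then transformed by the GNSS/INS-derived body pose. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def georeference(p_lidar, R_boresight, lever_arm, R_body, r_body):
    """Transform a LiDAR-frame point into the mapping frame:

        X_map = r_body + R_body @ (R_boresight @ p_lidar + lever_arm)

    R_boresight and lever_arm are the mounting parameters estimated
    during system calibration; R_body and r_body come from the
    GNSS/INS trajectory at the pulse firing time."""
    return r_body + R_body @ (R_boresight @ p_lidar + lever_arm)
```

Errors in the mounting parameters or trajectory propagate directly into discrepancies between point clouds from different flight lines, which is precisely what the in-situ calibration and trajectory enhancement minimize.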
Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, these approaches are not always able to produce reliable/complete products because of repetitive image patterns. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper provides two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) together with system calibration parameters. The main difference between the proposed strategies is that the first one, denoted as partially GNSS/INS-assisted SfM, implements the four stages of an automated triangulation procedure, namely, image matching, relative orientation parameters (ROPs) estimation, exterior orientation parameters (EOPs) recovery, and bundle adjustment (BA). The second strategy, denoted as fully GNSS/INS-assisted SfM, removes the EOPs estimation step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for ROP refinement. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that can also refine the system calibration parameters. Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
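The role of the GNSS/INS trajectory in both strategies can be illustrated with the collinearity equations: given a camera position and rotation derived from the trajectory and system calibration, any candidate ground point predicts an image location, so the matcher only has to search a small window instead of the whole overlapping image. A sketch under one common photogrammetric sign convention (conventions vary between implementations):

```python
import numpy as np

def collinearity_project(X, X0, R, f):
    """Project ground point X into the image plane via the collinearity
    equations. X0 is the perspective center (GNSS/INS-derived), R rotates
    the mapping frame into the image frame, f is the focal length."""
    d = R @ (X - X0)                       # point in the camera frame
    return np.array([-f * d[0] / d[2],     # image x
                     -f * d[1] / d[2]])    # image y
```

Restricting feature matching to a window around the predicted location is what suppresses the wrong correspondences that repetitive crop-row texture would otherwise produce.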
Movement is fundamental to human and animal life, emerging through the interaction of complex neural, muscular, and skeletal systems. The study of movement draws from and contributes to diverse fields, including biology, neuroscience, mechanics, and robotics. OpenSim unites methods from these fields to create fast and accurate simulations of movement, enabling two fundamental tasks. First, the software can calculate variables that are difficult to measure experimentally, such as the forces generated by muscles and the stretch and recoil of tendons during movement. Second, OpenSim can predict novel movements from models of motor control, such as kinematic adaptations of human gait during loaded or inclined walking. Changes in musculoskeletal dynamics following surgery or due to human-device interaction can also be simulated; such simulations have played a vital role in several applications, including the design of implantable mechanical devices to improve human grasping in individuals with paralysis. OpenSim is an extensible and user-friendly software package built on decades of knowledge about computational modeling and simulation of biomechanical systems. OpenSim's design enables computational scientists to create new state-of-the-art software tools and empowers others to use these tools in research and clinical applications. OpenSim supports a large and growing community of biomechanics and rehabilitation researchers, facilitating the exchange of models and simulations for reproducing and extending discoveries. Examples, tutorials, documentation, and an active user forum support this community. The OpenSim software is covered by the Apache License 2.0, which permits its use for any purpose, including both nonprofit and commercial applications. The source code is freely and anonymously accessible on GitHub, where the community is welcome to make contributions. Platform-specific installers of OpenSim include a GUI and are available on simtk.org.
Building occlusions usually decrease the accuracy of boundary regularization. Thus, it is essential that modeling methods address this problem, aiming to minimize its effects. In this context, we propose a weighted iterative changeable degree spline (WICDS) approach. The idea is to apply a weight function to the initial building boundary points, assigning a lower weight to the points in the occlusion region. As a contribution, the proposed method minimizes the errors caused by occlusions, resulting in more accurate contour modeling. The experiments are performed using both simulated and real data. In general, the results indicate the potential of the WICDS approach to model building boundaries with occlusions, including curved boundary segments. In terms of F-score and PoLiS, the proposed approach achieves values around 99% and 0.19 m, respectively. Compared with the previous iterative changeable degree spline (ICDS), the WICDS yields improvements of around 6.5% in completeness, 4% in F-score, and 0.24 m in the PoLiS metric.
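The weighting idea can be sketched with an off-the-shelf weighted smoothing spline. The changeable-degree spline formulation itself is the paper's contribution; the snippet below only illustrates how down-weighting points in an occlusion region pulls a fitted contour back toward the true boundary (the synthetic boundary, the weights, and the smoothing factor are illustrative assumptions).

```python
import numpy as np
from scipy.interpolate import splrep, splev

# one coordinate of a boundary, parameterized by normalized arc length t
t = np.linspace(0.0, 1.0, 50)
y_true = np.sin(2 * np.pi * t)
occluded = (t > 0.4) & (t < 0.6)          # span distorted by an occlusion
y_obs = y_true + np.where(occluded, 0.5, 0.0)

w = np.where(occluded, 0.05, 1.0)         # low weight inside the occlusion
tck = splrep(t, y_obs, w=w, s=2.0)        # weighted smoothing spline
y_fit = splev(t, tck)
```

Because the occluded observations barely contribute to the weighted residual, the fitted curve in that span is governed by the well-observed neighbors and the spline's smoothness, recovering a contour much closer to the true boundary than the raw observations.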
This paper develops and validates a new fully automated procedure for shoreline delineation from high-resolution multispectral satellite images. The model is based on a new water-land index, the Direct Difference Water Index (DDWI). A new technique based on the buffer overlay method is also presented to determine shoreline changes from different satellite images and obtain a time series of shoreline positions. The shoreline detection model was applied to imagery from multiple satellites and validated to have sub-pixel accuracy using beach survey data collected from the Lake Michigan (USA) shoreline with a novel backpack-based LiDAR system. The model was also applied to 132 satellite images of a Lake Michigan beach over a three-year period and detected the shoreline accurately, with a >99% success rate. The model outperformed other existing shoreline detection algorithms based on different water indices and clustering techniques. The resulting shoreline position time series is the first satellite-image-extracted dataset of its kind in terms of its high spatial and temporal resolution, and it paves the way to obtaining other high-temporal-resolution datasets to refine models of beaches worldwide.
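A direct-difference index is straightforward to compute from calibrated band rasters. The sketch below assumes DDWI is an unnormalized difference of the green and near-infrared bands; the exact band pair and threshold follow the paper's definition, and the ones used here are illustrative only.

```python
import numpy as np

def ddwi(green, nir):
    # Direct (unnormalized) band difference. Water reflects relatively
    # strongly in green and absorbs NIR, so water scores high under this
    # assumed form; the paper defines the actual band combination.
    return green.astype(float) - nir.astype(float)

def water_mask(green, nir, threshold=0.0):
    """Binary water mask; the shoreline is the water/land boundary."""
    return ddwi(green, nir) > threshold
```

Once a per-pixel water mask exists, the shoreline can be traced as the boundary contour of the mask, and successive masks can be compared to build the change time series.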
LiDAR-based mobile mapping systems (MMS) are rapidly gaining popularity for a multitude of applications owing to their ability to provide complete and accurate 3D point clouds for almost any scene of interest. However, an accurate calibration technique for such systems is needed in order to unleash their full potential. In this paper, we propose a fully automated profile-based strategy for the calibration of LiDAR-based MMS. The proposed technique is validated by comparing its accuracy against the expected point positioning accuracy for the point cloud based on the specifications of the sensors used. The proposed strategy reduces the misalignment between different tracks from approximately 2 to 3 m before calibration down to less than 2 cm after calibration for airborne as well as terrestrial mobile LiDAR mapping systems. In other words, the proposed calibration strategy can converge to correct estimates of the mounting parameters even when the initial estimates are significantly different from the true values. Furthermore, the results from the proposed strategy are verified by comparing them to those from an existing manually assisted feature-based calibration strategy. The major contribution of the proposed strategy is its ability to calibrate airborne and wheel-based mobile systems without any requirement for specially designed targets or features in the surrounding environment. The above claims are validated using experiments conducted on three different MMS (two airborne and one terrestrial), each with one or more LiDAR units.
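The reported track-to-track misalignment can be quantified with a simple proxy: the RMS nearest-neighbor distance between overlapping strips, computed before and after calibration. The metric choice below is illustrative (the paper's own evaluation is profile-based), but it is a common way to summarize how well two strips of the same scene agree.

```python
import numpy as np
from scipy.spatial import cKDTree

def strip_discrepancy(cloud_a, cloud_b):
    """RMS distance from each point in strip A to its nearest neighbor
    in strip B - a simple proxy for inter-track misalignment.
    Both inputs are (N, 3) arrays of XYZ coordinates."""
    dists, _ = cKDTree(cloud_b).query(cloud_a)
    return float(np.sqrt(np.mean(dists ** 2)))
```

Shrinking this number from the meter level to the centimeter level is what convergence of the mounting-parameter estimates looks like in the point cloud itself.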
Image spectral and Light Detection and Ranging (LiDAR) positional information can be related through the orthophoto generation process. Orthophotos have a uniform scale and represent all objects in their correct planimetric locations. However, orthophotos generated using conventional methods suffer from an artifact known as the double-mapping effect, which occurs in areas occluded by tall objects. The double-mapping problem can be resolved through the commonly known true orthophoto generation procedure, in which an occlusion detection process is incorporated. This paper presents a review of occlusion detection methods, from which three techniques are compared and analyzed using experimental results. The paper also describes a framework for true orthophoto production based on an angle-based occlusion detection method. To improve the performance of the angle-based technique, two modifications to this method are introduced. These modifications, which aim at resolving false visibilities reported within the angle-based occlusion detection process, are referred to as occlusion extension and radial section overlap. A weighted averaging approach is also proposed to mitigate the seamline effect and spectral dissimilarity that may appear in true orthophoto mosaics. Moreover, true orthophotos generated from high-resolution aerial images and high-density LiDAR data using the updated version of the angle-based methodology are illustrated for two urban study areas. To investigate the potential of image matching techniques in producing true orthophotos and point clouds, a comparison between the LiDAR-based and image-matching-based true orthophotos and digital surface models (DSMs) for an urban study area is also presented. Among the investigated occlusion detection methods, the angle-based technique demonstrated the best performance in terms of output quality and running time. The LiDAR-based true orthophotos and DSMs showed higher quality than their image-matching-based counterparts, which contain artifacts/noise along building edges.
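The angle-based occlusion detection that this framework builds on can be sketched for a single radial DSM profile: sweeping outward from the nadir of the perspective center, a cell is visible only if its off-nadir angle exceeds the running maximum; otherwise something taller and closer to the sensor blocks it. The array layout and parameter names below are illustrative, and the occlusion-extension and radial-section-overlap modifications are not included.

```python
import numpy as np

def visible_profile(heights, spacing, pc_height):
    """Angle-based visibility along one radial DSM profile.
    heights[0] is the cell at the nadir of the perspective center;
    pc_height is the flying height above the DSM datum."""
    visible = np.zeros(heights.size, dtype=bool)
    visible[0] = True                  # the nadir cell is always visible
    max_angle = -np.inf
    for i in range(1, heights.size):
        dist = i * spacing
        # off-nadir angle to the cell; occluded cells fall below the
        # running maximum set by taller cells closer to the nadir
        angle = np.arctan2(dist, pc_height - heights[i])
        if angle > max_angle:
            visible[i] = True
            max_angle = angle
    return visible
```

Repeating this sweep over all radial directions yields the visibility map used to suppress double-mapped pixels; the false visibilities the paper's two modifications target arise from the discrete radial sectioning of this process.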