•Maize and soybean heights were successfully estimated using UAV-LiDAR data.•Method based on LiDAR variables yielded higher accuracy than CHM-based method.•Maize height prediction model outperformed soybean height prediction model.•Prediction results of crop height were poor when point density was below 1 point/m².
Crop height is a key structural parameter for modelling crop growth, health status, yield forecasting and biomass estimation. Unmanned aerial vehicle (UAV) LiDAR systems can quickly and precisely acquire vegetation structure information at a low cost, and UAV LiDAR data are increasingly used in vegetation parameter estimation. In this study, we estimated maize and soybean heights using two methods: one based on the LiDAR-derived canopy height model (CHM) and one based on LiDAR variables. The results show that UAV LiDAR data can successfully estimate maize and soybean heights. We found that the method based on LiDAR variables produced more accurate estimates than the CHM-based method. The combined maize-soybean estimation model had better prediction performance than the crop-specific maize and soybean models. Moreover, the soybean height estimation models derived from both methods yielded the lowest prediction precision. We studied the influence of LiDAR point density on crop height estimates by progressively thinning the point cloud (0.25–420 points/m²). When point density was less than 1 point/m², the estimation precision for the crop-specific maize and soybean models dropped rapidly as point density decreased. However, point density had no significant influence on estimation precision when it was greater than or equal to 1 point/m². Moreover, the original point density did not yield the highest estimation precision in our study. Therefore, high LiDAR point density may not be required for estimating vegetation parameters, and a good balance between point density and data acquisition cost should be sought.
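For illustration, a minimal sketch of the two height-estimation strategies compared above is given below. It assumes a plot's LiDAR returns are already normalized to height above ground and stored as an (N, 3) array of x, y, z in metres; the grid cell size, percentile set, thinning routine and regression model are placeholder assumptions, not the study's actual implementation.

```python
# Minimal sketch (not the study's code) of CHM-based vs. LiDAR-variable height estimation.
import numpy as np
from sklearn.linear_model import LinearRegression

def chm_plot_height(pts, cell=0.5):
    """CHM-based estimate: rasterize the maximum return height per cell, return the CHM maximum."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    chm = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
    np.maximum.at(chm, (ix, iy), z)          # per-cell maximum return height
    return chm[np.isfinite(chm)].max()

def lidar_variables(pts):
    """LiDAR-variable approach: height percentiles used as regression predictors."""
    return [np.percentile(pts[:, 2], p) for p in (50, 75, 90, 95, 99)]

def thin(pts, keep_fraction, rng=np.random.default_rng(0)):
    """Random decimation used to emulate lower point densities."""
    return pts[rng.random(len(pts)) < keep_fraction]

# Hypothetical usage: regress field-measured heights on per-plot LiDAR variables.
# X = np.array([lidar_variables(p) for p in plot_point_clouds])
# model = LinearRegression().fit(X, measured_heights)
```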
Fractal dimensions of trees provide key ecological information regarding tree and forest stand structure. Although LiDAR scanning has the potential to reduce the intensive labor required to estimate fractal dimensions with manual measurements, neither an accuracy assessment against true dimensions nor a guide for accurately measuring the dimensions with LiDAR scanners has been presented. This study examined the accuracy of fractal dimensions estimated using terrestrial LiDAR scanning (TLS) and unmanned aerial LiDAR scanning (ULS) through computer simulations and developed an approach to reduce the estimation bias. The true fractal dimensions of digitized trees of red pine (Pinus densiflora), ubame oak (Quercus phillyreoides), and giant timber bamboo (Phyllostachys bambusoides) in 18 site scenarios were calculated by applying the box-counting method to the tree object data. The true fractal dimensions were compared with the dimensions estimated in four TLS and ULS scenarios. Furthermore, a method to reduce the estimation bias was developed by introducing sequential point decimation and modeling the decay of the acquisition rate, thereby extrapolating the number of voxels before decay. Simple box-counting on highly dense and precise TLS data collected from four positions could retrieve the true fractal dimensions for all 18 site scenarios. However, sparse and less accurate TLS underestimated fractal dimensions in 8 of 18 cases, and ULS overestimated the dimensions in 29 of 36 cases. The change in the relationship between the number of voxels and mesh size on a log-log plot under sequential decimation can be used to judge whether a given TLS dataset will yield accurate fractal dimensions from simple box-counting. The developed method improved the estimates of fractal dimensions in 30 of the 37 cases where simple box-counting on the TLS and ULS data gave biased estimates. In contrast, the estimation bias increased in 27 of the 35 cases where simple box-counting on the TLS and ULS data successfully retrieved the true fractal dimensions.
•Estimation accuracy of fractal dimensions of trees with TLS and ULS was evaluated.•Simple box-counting caused biased estimation in 37 of 72 cases.•Dense and precise TLS provided accurate estimation even with simple box-counting.•Data check with sequential decimation of LiDAR data helps avoid estimation bias.•Developed method reduced bias in most cases where simple box-counting caused bias.
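As a point of reference, the core box-counting step can be sketched as follows. This is a generic illustration applied to a single tree's point cloud (coordinates in metres), not the study's implementation; it omits the sequential-decimation bias correction, and the box sizes are placeholders.

```python
# Generic box-counting sketch: count occupied cubic boxes at several box sizes
# and estimate the fractal dimension as the slope of log(count) vs. log(1/size).
import numpy as np

def box_counting_dimension(points, sizes=(1.0, 0.5, 0.25, 0.125)):
    points = np.asarray(points, dtype=float)
    origin = points.min(axis=0)
    counts = []
    for s in sizes:
        # Assign each point to a cubic box of edge length s and count occupied boxes.
        idx = np.floor((points - origin) / s).astype(np.int64)
        counts.append(len(np.unique(idx, axis=0)))
    # The box-counting dimension is the slope of log(count) versus log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```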
Forest snow interception can account for large differences in snow storage between open and forested areas. Interception can also lead to significant variations in sublimation, with estimates varying from 5 to 60% of total snowfall. Most current interception models use canopy closure and LAI to partition interception from snowfall and describe interception efficiency as decreasing exponentially with increasing precipitation. However, as demonstrated here, these models can show specific deficiencies within heterogeneous canopies. Seven field areas were equipped with 1932 surveyed points within various canopy density regimes in three elevation bands surrounding Davos, Switzerland. Snow interception measurements were taken from 2012 to 2014 (∼9000 samples) and compared with measurements at two open sites. The measured data indicated the presence of snow bridging: interception efficiency increased with precipitation until a maximum was reached, and decreased as precipitation increased beyond this maximum. Standard and novel canopy parameters were developed using aerial LiDAR data, including estimates of LAI, canopy closure, distance to canopy, gap fraction, and various tree size parameters. These canopy metrics and the underlying efficiency distribution were then integrated to formulate a conceptual model based upon the snow interception measurements. This model gave a ∼27% increase in r2 (from 0.39 to 0.66) and a ∼40% reduction in RMSE (from 5.19 to 3.39) for both calibration and validation data sets when compared to previous models at the point scale. When upscaled to larger grid sizes, the model demonstrated further increases in performance.
Key Points:
Showed a 27% increase in r2 and 40% RMSE reduction compared to prior models
Canopy parameters that represent large‐scale features explained most variations
Nine thousand interception measurements indicated a sigmoidally shaped interception efficiency
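To make the contrast concrete, the sketch below compares the classical exponential interception form with a sigmoidal (Hill-type) interception curve: with a sigmoidal curve, the efficiency I/P first rises to a maximum and then falls, matching the bridging behaviour described above. The functional forms and parameter values are illustrative assumptions only, not the paper's calibrated model, and the sketch omits the LiDAR canopy metrics.

```python
# Illustrative sketch only: classical exponential vs. sigmoidal interception.
import numpy as np

P = np.linspace(0.1, 40.0, 400)     # per-event snowfall [mm]
I_max = 8.0                         # hypothetical canopy storage capacity [mm]

# Classical form: interception efficiency I/P decreases monotonically with P.
I_exp = I_max * (1.0 - np.exp(-P / I_max))

# Sigmoidal (Hill-type) form: slow start while snow bridges form, then saturation.
P50, n = 6.0, 2.0                   # hypothetical midpoint and steepness
I_sig = I_max * P**n / (P50**n + P**n)

eff_exp = I_exp / P                 # monotonically decreasing
eff_sig = I_sig / P                 # rises to a peak, then decreases
print("peak sigmoidal efficiency at P ≈ %.1f mm" % P[np.argmax(eff_sig)])
```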
Visible light positioning (VLP) is a promising technology since it can provide high accuracy indoor localization based on the existing lighting infrastructure. However, existing approaches often ...require dense LED distributions and persistent line-of-sight (LOS) between transmitter and receiver. What's more, sensors are imperfect, and their measurements are prone to errors. Through multi sensors fusion, we can compensate the deficiencies of stand-alone sensors and provide more reliable pose estimations. In this work, we propose a loosely-coupled multi-sensor fusion method based on VLP and Simultaneous Localization and Mapping (SLAM), using light detection and ranging (LiDAR), odometry, and rolling shutter camera. Our multi-sensor localizer can provide accurate and robust robot localization and navigation in LED shortage/outage situations. The experimental results show that our proposed scheme can provide an average accuracy of 2.5 cm with around 42 ms average positioning latency.
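As a rough illustration of the loosely-coupled idea (not the authors' actual estimator), the sketch below fuses a 2-D VLP position fix with a SLAM/odometry estimate by covariance-weighted averaging and simply keeps the SLAM estimate when no LED is visible; all numbers are placeholders.

```python
# Loosely-coupled fusion sketch: covariance-weighted combination of two position
# estimates, with a SLAM-only fallback during LED shortage/outage.
import numpy as np

def fuse(p_slam, P_slam, p_vlp=None, P_vlp=None):
    """Return a fused 2-D position and covariance."""
    if p_vlp is None:                      # LED shortage/outage: SLAM only
        return p_slam, P_slam
    W = np.linalg.inv(np.linalg.inv(P_slam) + np.linalg.inv(P_vlp))
    p = W @ (np.linalg.inv(P_slam) @ p_slam + np.linalg.inv(P_vlp) @ p_vlp)
    return p, W

p_slam = np.array([1.20, 0.80]); P_slam = np.diag([0.05, 0.05])   # metres^2
p_vlp  = np.array([1.23, 0.78]); P_vlp  = np.diag([0.01, 0.01])   # VLP is sharper
print(fuse(p_slam, P_slam, p_vlp, P_vlp))
```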
Reasonable fusion of multimodal data can increase the accuracy of remote sensing classification. In this article, an effective morphological convolution and attention calibration network is proposed for the joint classification of hyperspectral image (HSI) and light detection and ranging (LiDAR) data. First, we devise a morphological convolution block, which combines the morphological dilation and erosion operations with convolution to better capture features from HSI and LiDAR. Next, we design a dual-attention module that uses self-attention to calibrate features and cross-attention to combine complementary multisource information. Finally, considering semantic inconsistency and scale differences among the features, an adaptive feature fusion module is introduced to dynamically fuse multimodal features. To verify the effectiveness of the proposed network, we conduct experiments on three common datasets and one self-made dataset. The results show that our network performs better than state-of-the-art models.
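A rough PyTorch sketch of one possible "morphological convolution block" is shown below, assuming grey-scale dilation and erosion are approximated by max-pooling and min-pooling (min-pool = -maxpool(-x)) followed by an ordinary convolution. This illustrates the idea only and is not the authors' exact architecture; the channel counts and patch sizes are placeholders.

```python
# Sketch of a dilation/erosion + convolution block for HSI and LiDAR patches.
import torch
import torch.nn as nn

class MorphConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.conv = nn.Conv2d(2 * in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        dilated = self.pool(x)              # grey-scale dilation
        eroded = -self.pool(-x)             # grey-scale erosion
        out = torch.cat([dilated, eroded], dim=1)
        return self.act(self.conv(out))

# Example: apply the block to an HSI patch (e.g. 30 bands) and a LiDAR patch
# (1 band) before any later attention/fusion stages.
hsi = torch.randn(4, 30, 11, 11)
lidar = torch.randn(4, 1, 11, 11)
feat_hsi = MorphConvBlock(30, 64)(hsi)
feat_lidar = MorphConvBlock(1, 64)(lidar)
```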
Shallow coastal areas are among the most inhabited areas and are valuable for biodiversity, recreation and the economy. Due to climate change and sea level rise, sustainable management of coastal areas involves extensive exploration, monitoring, and protection. Current high-resolution remote sensing methods for monitoring these areas include bathymetric LiDAR. This study therefore presents a novel methodological approach to assess the suitability of Airborne LiDAR Bathymetry for automatic classification and mapping of the seafloor. Nine classes of geomorphological bedforms and three classes of anthropogenic structures were identified. They were automatically mapped by Geographic Object-Based Image Analysis and supervised machine learning classifiers. The developed method was applied to six study sites and a 48 km submerged coastal zone in the Southern Baltic, achieving an overall accuracy of up to 94%. This study shows that calculating the Multiresolution Index of Ridge Top Flatness (MRRTF, a secondary terrain feature) can quickly and automatically identify sandbar crests and ridge tops. The methodological approach developed in this study can help evaluate and protect other shallow coastal environments and coastal protection structures.
•Nine types of bedforms and three types of anthropogenic structures were identified.•Automatic mapping of geomorphology based on airborne lidar bathymetry was developed.•Results of automatic classification were more precise than manual classification.•Bar crests in the shore zone can be automatically derived based on the MRRTF variable.•Random Forest performs very efficiently on airborne lidar datasets.
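For orientation, the supervised step of such a workflow can be sketched as follows: each image object is described by a feature vector (for example mean depth, slope, rugosity, MRRTF) and assigned to a bedform or structure class with a Random Forest. The features, labels and data below are placeholders, not the study's GEOBIA pipeline.

```python
# Minimal Random Forest classification sketch for per-object seafloor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # per-object features (placeholder values)
y = rng.integers(0, 12, size=500)      # 9 bedform + 3 anthropogenic classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```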
Complex terrains in coastal zones and shallow water areas around islands and reefs can be quickly surveyed with airborne LiDAR bathymetry (ALB) technology. Due to equipment placement deviations and measurement uncertainty, deviations can occur in the overlapping areas of adjacent strips, making point cloud registration particularly necessary. To address areas with few seafloor structures and to overcome the challenge of extracting features in waters with little terrain variation, a registration method for ALB seafloor point clouds based on the adaptive matching of corresponding points is proposed. First, the normal vector zenith angle, curvature change and omnidirectional variance of the seafloor points are calculated. Then, the corresponding points are adaptively matched according to terrain feature similarity and distance constraints. Finally, the RANSAC and ICP algorithms are used for coarse and fine registration, respectively. The experimental results show that the method provides high accuracy and a uniform distribution of corresponding points. The root mean square error (RMSE) values when the registration method is applied to a flat area and a coral reef area are 0.102 m and 0.041 m, respectively. Compared with the ICP algorithm, the accuracy of the proposed method is improved by 0.114 m and 0.227 m, respectively, and compared with the NDT algorithm, by 0.316 m and 0.452 m; hence, the proposed method provides an effective solution for the registration of ALB data.
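A rough sketch of the coarse-to-fine stage described above is given below using Open3D. It assumes `corres_idx` already holds the adaptively matched (source index, target index) pairs from the feature-similarity step, which this sketch does not reproduce; the distance thresholds are placeholders and this is not the authors' code.

```python
# Coarse (RANSAC on correspondences) + fine (ICP) registration of two ALB strips.
import numpy as np
import open3d as o3d

def register_strips(src_pts, tgt_pts, corres_idx, voxel=0.5):
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.asarray(src_pts))
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(np.asarray(tgt_pts))

    # Coarse registration: RANSAC over the pre-matched corresponding points.
    corres = o3d.utility.Vector2iVector(np.asarray(corres_idx, dtype=np.int32))
    coarse = o3d.pipelines.registration.registration_ransac_based_on_correspondence(
        src, tgt, corres, 2.0 * voxel)

    # Fine registration: point-to-point ICP initialized with the coarse result.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```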
Current practices in traffic sign monitoring rely heavily on manual inspections, a method that is both time-consuming and prone to human error. This leads to inefficiencies in the management and maintenance of these critical roadside assets. The objective of this work is to overcome these limitations by proposing a method for automated change detection in traffic signs using low-density LiDAR data. The proposed solution integrates noise elimination, point cloud restructuring, and cross-scan KD-tree generation, followed by the application of unsupervised machine learning techniques for change identification. The effectiveness of this method was verified by testing across three different highways with varying point cloud resolutions. For robust testing, an algorithm was also designed to simulate a broad range of damage scenarios in traffic signs of different types, sizes, and placements. Testing in different scenarios along almost 15 km of road yielded accuracy and F1-score metrics ranging from 92% to 100%. Moreover, the algorithm was highly efficient, with an average runtime of just 115 s per km of fully automated, unattended processing. The change detection potential of the proposed algorithm extends beyond traffic signs, as it could be adapted to many highway elements, enhancing the efficiency of transportation asset management and highway maintenance programmes. The findings indicate that this approach not only fills a significant gap in current traffic sign monitoring and asset management practice but also offers a promising, comprehensive solution towards automated, cost-effective, and precise monitoring and maintenance of traffic signs, addressing a major challenge in this area.
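A simplified sketch of the cross-scan KD-tree idea is given below (not the paper's full pipeline): for each point of a traffic sign segmented from the reference scan, the nearest neighbour in the new scan is found, and a large share of points with no close counterpart flags a potential change. The distance and fraction thresholds are illustrative, not calibrated values from the study.

```python
# Cross-scan nearest-neighbour change check for a segmented traffic sign.
import numpy as np
from scipy.spatial import cKDTree

def sign_changed(sign_pts_ref, scan_new, dist_thresh=0.10, missing_frac=0.30):
    """Flag a change if more than 30% of the sign's reference points have no
    neighbour within 10 cm in the new scan (placeholder thresholds)."""
    tree = cKDTree(scan_new)
    d, _ = tree.query(sign_pts_ref, k=1)
    return np.mean(d > dist_thresh) > missing_frac
```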
We investigate a significant model‐observation difference found between cloud‐base vertical velocity for continental shallow cumulus simulated using large‐eddy simulations (LES) and that observed by Doppler lidar over the U.S. Southern Great Plains Atmospheric Radiation Measurement Facility. The LES cloud‐base vertical velocity is dominated by updrafts, consistent with the general picture of convective clouds, but is inconsistent with the Doppler lidar observations, which also show considerable downdrafts. The underestimation of simulated downdrafts is found to be a robust feature, insensitive to various numerical, physical, and dynamical choices. We find that the simulations reproduce the observations more closely only after the model physics are improved to use size‐resolved microphysics and horizontal longwave radiation, both of which modify the cloud buoyancy and velocity structure near cloud side edges. The results suggest that treatments capturing these structures are needed for the proper simulation and subsequent parameterization development of shallow cumulus vertical transport.
Plain Language Summary
Cumulus clouds are important to vertical transport and the heat and moisture budgets in the lower atmosphere. The representations of these clouds in weather and climate models are typically based on studies using higher‐resolution models. However, we use observations to show that high‐resolution models normally do not properly simulate the vertical wind at cloud bottom that governs cloud evolution. We demonstrate that models can closely match the observed vertical winds at cloud bottom by improving the model physics to compute cloud droplet evolution explicitly for a range of droplet sizes while also computing the cooling at cloud sides caused by the horizontal emission of infrared radiation. These improvements enhance downdrafts near cloud sides and bring the simulations in line with observed vertical velocity statistics at cloud bottom.
Key Points
Large‐eddy simulations significantly underestimate observed downdrafts at cloud base in continental shallow convection
The underestimation is a robust feature, being independent of model setup choices such as large‐scale forcing and horizontal resolution
Simulations require size‐resolved microphysics and horizontal longwave radiation to represent the observed downdrafts
The effective utilization of hyperspectral image (HSI) and light detection and ranging (LiDAR) data is essential for land cover classification. Recently, deep learning-based classification approaches have achieved remarkable success. However, most deep learning classification methods are data-driven and designed as black-box architectures, lacking sufficient interpretability and ignoring the potential correlation of heterogeneous complementary information between multisource data. To address these issues, we propose an interpretable deep neural network, namely the multisource aligning joint contextual representation model-informed interpretable classification network (MACRMoI-N), which fully exploits the correlation of multisource data by aligning complementary spectral-spatial-elevation information during end-to-end training. We first present a multimodal aligning joint contextual representation classification model (MACR-M), which incorporates local spatial-spectral prior information into the representation. MACR-M is optimized by an iterative algorithm that solves for the HSI and LiDAR dictionaries and their corresponding sparse coefficients, in which the dictionary distributions are aligned so that the complementary information of the multisource data guides a more accurate classification. We further propose the unfolded MACRMoI-N, where each module corresponds to a specific operation of the optimization algorithm and the parameters are optimized in an end-to-end manner. Comparative experiments and ablation studies show that MACRMoI-N performs better than other advanced methods.
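For context, the "unfolding" idea referenced above can be sketched in a generic form: one ISTA-style sparse-coding iteration is implemented as a network layer with learnable parameters, so stacking K layers mirrors K iterations of the optimizer. This is an illustration of deep unfolding in general, not the MACRMoI-N design; the dimensions and layer count are placeholders.

```python
# Generic deep-unfolding sketch: a learned ISTA (LISTA-style) sparse-coding layer.
import torch
import torch.nn as nn

class UnfoldedISTALayer(nn.Module):
    def __init__(self, signal_dim, code_dim):
        super().__init__()
        self.W = nn.Linear(signal_dim, code_dim, bias=False)   # learned analysis step
        self.S = nn.Linear(code_dim, code_dim, bias=False)     # learned recurrence step
        self.theta = nn.Parameter(torch.tensor(0.1))           # learned soft threshold

    def forward(self, x, z):
        # z_{k+1} = soft_threshold(W x + S z_k, theta)
        pre = self.W(x) + self.S(z)
        return torch.sign(pre) * torch.relu(pre.abs() - self.theta)

# Stacking a few layers mirrors running the iterative algorithm end-to-end.
x = torch.randn(8, 64)                   # e.g. a stacked HSI+LiDAR feature vector
z = torch.zeros(8, 128)
layers = nn.ModuleList([UnfoldedISTALayer(64, 128) for _ in range(3)])
for layer in layers:
    z = layer(x, z)
```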