Friction, or skid resistance, is an important property of road pavements and is influenced by a combination of factors. Accurately predicting the friction coefficient, and thus effectively assessing skid resistance performance, is a complex nonlinear problem. For multi-feature, nonlinear data, the accuracy of traditional methods is limited. In addition, the prediction accuracy of some methods depends on the original sample size, making accurate prediction difficult with small samples. To solve these problems, a friction coefficient prediction model based on improved gray wolf optimization (IGWO) and natural gradient boosting (NGBoost) is proposed. First, to ensure sample diversity, asphalt mixture specimens with different gradation types are made. Then, friction and three-dimensional (3D) macro-texture data are collected from the specimen surfaces. Next, twenty-seven 3D macro-texture features are extracted from the macro-texture data to describe macro-texture details. A correlation coefficient evaluation method is used to eliminate redundant features, and a feature importance analysis model based on gradient boosting is constructed to obtain the key factors affecting skid resistance. Finally, the friction coefficient prediction model based on IGWO-NGBoost is constructed, with the IGWO algorithm adjusting the hyperparameters of NGBoost to optimize the model structure. The results show that, compared with state-of-the-art methods, IGWO-NGBoost effectively fits the friction coefficient with a goodness of fit (R²) of 97.31%. The model can effectively analyze the change mechanism of pavement skid resistance under the combined influence of multiple factors.
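The hyperparameter search described above can be illustrated with a compact grey wolf optimizer. This is a plain GWO sketch, not the paper's improved variant: the quadratic objective stands in for NGBoost's cross-validated loss, and the bounds, pack size, and iteration budget are illustrative assumptions.

```python
import numpy as np

def grey_wolf_optimize(objective, bounds, n_wolves=12, n_iters=60, seed=0):
    """Plain grey wolf optimizer: the three best wolves (alpha, beta, delta)
    guide the rest of the pack toward promising regions of the search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iters                  # exploration factor, 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(len(lo))
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)], float(fitness.min())

# Toy stand-in for a cross-validated NGBoost loss over two hyperparameters
# rescaled to [0, 5]; the true optimum sits at (2, 2).
loss = lambda w: float(np.sum((w - 2.0) ** 2))
best, best_val = grey_wolf_optimize(loss, np.array([[0.0, 5.0], [0.0, 5.0]]))
```

In practice the objective would train NGBoost at the candidate hyperparameters and return a held-out loss; the optimizer itself is agnostic to what it minimizes.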
•A system was developed using a side-viewing Kinect v2 for sorghum plant phenotyping.
•The skeletonization algorithm can segment stems and individual leaves of sorghum plants with overlapping tillers.
•The system-derived traits were found to have high correlations with the corresponding manual measurements.
•The total leaf area and stem volume showed promising potential for biomass prediction.
The ability to correlate morphological traits of plants with their genotypes plays an important role in plant phenomics research. However, measuring phenotypes manually is time-consuming, labor intensive, and prone to human errors. The 3D surface model of a plant can potentially provide an efficient and accurate way to digitize plant architecture. This study focused on the extraction of morphological traits at multiple developmental timepoints from sorghum plants grown under controlled conditions. A non-destructive 3D scanning system using a commodity depth camera was implemented to capture sequential images of a plant at different heights. To overcome the challenges of overlapping tillers, an algorithm was developed to first search for the stem in the merged point cloud data, and then the associated leaves. A 3D skeletonization algorithm was created by slicing the point cloud along the vertical direction, and then linking the connected Euclidean clusters between adjacent layers. Based on the structural clues of the sorghum plant, heuristic rules were implemented to separate overlapping tillers. Finally, each individual leaf was automatically segmented, and multiple parameters were obtained from the skeleton and the reconstructed point cloud including: plant height, stem diameter, leaf angle, and leaf surface area. The results showed high correlations between the manual measurements and the estimated values generated by the system. Statistical analyses between biomass and extracted traits revealed that stem volume was a promising predictor of shoot fresh weight and shoot dry weight, and the total leaf area was strongly correlated to shoot biomass at early stages.
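The slice-then-link skeletonization step can be sketched in a few lines: slice the cloud along z, find Euclidean clusters in each layer, and connect nearby cluster centroids between adjacent layers. The layer height, linking radius, and the brute-force single-link clustering below are illustrative choices, not the study's exact parameters.

```python
import numpy as np

def euclidean_clusters(pts, eps):
    """Single-link clustering: points chained by pairwise distances < eps."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    unvisited, clusters = set(range(len(pts))), []
    while unvisited:
        frontier = [unvisited.pop()]
        members = list(frontier)
        while frontier:
            p = frontier.pop()
            neighbours = [q for q in unvisited if d[p, q] < eps]
            unvisited -= set(neighbours)
            frontier += neighbours
            members += neighbours
        clusters.append(members)
    return clusters

def slice_and_cluster(points, layer_height, link_radius):
    """Slice a cloud along z, cluster each layer in the xy-plane, and link
    clusters of adjacent layers into skeleton edges (centroid to centroid)."""
    layer_idx = ((points[:, 2] - points[:, 2].min()) / layer_height).astype(int)
    nodes, edges, prev = [], [], []
    for li in range(layer_idx.max() + 1):
        layer = points[layer_idx == li]
        current = []
        for members in euclidean_clusters(layer[:, :2], link_radius):
            current.append(len(nodes))
            nodes.append(layer[members].mean(axis=0))
        for a in prev:
            for b in current:
                if np.linalg.norm(nodes[a][:2] - nodes[b][:2]) < link_radius:
                    edges.append((a, b))
        prev = current
    return np.array(nodes), edges

# Demo: two well-separated vertical "tillers", 21 layers each
z = np.arange(21, dtype=float)
stem1 = np.column_stack([np.zeros(21), np.zeros(21), z])
stem2 = np.column_stack([np.full(21, 5.0), np.full(21, 5.0), z])
nodes, edges = slice_and_cluster(np.vstack([stem1, stem2]), 1.0, 1.0)
```

The paper's heuristic rules for separating overlapping tillers would then operate on this node/edge skeleton rather than on raw points.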
Given the enormous scale and diverse distribution of 2D point cloud data, an adaptive Hilbert curve insertion algorithm with quasi-linear time complexity is proposed to improve the efficiency of Delaunay triangulation. First, the large number of conflicting elongated triangles that are created and deleted many times can be reduced by traversing multiple grids along a Hilbert curve. In addition, the searching steps for point location can be reduced by adjusting the Hilbert curve's opening direction in adjacent grids to avoid the "jumping" phenomenon. Lastly, the number of conflicting elongated triangles can be further decreased by adding control points while traversing the grids. The experimental results show that, compared with CGAL, regular grid insertion, and multi-grid insertion algorithms, the adaptive Hilbert curve insertion algorithm significantly improves the efficiency of Delaunay triangulation for both uniformly and non-uniformly distributed point cloud data.
•The proposed algorithm optimizes the order of inserted points.
•The order is determined by an adaptive Hilbert curve and control points.
•Conflicting elongated triangles and searching steps are reduced by the optimized order.
•The efficiency of the proposed method is shown to be enhanced by detailed experiments.
•The proposed algorithm is suitable for randomly distributed points.
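The core trick of ordering insertions along a Hilbert curve so that successive points stay spatially close can be sketched with the classic index mapping. This is the plain textbook curve; the paper's adaptive opening-direction adjustment and control points are not reproduced here.

```python
import numpy as np

def xy2d(n, x, y):
    """Hilbert index of cell (x, y) on an n x n grid (n a power of two)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_order(points, n=1024):
    """Insertion order: quantize points to an n x n grid, sort by Hilbert index."""
    lo = points.min(axis=0)
    span = np.where(np.ptp(points, axis=0) > 0, np.ptp(points, axis=0), 1.0)
    cells = np.minimum(((points - lo) / span * n).astype(int), n - 1)
    return np.argsort([xy2d(n, int(cx), int(cy)) for cx, cy in cells])

# Demo: ordering 200 random points keeps successive insertions spatially close,
# which is what reduces walk length during point location.
pts = np.random.default_rng(0).random((200, 2))
order = hilbert_order(pts)
```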
Despite the availability of 3D digital models, 2D floor plans remain extensively used for quality inspection and maintenance as they offer firsthand information. While laser scanners enable efficient capture and reconstruction of real-world scenes, challenges arise in accurately extracting building geometry from laser scanning data due to the loss of geometric features. This paper describes a method for accurately reconstructing 2D geometric drawings of built facilities using laser scanning data. The method transforms the 3D data into 2D and displays the registered data as pixels to extract the solid lines that represent wall structures. By employing dimensionality transformation and pixelation, the method supports reliable quality inspection and maintenance processes, overcoming the challenges of extracting precise geometry from laser scanning data. This paper contributes to the automated extraction of geometric features from point clouds and inspires the future development of fully automated 2D CAD and 3D BIM in alignment with Scan-to-BIM.
•Two-dimensional (2D) geometric drawings of built facilities are reconstructed from unstructured point clouds.
•Geometric primitives are directly extracted and visualized as 2D drawings without depending on external data.
•The point clouds, with dimensions transformed from 3D to 2D, are displayed as pixels.
•A case study was conducted to validate the accuracy and efficiency of the proposed method.
•The results are expected to support reliable quality inspection and maintenance of built facilities.
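The dimensionality-transformation-plus-pixelation idea can be illustrated with a minimal sketch: take a horizontal slice of the cloud at wall height, drop z, and accumulate the 2D hits into a pixel grid where dense pixels mark wall cross-sections. The band heights, pixel size, and hit threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np

def rasterize_walls(points, z_band=(1.0, 1.5), pixel=0.05, min_hits=3):
    """Slice the cloud at a wall-height band, drop z, and accumulate the 2D
    hits into a pixel grid; dense pixels mark candidate wall cross-sections."""
    band = points[(points[:, 2] >= z_band[0]) & (points[:, 2] < z_band[1])]
    origin = band[:, :2].min(axis=0)
    ij = ((band[:, :2] - origin) / pixel).astype(int)
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)     # count scan hits per pixel
    return grid >= min_hits, origin

# Demo: a 1 m straight wall densely scanned at z = 1.2 m
x = np.linspace(0.0, 1.0, 200)
wall = np.column_stack([x, np.zeros(200), np.full(200, 1.2)])
mask, origin = rasterize_walls(wall)
```

Line extraction (e.g. a Hough transform on the mask) would then turn the dense pixels into the solid lines of the drawing.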
Surface topography and surface finish are two significant factors for evaluating the quality of products in additive manufacturing (AM). AM parts are fabricated layer by layer, which is quite different from traditional formative or subtractive methods. Despite rapid progress in additive manufacturing and the associated optical metrology for quality control and in-situ monitoring, limited research has been conducted to investigate the reliability of 3D surface measurement data. The surface topographies scanned by multiple optical systems demonstrate significant differences due to varying sampling mechanisms, resolutions, system noise, etc. The 3D datasets must be trustworthy in order to extract parameters for quality assurance or feedback control from 3D surface measurements. In this paper, we set up new standards to evaluate the reliability of 3D surface measurement data and analyze the variation in the topographical profile. Two non-contact optical methods, based on Focus Variation Microscopy (FVM) and a Structured Light System (SLS), were adopted to measure the surface topography of the target components. The two optical metrology systems generated two entirely different point cloud datasets. Statistical methods were applied to test the difference between the data obtained from the two systems. Using a data analytics approach for comparison, it was found that the surface roughness estimated from the FVM and SLS point cloud datasets showed no significant difference, even though the point cloud datasets themselves were completely different. This paper provides a standard validation approach to evaluate the plausibility of metrology data from in-situ, real-time surface analysis for process planning in AM.
•A 3D point cloud-based geometric digital twin is proposed for pipe condition assessment.
•Density-based extraction, clustering, and region growing of pipeline geometric features.
•The proposed method can process point clouds with high accuracy and efficiency.
•The feasibility of geometric digital twin-informed quantitative assessment is proven.
•The paper contributes to better-informed O&M during the lifecycle service of pipelines.
The increasing age of underground pipe networks and the lack of effective inspection technologies present considerable challenges for the whole life-cycle management of these infrastructures. Modern laser scanning technology offers a cost-effective and safe means to obtain dense and accurate 3D topographic data of the inner surface of pipelines. However, laser scanning point clouds contain substantial noise and outliers, and efficiently extracting valuable information for structural and functional mapping remains in its infancy. This paper presents an innovative method for the fast processing of point cloud data of large-diameter pipelines, enabling the accurate extraction of geometric features and the efficient establishment of a geometric digital twin using density-based clustering, fitting, and region growing algorithms. Experimental tests were conducted to evaluate the accuracy, efficiency, and feasibility of the proposed method. The results demonstrate that the proposed approach not only robustly achieves high accuracy but also maintains high computational efficiency. Additionally, the geometric digital twin shows promise as a tool for quantitatively assessing structural deformation and blockage defects.
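The geometric-fitting step on which such a digital twin rests can be illustrated with a least-squares circle fit to a pipe cross-section (the Kåsa method). This shows only the fitting step, not the paper's density-based clustering or region growing; the pipe radius and noise level are synthetic.

```python
import numpy as np

def fit_circle(xy):
    """Kasa least-squares circle fit: x^2 + y^2 = 2a*x + 2b*y + c is linear
    in (a, b, c), and the radius follows as r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), float(np.sqrt(c + a ** 2 + b ** 2))

# Demo: a noisy cross-section of a pipe of radius 0.75 m centred at (2, 3)
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
ring = np.column_stack([2 + 0.75 * np.cos(theta), 3 + 0.75 * np.sin(theta)])
ring += rng.normal(0, 0.002, ring.shape)
center, radius = fit_circle(ring)
```

Deviations of individual cross-section points from the fitted circle then quantify ovality or deformation, and missing angular sectors indicate blockage.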
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of buildings are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally efficient method for extracting overall façade and window boundary points to reconstruct a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. The slicing was optimised at 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were less than 3 s for datasets of up to 2.6 million points, while similar existing approaches required more than 16 h for such datasets.
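The one-dimensional projection idea can be sketched directly: project a slice's points onto one façade axis, and flag low-density bins as openings. The bin width and gap threshold below are illustrative assumptions, not the optimised slice counts reported above.

```python
import numpy as np

def find_openings(points_x, facade_extent, bin_width=0.1, gap_frac=0.2):
    """Project facade points onto one axis and flag low-density bins as
    openings; consecutive open bins are merged into (start, end) intervals."""
    lo, hi = facade_extent
    n_bins = int(np.ceil((hi - lo) / bin_width))
    counts, edges = np.histogram(points_x, bins=n_bins, range=(lo, hi))
    threshold = gap_frac * np.median(counts[counts > 0])
    open_bins = counts < threshold
    openings, start = [], None
    for i, is_open in enumerate(open_bins):
        if is_open and start is None:
            start = edges[i]
        elif not is_open and start is not None:
            openings.append((start, edges[i]))
            start = None
    if start is not None:
        openings.append((start, edges[-1]))
    return openings

# Demo: wall points are dense except across a 1 m window span at [2, 3]
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0, 2, 2000), rng.uniform(3, 5, 2000)])
openings = find_openings(x, (0.0, 5.0))
```

Running the same projection along the orthogonal axis and intersecting the intervals yields rectangular opening boundaries.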
3D spatial measurement for model reconstruction: A review. Flores-Fuentes, Wendy; Trujillo-Hernández, Gabriel; Alba-Corpus, Iván Y.; et al. Measurement: Journal of the International Measurement Confederation, Vol. 207, February 2023. Peer-reviewed journal article.
The measurement of 3D spatial coordinates for model reconstruction through artificial machine vision systems, based on optical sensors and the corresponding signal processing algorithms, is a powerful module for cyber systems. It provides efficient, functional, and intelligent vision and data information about the objects and scenes under observation for decision making, as well as for remote environment interactivity and the actuation of autonomous robot systems. Over the past 20 years, artificial machine vision has benefited from emerging technology and shows enormous promise, but it also faces technical difficulties in achieving customized and true commercial applications. This paper reviews the research progress, trends, and future research directions, as well as the state of the art of topics related to 3D spatial measurement for model reconstruction. It classifies the technology by its fundamental principles and applications to construct an outlook on its advantages, disadvantages, and challenges.
•Topics related to 3D spatial measurement for model reconstruction.
•Review of the research progress and the state of the art of these topics.
•Detailed explanation of the state of the art and the fundamental principles of each topic.
•Description of technologies and applications.
•Classification of the technology based on optical sensors and their signal processing.
•Outlook on the technology's advantages, disadvantages, advances, and challenges.
•This literature points out the emergence of machine vision and aims to boost its use for innovation.
•Contribution of machine vision-based cyber-systems to Industry 4.0.
•Literature to promote and encourage research in related fields.
Geometric information modelling from point cloud data (PCD) is a fundamental step of the digital twinning process for rail infrastructure. Currently, this onerous procedure outweighs the anticipated benefits of the resulting model and expends 74% of the modellers' effort on converting PCD to a model. The cost of the resulting geometric information models (GIMs) can be reduced by automating the modelling process. State-of-the-art methods cannot offer the large-scale GIM generation required over kilometres without forfeiting precision and incurring manual cost. This paper addresses the challenge of achieving such automation by leveraging the highly standardised topology of railways to automatically generate GIMs of rail track structures.
The method first automatically segments rails and track beds, delivering labelled point clusters of track structure elements. Next, it fits the segmented rails to pre-defined parametric assemblies of different rail profiles and uses a mesh-based approach to reconstruct the geometry of the track bed, delivering Industry Foundation Classes (IFC) files of railway track structure elements. Experiments on 18.5 km of railway PCD yielded average segmentation F1 scores of 98.1% and 94.9%, and overall modelling accuracies of 3.5 cm and 2.8 cm root mean square error (RMSE), for rails and track beds respectively. The proposed method can realise estimated time savings of 88.9% without needing any manual inputs.
•Use of railway topology to locate track structure elements in point clouds.
•Automated segmentation method producing rail and track bed point clusters.
•Eliminates the impact of inconsistencies in local point densities.
•Automated 3D IFC object generation within 5 cm accuracy.
•Reduces manual labour hours by 89% without needing any manual inputs.
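The evaluation metrics reported above are standard and easy to reproduce from confusion counts and residuals; the counts and residuals below are made-up illustrations, not the paper's data.

```python
import numpy as np

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from per-class confusion counts."""
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rmse(pred, true):
    """Root mean square error between modelled and reference geometry."""
    diff = np.asarray(pred, dtype=float) - np.asarray(true, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative values: 981 correct rail points, 19 false positives, 19 misses
rail_f1 = f1_score(981, 19, 19)
# Illustrative per-point modelling residuals in metres
track_rmse = rmse([0.030, -0.025, 0.035], [0.0, 0.0, 0.0])
```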
Unstructured point clouds of varying sizes are increasingly acquired in a variety of environments through laser triangulation or Light Detection and Ranging (LiDAR). Predicting a vector response based on unstructured point clouds is a common problem that arises in a wide variety of applications. The current literature relies on several pre-processing steps, such as structured subsampling and feature extraction, to analyze the point cloud data. Those techniques lead to quantization artifacts and do not consider the relationship between the regression response and the point cloud during pre-processing. Therefore, we propose a general and holistic "Bayesian Nonlinear Tensor Learning and Modeler" (ANTLER) to model the relationship of unstructured, varying-size point cloud data with a vector response. The proposed ANTLER simultaneously optimizes a nonlinear tensor dimensionality reduction and a nonlinear regression model with a 3D point cloud input and a regression response. ANTLER can handle the complex data representation, high dimensionality, and inconsistent size of 3D point cloud data.

Note to Practitioners: This paper is motivated by a real-world case study concerning the prediction of transmission error and eccentricity based on unstructured point clouds of varying sizes in gear manufacturing. In the current state of the art, those characteristics can only be obtained via expensive and time-consuming Finite Element Analysis (FEA) or test benches. The proposed ANTLER framework can directly link the measured point clouds with a vector response and serves as a guiding example of ANTLER's immense potential.
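ANTLER itself is not reproduced here, but the underlying problem of mapping clouds of inconsistent size to a numeric response can be illustrated with a much simpler size-invariant baseline: pool each cloud into a fixed-length feature vector, then regress. The pooling choice, the ball-shaped toy clouds, and the linear read-out are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_ball_cloud(radius, n, rng):
    """Roughly uniform points in a ball: random directions scaled by u^(1/3)."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v * rng.random((n, 1)) ** (1 / 3)

def features(cloud):
    """Order- and size-invariant pooling: per-axis standard deviation."""
    return cloud.std(axis=0)

# Training clouds of varying size; the response is the (hidden) ball radius
radii = np.repeat(np.arange(1.0, 5.0, 0.5), 3)
X = np.stack([features(random_ball_cloud(r, int(rng.integers(300, 900)), rng))
              for r in radii])
X = np.column_stack([X, np.ones(len(X))])          # add a bias column
w, *_ = np.linalg.lstsq(X, radii, rcond=None)

# Predict the response of an unseen cloud of yet another size
pred = np.append(features(random_ball_cloud(2.5, 600, rng)), 1.0) @ w
```

Unlike this fixed pooling, ANTLER learns the low-dimensional representation jointly with the regression, which is what avoids the quantization artifacts of hand-crafted pre-processing.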