Indoor perception is a field that has gained traction in recent years. While a significant amount of research has been done on outdoor perception and motion planning, the indoor environment has yet to receive similar treatment. In indoor environments, various sensor systems have been developed to track and localize objects, each tackling a different set of challenges. In this article, we introduce a novel infrastructure sensor node (ISN) consisting of a light detection and ranging (LiDAR) sensor and two monocular cameras mounted on the ceiling of the hallways of our laboratory to obtain relevant information. We present a perception pipeline that uses prior 3-D point cloud registration to localize objects in real time in dynamic indoor environments. We provide a complete case study demonstrating a system that successfully detects, registers, and localizes objects in a dynamic environment with a high degree of occlusion.
•To address the density-imbalance problem in point clouds, we propose a novel spatial information enhancement (SIE) module to predict the dense shapes of point sets in candidate boxes and learn the structure information to improve feature representation.
•We present a hybrid-paradigm region proposal network (HP-RPN) for more effective multi-scale feature extraction and high-recall proposal generation.
•With the structure information as guidance, our elaborately designed SIENet achieves state-of-the-art performance in 3D object detection on the KITTI benchmark.
•The encouraging experimental results also demonstrate outstanding improvement in far-range object detection.
LiDAR-based 3D object detection has an immense influence on autonomous vehicles. Due to the intrinsic properties of LiDAR, fewer points are collected from objects farther away from the sensor. This imbalanced density of point clouds degrades detection accuracy but is generally neglected by previous works. To address this challenge, we propose a novel two-stage 3D object detection framework, named SIENet. Specifically, we design the Spatial Information Enhancement (SIE) module to predict the spatial shapes of the foreground points within proposals and extract the structure information to learn representative features for further box refinement. The predicted spatial shapes are complete and dense point sets, so the extracted structure information contains more semantic representation. In addition, we design the Hybrid-Paradigm Region Proposal Network (HP-RPN), which includes multiple branches to learn discriminative features and generate accurate proposals for the SIE module. Extensive experiments on the KITTI 3D object detection benchmark show that our elaborately designed SIENet outperforms the state-of-the-art methods by a large margin. Code will be publicly available at https://github.com/Liz66666/SIENet.
•Black double-shell hollow nanoparticles (BDS-HNPs) are prepared as LiDAR-reflective materials.
•Strategies of multiple interfaces and an internal white shell result in superb NIR reflectance.
•Owing to their hydrophilic nature, BDS-HNPs are easily formulated as an eco-friendly hydrophilic paint.
•BDS-HNPs function dually as an NIR-reflective and black-color-exhibiting layer in a monolayer.
•Three types of LiDAR sensors are employed for practical recognition of BDS-HNPs-painted objects.
Novel LiDAR-detectable black double-shell hollow nanoparticles (BDS-HNPs) with an internal white shell are successfully utilized as materials for autonomous vehicle paint for the first time. These BDS-HNPs are carefully designed to achieve excellent near-infrared (NIR) reflectance, blackness, hydrophilicity, and applicability as monolayer coatings. An emphasis is placed on NIR reflectance by forming double-shell hollow morphologies embracing the internal white shell and multiple interfaces within the nanoparticles. Accordingly, the BDS-HNPs exhibit NIR reflectances of ca. 33.2, 36.9, and 40.9 R% at wavelengths of 793, 850, and 905 nm, respectively, comparable to that of a commercially available NIR-reflective bilayer dark-tone coating. For practical LiDAR visualization, BDS-HNPs mixed with a hydrophilic varnish are spray-coated onto various objects. As a result, the BDS-HNPs-painted objects are clearly recognized by three different types of LiDAR sensors (robot, rotating, and MEMS mirror) under various indoor and outdoor conditions. These results clearly demonstrate the great potential of BDS-HNPs as a new type of LiDAR-detectable black material for future autonomous driving environments.
The effective detection of curbs is fundamental and crucial for the navigation of a self-driving car. This paper presents a real-time curb detection method that automatically segments the road and detects its curbs using a 3D-LiDAR sensor. The point cloud data of the sensor are first processed to distinguish on-road and off-road areas. A sliding-beam method is then proposed to segment the road using the off-road data. A curb-detection method is finally applied to obtain the position of the curbs for each road segment. The proposed method is tested on datasets acquired from the self-driving car of the VeCaN laboratory at Tongji University. Offline experiments demonstrate the accuracy and robustness of the proposed method: the average recall, precision, and their harmonic mean are all over 80%. Online experiments demonstrate the real-time capability for autonomous driving, as the average processing time per frame is only around 12 ms.
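The three-stage pipeline in this abstract (separate on-road and off-road points, segment the road, locate the curbs) can be sketched in miniature. Note this is a simplified illustration, not the paper's sliding-beam algorithm: the ground height, tolerances, and elevation-jump heuristic below are assumed values for demonstration only.

```python
# Simplified curb detection on one LiDAR scan line: a list of (y, z) points
# ordered laterally across the road. All thresholds are illustrative assumptions.

GROUND_Z = 0.0       # assumed ground height (m)
ON_ROAD_TOL = 0.05   # points within this of ground count as road surface (m)
CURB_JUMP = 0.10     # minimum elevation jump that signals a curb edge (m)

def split_on_off_road(scan):
    """Separate on-road and off-road points by height above the ground plane."""
    on = [p for p in scan if abs(p[1] - GROUND_Z) <= ON_ROAD_TOL]
    off = [p for p in scan if abs(p[1] - GROUND_Z) > ON_ROAD_TOL]
    return on, off

def detect_curbs(scan):
    """Return lateral positions where elevation jumps by at least CURB_JUMP."""
    curbs = []
    for (y0, z0), (y1, z1) in zip(scan, scan[1:]):
        if abs(z1 - z0) >= CURB_JUMP:
            curbs.append((y0 + y1) / 2.0)
    return curbs

# One scan line across a road with raised curbs on both sides.
scan = [(-4.0, 0.15), (-3.5, 0.15), (-3.0, 0.0), (0.0, 0.0), (3.0, 0.0),
        (3.5, 0.15), (4.0, 0.15)]
print(detect_curbs(scan))   # curb edges near y = -3.25 and y = 3.25
```

A real implementation would estimate the ground plane per segment rather than assume a fixed height, which is what makes the method robust on sloped roads.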
Recent years have witnessed ever-growing interest in and adoption of autonomous vehicles (AVs), thanks to the latest advancements in sensing and artificial intelligence (AI) technologies. The LiDAR sensor is adopted by most AV manufacturers for its high precision and high reliability. Unfortunately, LiDARs are susceptible to malicious spoofing attacks, which can lead to severe safety consequences for AVs. Most current work focuses on protecting LiDAR against spoofing attacks by using perception model-level defense methods, whose effectiveness unfortunately depends on the correctness of the LiDAR's sensing outcome. A spoofer can thus evade these methods as long as it fabricates points that maintain the contextual relationship held by the legitimate points. In this article, we propose to use the signal's Doppler frequency shift to verify the sender of the signal and detect potential spoofing attacks. To this end, we first thoroughly analyze the working principle of LiDAR and conduct real-world experiments to deeply understand and reveal the vulnerability of LiDAR sensors. We then prove that the Doppler frequency shifts of legitimate and spoofing signals present different characteristics, which can be used to fundamentally protect the LiDAR sensing outcome. For demonstration purposes, we consider three attack models: a static attacker, a moving attacker, and a moving attacker with control of both velocity and signal frequency. For each of the models, we first show how the spoofing attack is performed and then present our countermeasures. We then propose a statistical spoofing detection framework that jointly considers the impact of short-term uncertainty in vehicle velocity, which can provide more accurate spoofing detection results in realistic environments. Extensive numerical results are provided for a wide range of settings and road conditions.
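The core consistency test described above can be illustrated numerically: a legitimate echo must carry the Doppler shift predicted from the relative velocity between sensor and target, while an injected signal generally does not. This is a toy sketch of that idea only; the carrier frequency and tolerance below are assumed values, not parameters from the article.

```python
# Toy Doppler-consistency check: compare the measured frequency shift of a
# return against the two-way shift predicted from relative velocity.
# Carrier frequency and tolerance are illustrative assumptions.

C = 3.0e8     # speed of light (m/s)
F0 = 2.0e14   # assumed LiDAR carrier frequency (~1.5 um wavelength) (Hz)

def expected_doppler(v_rel):
    """Two-way Doppler shift for an echo off a target closing at v_rel (m/s)."""
    return 2.0 * v_rel * F0 / C

def is_spoofed(measured_shift, v_rel, tol=0.1):
    """Flag a return whose shift deviates more than tol (fractional) from expectation."""
    exp = expected_doppler(v_rel)
    return abs(measured_shift - exp) > tol * abs(exp)

legit = expected_doppler(10.0)      # genuine echo, target closing at 10 m/s
print(is_spoofed(legit, 10.0))      # False: shift consistent with motion
print(is_spoofed(0.0, 10.0))        # True: a static spoofer injects no shift
```

The article's statistical framework refines exactly this comparison by modeling short-term velocity uncertainty instead of using a fixed tolerance.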
Autonomous electric vehicles (EVs) need to recognize the surrounding environment through mapping. Mapping provides directions for driving in new locations and uncharted areas. However, few studies have discussed the mapping of unknown outdoor areas using light detection and ranging (LiDAR) with simultaneous localization and mapping (SLAM). LiDAR can reduce the limitations of GPS, which cannot track the current location and covers only a limited area. Hence, this study used the Hector SLAM algorithm, which builds maps from the data generated by LiDAR sensors. The study was conducted at Universitas Sriwijaya on two routes: the Palembang and Inderalaya campuses. A comparison with Google Maps is made to determine the accuracy of the algorithm. The route on the Palembang campus was divided into four points, A-B-C-D; route AB exhibits the highest accuracy of 85.7%. In contrast, the route on the Inderalaya campus was established by adding segments with buildings closer to the road, with marker points allocated along the route A-B-C-D-E; route CE exhibits the highest accuracy of 83.6%. Overall, this study shows that the Hector SLAM algorithm and LiDAR can be used to map unknown environments for autonomous EVs.
Many point-based semantic segmentation methods have been designed for indoor scenarios, but they struggle when applied to point clouds captured by a light detection and ranging (LiDAR) sensor in an outdoor environment. To make these methods efficient and robust enough to handle LiDAR data, we introduce the general concept of reformulating 3-D point-based operations so that they can operate in the projection space. While we show by means of three point-based methods that the reformulated versions are between 300 and 400 times faster and achieve higher accuracy, we furthermore demonstrate that the concept of reformulating 3-D point-based operations makes it possible to design new architectures that unify the benefits of point-based and image-based methods. As an example, we introduce a network that integrates reformulated 3-D point-based operations into a 2-D encoder-decoder architecture that fuses information from different 2-D scales. We evaluate the approach on four challenging datasets for semantic LiDAR point cloud segmentation and show that combining reformulated 3-D point-based operations with 2-D image-based operations achieves very good results on all four datasets.
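The projection space this abstract relies on is typically a 2-D range image obtained by a spherical projection of each 3-D point. A minimal sketch of that mapping follows; the field-of-view limits and image size are illustrative assumptions, not the paper's configuration.

```python
import math

# Minimal spherical projection of a 3-D LiDAR point onto a 2-D range image,
# the kind of mapping that lets point-based operations run in projection space.
# Image size and vertical FOV below are illustrative assumptions.

W, H = 1024, 64                  # range-image width and height (pixels)
FOV_UP, FOV_DOWN = 3.0, -25.0    # vertical field of view (degrees)

def project(x, y, z):
    """Map a point (x, y, z) to (column, row, range) in the range image."""
    r = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)                        # horizontal angle
    pitch = math.asin(z / r)                      # vertical angle
    u = int((0.5 * (1.0 - yaw / math.pi)) * W)    # column from yaw
    fov = math.radians(FOV_UP - FOV_DOWN)
    v = int((1.0 - (pitch - math.radians(FOV_DOWN)) / fov) * H)  # row from pitch
    return min(max(u, 0), W - 1), min(max(v, 0), H - 1), r

u, v, r = project(10.0, 0.0, 0.0)
print(u, v, r)   # a point straight ahead lands at mid-width
```

Because every point gets integer pixel coordinates, neighborhood queries that cost a 3-D search in point space become constant-time image lookups, which is where the reported speedups come from.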
Studies have shown that vehicle trajectory data are effective for calibrating microsimulation models. Light Detection and Ranging (LiDAR) technology offers high-resolution 3D data, allowing for detailed mapping of the surrounding environment, including road geometry, roadside infrastructure, and moving objects such as vehicles, cyclists, and pedestrians. Unlike other traditional methods of trajectory data collection, LiDAR's high-speed data processing, fine angular resolution, high measurement accuracy, and high performance in adverse weather and low-light conditions make it well suited for applications requiring real-time response, such as autonomous vehicles. This research presents a comprehensive framework for integrating LiDAR sensor data into simulation models, together with accurate calibration strategies for proactive safety analysis. Vehicle trajectory data were extracted from LiDAR point clouds collected at six urban signalized intersections in Lubbock, Texas, in the USA. Each study intersection was modeled with PTV VISSIM and calibrated to replicate the observed field scenarios. The Directed Brute Force method was used to calibrate two car-following and two lane-change parameters of the Wiedemann 1999 model in VISSIM, resulting in an average accuracy of 92.7%. Rear-end conflicts extracted from the calibrated models, combined with a ten-year historical crash dataset, were fitted to a Negative Binomial (NB) model to estimate the model's parameters. At all six intersections, rear-end conflict count is a statistically significant predictor (p-value < 0.05) of observed rear-end crash frequency. The outcome of this study provides transportation professionals with a framework for the combined use of LiDAR-based vehicle trajectory data, microsimulation, and surrogate safety assessment tools.
This integration allows for more accurate and proactive safety evaluations, which are essential for designing safer transportation systems, effective traffic control strategies, and predicting future congestion problems.
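The rear-end conflicts this study counts are a surrogate safety measure commonly derived from trajectories via time-to-collision (TTC): a conflict is logged when a follower closes on its leader fast enough that TTC drops below a threshold. The sketch below illustrates that idea only; the 1.5 s threshold and the toy trajectory samples are assumptions for demonstration, not the study's exact settings.

```python
# Counting rear-end conflicts from trajectory samples with a time-to-collision
# (TTC) surrogate. Threshold and sample data are illustrative assumptions.

TTC_THRESHOLD = 1.5   # seconds; a commonly used rear-end conflict cutoff

def ttc(gap, v_follower, v_leader):
    """Time-to-collision; None when the follower is not closing the gap."""
    closing = v_follower - v_leader
    return gap / closing if closing > 0 else None

def count_conflicts(samples):
    """Count time steps at which TTC falls below the threshold."""
    conflicts = 0
    for gap, vf, vl in samples:
        t = ttc(gap, vf, vl)
        if t is not None and t < TTC_THRESHOLD:
            conflicts += 1
    return conflicts

# (gap m, follower speed m/s, leader speed m/s) at successive time steps
samples = [(20.0, 15.0, 15.0),   # no closing speed -> no conflict
           (10.0, 15.0, 10.0),   # TTC = 2.0 s      -> no conflict
           (6.0, 15.0, 10.0)]    # TTC = 1.2 s      -> conflict
print(count_conflicts(samples))  # 1
```

Conflict counts produced this way per intersection are what would then be regressed against observed crash frequencies in the NB model.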