As deep learning matures in practical applications and the ultra-high transmission rates of 5G communication remove the data-transmission bottleneck on the Internet of Vehicles, automated driving is becoming a pivotal technology shaping the future of the industry. Sensors are the key to perceiving the outside world in an automated driving system, and their cooperative performance directly determines the safety of automated driving vehicles. In this survey, we discuss the multi-sensor fusion strategies used in automated driving in recent years. We analyze the performance of conventional sensors, including radar, LiDAR, camera, ultrasonic, GPS, IMU, and V2X, and the necessity of multi-sensor fusion. Based on the differences among the latest studies, we divide fusion strategies into four categories and point out their shortcomings. Sensor fusion is mainly applied to multi-target tracking and environment reconstruction; we discuss methods for establishing motion models and for data association in multi-target tracking. At the end of the paper, we analyze the deficiencies of current studies and put forward suggestions for further improvement. Through this investigation, we hope to characterize the current state of multi-sensor fusion in automated driving and to provide more efficient and reliable fusion strategies.
This article presents GLIM, a 3D range-inertial localization and mapping framework with GPU-accelerated scan matching factors. The odometry estimation module of GLIM employs a combination of fixed-lag smoothing and keyframe-based point cloud matching that makes it possible to deal with a few seconds of completely degenerated range data while efficiently reducing trajectory estimation drift. It also incorporates multi-camera visual feature constraints in a tightly coupled way to further improve the stability and accuracy. The global trajectory optimization module directly minimizes the registration errors between submaps over the entire map. This approach enables us to accurately constrain the relative pose between submaps with a small overlap. Although both the odometry estimation and global trajectory optimization algorithms require much more computation than existing methods, we show that they can be run in real-time due to the careful design of the registration error evaluation algorithm and the entire system to fully leverage GPU parallel processing.
• Fixed-lag-smoothing-based odometry that is robust to point cloud degeneration.
• Tight fusion of point cloud and IMU constraints for global trajectory optimization.
• Code is released as open source.
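To make the submap registration-error idea above concrete, the following minimal NumPy sketch evaluates a point-to-point registration cost between two submaps given a set of correspondences. It is only an illustration under simplifying assumptions (the function name and the point-to-point cost are ours); GLIM's GPU-accelerated factors evaluate a richer scan-matching cost in parallel.

    # Illustrative sketch, not GLIM's implementation: a point-to-point
    # registration error between two submaps, of the kind a scan-matching
    # factor penalizes as a function of the two submap poses.
    import numpy as np

    def registration_error(T_a, T_b, points_a, points_b, pairs):
        """Sum of squared distances between corresponding points of two
        submaps; T_a and T_b are 4x4 world-from-submap transforms."""
        err = 0.0
        for i, j in pairs:
            p_world = T_a[:3, :3] @ points_a[i] + T_a[:3, 3]
            q_world = T_b[:3, :3] @ points_b[j] + T_b[:3, 3]
            err += float(np.sum((p_world - q_world) ** 2))
        return err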
One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for metric six degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on a micro-aerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy in localization. We open source our implementations for both PCs (https://github.com/HKUST-Aerial-Robotics/VINS-Mono) and iOS mobile devices (https://github.com/HKUST-Aerial-Robotics/VINS-Mobile).
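As a rough illustration of the IMU preintegration that such tightly coupled estimators consume, the sketch below accumulates bias-corrected gyroscope and accelerometer samples into relative rotation, velocity, and position terms expressed in the first frame. This is a simplified Euler integration for intuition only (names and integration scheme are ours); VINS-Mono's actual preintegration also propagates covariance and bias Jacobians.

    # Simplified IMU preintegration sketch (illustrative, not VINS-Mono code).
    import numpy as np

    def so3_exp(w):
        """Rotation matrix for a rotation vector w (Rodrigues' formula)."""
        theta = np.linalg.norm(w)
        if theta < 1e-9:
            return np.eye(3)
        k = w / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

    def preintegrate(gyro, accel, dt, bg, ba):
        """Accumulate relative rotation/velocity/position between two frames,
        expressed in the first IMU frame so they are independent of the global pose."""
        dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
        for w, a in zip(gyro, accel):
            a_c = a - ba                               # bias-corrected acceleration
            dp = dp + dv * dt + 0.5 * (dR @ a_c) * dt ** 2
            dv = dv + (dR @ a_c) * dt
            dR = dR @ so3_exp((w - bg) * dt)           # bias-corrected gyro increment
        return dR, dv, dp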
We propose a technique for fusing a bracketed exposure sequence into a high quality image, without converting to high dynamic range (HDR) first. Skipping the physically based HDR assembly step simplifies the acquisition pipeline. This avoids camera response curve calibration and is computationally efficient. It also allows for including flash images in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multiresolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators.
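For reference, OpenCV ships an exposure fusion implementation in this spirit (MergeMertens, which weights contrast, saturation, and well-exposedness); a minimal usage sketch, with placeholder file names, might look like:

    # Minimal usage sketch: exposure fusion with OpenCV's MergeMertens.
    # File names are placeholders for a bracketed exposure sequence.
    import cv2
    import numpy as np

    exposures = [cv2.imread(name) for name in ("under.jpg", "mid.jpg", "over.jpg")]
    fusion = cv2.createMergeMertens()   # quality measures: contrast, saturation, well-exposedness
    fused = fusion.process(exposures)   # float32 image, roughly in [0, 1]
    cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))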
Visual-inertial odometry (VIO) is known to suffer from drifting, especially over long-term runs. In this article, we present GVINS, a nonlinear optimization-based system that tightly fuses global navigation satellite system (GNSS) raw measurements with visual and inertial information for real-time and drift-free state estimation. Our system aims to provide accurate global six-degree-of-freedom estimation under complex indoor-outdoor environments, where GNSS signals may be intermittent or even inaccessible. To establish the connection between global measurements and local states, a coarse-to-fine initialization procedure is proposed to efficiently calibrate the transformation online and initialize GNSS states from only a short window of measurements. The GNSS code pseudorange and Doppler shift measurements, along with visual and inertial information, are then modeled and used to constrain the system states in a factor graph framework. For complex and GNSS-unfriendly areas, the degenerate cases are discussed and carefully handled to ensure robustness. Thanks to the tightly coupled multisensor approach and system design, our system fully exploits the merits of three types of sensors and is able to seamlessly cope with the transition between indoor and outdoor environments, where satellites are lost and reacquired. We extensively evaluate the proposed system by both simulation and real-world experiments, and the results demonstrate that our system substantially suppresses the drift of the VIO and preserves the local accuracy in spite of noisy GNSS measurements. The versatility and robustness of the system are verified on large-scale data collected in challenging environments. In addition, experiments show that our system can still benefit from the presence of only one satellite, whereas at least four satellites are required for its conventional GNSS counterparts.
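To give a flavor of what a code pseudorange constraint looks like, the sketch below computes the textbook pseudorange residual (measured pseudorange minus geometric range and clock-bias terms). The names are illustrative and the model is simplified, omitting atmospheric delays and other corrections that a full GNSS factor such as GVINS's would account for.

    # Illustrative code pseudorange residual (textbook form, not GVINS's exact factor).
    import numpy as np

    C = 299792458.0  # speed of light in m/s

    def pseudorange_residual(p_recv_ecef, dt_recv, p_sat_ecef, dt_sat, rho_meas):
        """Measured pseudorange minus predicted pseudorange.
        Prediction = geometric range + c * (receiver clock bias - satellite clock bias);
        ionospheric/tropospheric delays are ignored here for brevity."""
        geometric_range = np.linalg.norm(p_sat_ecef - p_recv_ecef)
        rho_pred = geometric_range + C * (dt_recv - dt_sat)
        return rho_meas - rho_pred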
With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
This article studies the Gaussian filtering fusion problem for multi-sensor uncertain systems. Measurements are classified as normal or abnormal by hypothesis tests, and a unified fusion framework for optimal estimation, based on Bayesian filtering theory, is proposed to integrate the classified measurements. Under this unified framework, the measurements are handled with different fusion strategies, so that the process and measurement uncertainties are compensated by the internal interactions among the local estimators. Moreover, instead of solving for adaptive factors, the measurement uncertainties are compensated by controlling the number of steps in the progressive measurement update. Finally, the effectiveness of the proposed unified fusion method is verified through numerical simulations.
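As a concrete (and much simpler) analogue of classifying measurements before fusion, the sketch below applies a standard innovation chi-square gate to decide whether a measurement is consistent with a Kalman filter's prediction. The setting and names are ours; the paper's hypothesis tests and progressive measurement update are more elaborate than this gate.

    # Standard innovation chi-square gate (illustrative, not the paper's specific tests).
    import numpy as np
    from scipy.stats import chi2

    def is_normal_measurement(z, H, x_pred, P_pred, R, alpha=0.01):
        """Return True if measurement z passes the chi-square gate (treated as normal),
        False if its Mahalanobis distance to the prediction is too large (abnormal)."""
        nu = z - H @ x_pred                        # innovation
        S = H @ P_pred @ H.T + R                   # innovation covariance
        d2 = float(nu.T @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
        return d2 <= chi2.ppf(1.0 - alpha, df=z.shape[0])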
Touch sensing can help robots understand their surrounding environment, and in particular the objects they interact with. To this end, roboticists have, in the last few decades, developed several tactile sensing solutions, extensively reported in the literature. Research into interpreting the conveyed tactile information has also started to attract increasing attention in recent years. However, a comprehensive study on this topic is yet to be reported. In an effort to collect and summarize the major scientific achievements in the area, this survey extensively reviews current trends in robot tactile perception of object properties. Available tactile sensing technologies are briefly presented before an extensive review on tactile recognition of object properties. The object properties that are targeted by this review are shape, surface material and object pose. The role of touch sensing in combination with other sensing sources is also discussed. In this review, open issues are identified and future directions for applying tactile sensing in different tasks are suggested.
The progress brought by deep learning technology over the last decade has inspired many research domains, such as radar signal processing and speech and audio recognition, to apply it to their respective problems. Most of the prominent deep learning models exploit data representations acquired with either LiDAR or camera sensors, leaving automotive radars rarely used. This is despite the vital potential of radars in adverse weather conditions, as well as their ability to simultaneously measure an object's range and radial velocity. Because radar signals have so far been little exploited, benchmark data remain scarce; recently, however, interest in applying radar data as input to various deep learning algorithms has grown as more datasets become available. To this end, this paper presents a survey of deep learning approaches that process radar signals to accomplish significant tasks in autonomous driving, such as detection and classification. We organize the review by radar signal representation, since this is one of the critical choices when using radar data with deep learning models. Furthermore, we give an extensive review of recent deep learning-based multi-sensor fusion models that exploit radar signals and camera images for object detection tasks. We then provide a summary of the available datasets containing radar data. Finally, we discuss the gaps and important innovations in the reviewed papers and highlight some possible future research prospects.