•In-depth analysis of motion planning methods for autonomous on-road driving.•Path, manoeuvre and trajectory motion planning approaches are compared and contrasted.•Decision making and handling of obstacles are highlighted as the main areas of concern.•Incorporating transport engineering & operations aspects into motion planning methods.
Currently, autonomous or self-driving vehicles are at the heart of academic and industrial research because of their multi-faceted advantages, which include improved safety, reduced congestion, lower emissions and greater mobility. Software is the key driving factor underpinning autonomy, within which planning algorithms that are responsible for mission-critical decision making hold a significant position. While transporting passengers or goods from a given origin to a given destination, motion planning methods incorporate searching for a path to follow, avoiding obstacles and generating the best trajectory that ensures safety, comfort and efficiency. A range of different planning approaches has been proposed in the literature. The purpose of this paper is to review existing approaches and then compare and contrast the different methods employed for the motion planning of autonomous on-road driving, which consists of (1) finding a path, (2) searching for the safest manoeuvre and (3) determining the most feasible trajectory. Methods developed by researchers at each of these three levels exhibit varying levels of complexity and performance accuracy. This paper presents a critical evaluation of each of these methods in terms of their advantages/disadvantages, inherent limitations, feasibility, optimality, handling of obstacles and the operational environments in which they were tested.
Based on a critical review of existing methods, research challenges to address current limitations are identified and future research directions are suggested so as to enhance the performance of planning algorithms at all three levels. Some promising areas of future focus are identified as the use of vehicular communications (V2V and V2I) and the incorporation of transport engineering aspects in order to improve the look-ahead horizon of current sensing technologies that are essential for planning, with the aim of reducing the total cost of driverless vehicles. The critical review of planning techniques presented in this paper, along with the associated discussion of their constraints and limitations, seeks to assist researchers in accelerating development in the emerging field of autonomous vehicle research.
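The three-level decomposition that this review follows (path, manoeuvre, trajectory) can be pictured as a small pipeline. The sketch below is a minimal conceptual illustration of that hierarchy only; all class and function names are hypothetical and do not correspond to any method surveyed in the paper.

```python
# Conceptual sketch of the three-level planning hierarchy (path -> manoeuvre -> trajectory).
# All names and values are illustrative placeholders, not an actual planning API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrajectoryPoint:
    x: float   # longitudinal position [m]
    y: float   # lateral position [m]
    v: float   # target speed [m/s]
    t: float   # time stamp [s]

def plan_path(origin: Tuple[float, float], destination: Tuple[float, float]) -> List[Tuple[float, float]]:
    """Level 1: find a geometric path (e.g. from a route-graph search)."""
    return [origin, destination]          # placeholder straight-line path

def select_manoeuvre(path, obstacles) -> str:
    """Level 2: choose the safest manoeuvre (keep lane, change lane, stop, ...)."""
    return "keep_lane" if not obstacles else "change_lane"

def generate_trajectory(path, manoeuvre, horizon_s=5.0, dt=0.5) -> List[TrajectoryPoint]:
    """Level 3: produce a time-parameterised trajectory along the chosen path."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    n = int(horizon_s / dt)
    return [TrajectoryPoint(x0 + (x1 - x0) * i / n,
                            y0 + (y1 - y0) * i / n,
                            v=10.0, t=i * dt) for i in range(n + 1)]

if __name__ == "__main__":
    path = plan_path((0.0, 0.0), (50.0, 0.0))
    manoeuvre = select_manoeuvre(path, obstacles=[])
    trajectory = generate_trajectory(path, manoeuvre)
    print(manoeuvre, len(trajectory), "trajectory points")
```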
With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundational building block of any autonomous system and its constituent sensors, and it must be performed correctly before sensor fusion and obstacle detection processes can be implemented. This paper evaluates the capabilities and the technical performance of sensors that are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors and radar sensors, and on the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The paper therefore provides an end-to-end review of the hardware and software methods required for sensor-fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
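One of the calibration categories this review covers, intrinsic camera calibration, can be illustrated with OpenCV's standard checkerboard routine. The sketch below is a minimal example under assumed board dimensions and a placeholder image folder; it is not one of the open-source calibration packages reviewed in the paper.

```python
# Minimal sketch of intrinsic camera calibration from checkerboard images.
# Board size and the "calib_images" folder are placeholders for illustration.
import glob
import cv2
import numpy as np

BOARD = (9, 6)                                    # inner corners per row / column
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)  # board-frame points

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):     # placeholder image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the camera matrix K and distortion coefficients from all detections.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("camera matrix:\n", K)
```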
Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field of view around the car. In this way, we avoid blind spots, which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treating each camera individually. In addition, the processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
•3D visual perception with a multi-camera system•Fisheye cameras for autonomous driving tasks•Pipeline overview for calibration, mapping, localization and obstacle detection•Novel algorithms that directly work on fisheye images
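The fisheye-aware processing the paper above requires can be illustrated with OpenCV's fisheye (equidistant) camera model. The sketch below only shows single-image rectification under assumed intrinsics K and distortion coefficients D; it is not the V-Charge calibration pipeline itself.

```python
# Minimal sketch of fisheye image rectification with OpenCV's fisheye model.
# K and D are assumed placeholder intrinsics, not values from the V-Charge project.
import cv2
import numpy as np

K = np.array([[380.0, 0.0, 640.0],            # assumed fisheye camera matrix
              [0.0, 380.0, 400.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.002, -0.0005])   # assumed k1..k4 fisheye distortion

img = cv2.imread("fisheye_frame.png")          # placeholder input frame
h, w = img.shape[:2]

# Precompute undistortion maps once, then remap every incoming frame in real time.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("rectified_frame.png", rectified)
```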
Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and the RANSAC method, and (ii) a voxel-grid model for static and moving obstacle detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and the detection of vulnerable road users, namely pedestrians and cyclists. Experiments using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and of the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.
•A complete framework for ground surface estimation and static/moving obstacle detection in driving environments is proposed.•A piecewise surface fitting algorithm, based on a ‘multi-region’ strategy and the behavior of Velodyne LIDAR scans, is proposed to estimate a finite set of multiple surfaces that fit the road and its vicinity.•A 3D voxel-based representation using discriminative analysis is proposed for obstacle modeling. The proposed approach detects moving obstacles by integrating and processing information from previous measurements.•A set of diversified experiments, with corresponding result analysis, aimed at evaluating the performance of the proposed approach was performed.
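The core of the ground estimation module above is RANSAC plane fitting on LIDAR points. The sketch below shows a plain NumPy single-plane RANSAC fit on a synthetic cloud; it is a simplified stand-in, not the paper's multi-region piecewise algorithm.

```python
# Minimal NumPy sketch of RANSAC ground-plane fitting on a LIDAR point cloud.
# Fits a single plane; the reviewed method fits multiple planes over road regions.
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.15, rng=None):
    """points: (N, 3) array of x, y, z; returns (plane [a, b, c, d], inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_plane, best_mask = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                               # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)         # point-to-plane distances
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_plane, best_mask = np.append(normal, d), mask
    return best_plane, best_mask

if __name__ == "__main__":
    # Synthetic cloud: a flat ground plane near z = 0 plus a box-shaped obstacle.
    rng = np.random.default_rng(1)
    ground = np.c_[rng.uniform(-20, 20, (5000, 2)), rng.normal(0, 0.03, 5000)]
    box = np.c_[rng.uniform(4, 6, (500, 2)), rng.uniform(0.5, 2.0, 500)]
    cloud = np.vstack([ground, box])
    plane, inliers = ransac_ground_plane(cloud)
    print("plane:", plane, "| ground pts:", inliers.sum(), "| obstacle pts:", (~inliers).sum())
```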
For autonomous driving, it is important to detect obstacles at all scales accurately to ensure safety. In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using a mmWave radar and a vision sensor, where the sparsity of radar points is considered in the proposed SAF. The proposed fusion method can be embedded in the feature-extraction stage, which leverages the features of the mmWave radar and the vision sensor effectively. Based on the SAF, an attention weight matrix is generated to fuse the vision features, which differs from concatenation fusion and element-wise add fusion. Moreover, the proposed SAF can be trained in an end-to-end manner, incorporated with recent deep learning object detection frameworks. In addition, we build a generation model that converts radar points to radar images for neural network training. Numerical results suggest that the newly developed fusion method achieves superior performance on public benchmarks. The source code will be released on GitHub.
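The general idea of spatial attention fusion (radar features producing a spatial weight map that gates vision features) can be sketched in a few lines of PyTorch. Channel sizes and the residual connection below are illustrative assumptions and not the authors' exact SAF architecture.

```python
# Minimal PyTorch sketch of spatial-attention fusion of radar and vision feature maps:
# the radar feature map yields a per-pixel weight matrix that gates the vision features.
# Layer sizes are illustrative; this is not the paper's exact SAF module.
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    def __init__(self, radar_ch=1, vision_ch=256):
        super().__init__()
        # Reduce the (sparse) radar feature map to a single-channel attention map.
        self.attn = nn.Sequential(
            nn.Conv2d(radar_ch, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                          # attention weights in (0, 1)
        )

    def forward(self, radar_feat, vision_feat):
        w = self.attn(radar_feat)                  # (B, 1, H, W) spatial weight matrix
        # Gate the vision features; the residual path keeps unattended regions alive.
        return vision_feat * w + vision_feat

if __name__ == "__main__":
    fusion = SpatialAttentionFusion()
    radar = torch.rand(2, 1, 80, 80)               # radar points rendered as an image
    vision = torch.rand(2, 256, 80, 80)            # vision backbone feature map
    print(fusion(radar, vision).shape)             # torch.Size([2, 256, 80, 80])
```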
Highlights•Utilizing three features: disparity, superpixel segments and pixel-wise gradient.•Computing the reliability of disparity from superpixel segments and pixel-wise gradient.•Developing a voting map to reduce the time complexity of finding the initial obstacle region.•Superior performance with erroneous disparity information and in complex environments.
A vision-based real-time rear obstacle detection system is one of the most essential technologies and can be used in many applications such as parking assistance systems and intelligent vehicles. Although disparity is a useful feature for detecting obstacles, estimating a correct disparity map is a hard problem owing to matching ambiguity and noise sensitivity, especially in homogeneous regions. To overcome these problems, we leverage only reliable disparities for obstacle detection. A reliability factor is introduced to measure the inhomogeneity of regions quantitatively. It is computed at each superpixel to account for the noise sensitivity of pixel-wise gradients and to assign similar reliability values within the same object. The method includes two major components. Firstly, in a feature extraction and combination stage, we extract three features from the stereo images, namely disparity, superpixel segments and pixel-wise gradient, and compute the reliability of the disparity from the superpixel segments and the pixel-wise gradient. Secondly, in an obstacle detection stage, the disparity feature, weighted by its reliability, votes for obstacle locations, and dominant candidates in the voting map are selected as initial obstacle regions. The initial obstacle regions are then expanded into their neighboring superpixels based on CIELAB color similarity and distance similarity between superpixels. Experimental results show satisfactory performance in various real parking environments: the detection rate is at least 4% higher than those of other existing methods, and the false detection rate is more than 10% lower, so the method can be used in parking assistance systems.
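The ingredients described above (disparity, superpixels and gradient-based reliability) can be sketched with OpenCV and scikit-image. The example below computes an SGBM disparity map, SLIC superpixels and a per-superpixel reliability score from gradient magnitude; image paths and all parameters are placeholders, and the scoring is a simplified stand-in for the paper's reliability factor.

```python
# Minimal sketch: SGBM disparity, SLIC superpixels, and a per-superpixel reliability
# from gradient magnitude (textureless segments get low reliability). Illustrative only.
import cv2
import numpy as np
from skimage.segmentation import slic

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 1. Disparity from semi-global block matching (fixed-point output, scaled by 16).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# 2. Superpixel segmentation of the left image.
segments = slic(left, n_segments=800, compactness=10, channel_axis=None)

# 3. Pixel-wise gradient magnitude as a texture cue.
gx = cv2.Sobel(left, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(left, cv2.CV_32F, 0, 1)
grad = np.hypot(gx, gy)

# 4. Reliability per superpixel: mean gradient inside the segment, so that
#    homogeneous (ambiguous-matching) regions contribute little to the vote.
reliability = np.zeros_like(grad)
for seg_id in np.unique(segments):
    mask = segments == seg_id
    reliability[mask] = grad[mask].mean()
reliability /= reliability.max() + 1e-6

weighted_disparity = disparity * reliability            # input to the obstacle voting map
```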
We developed a new method for obstacle detection and 3D reconstruction using a 3D map. Obstacle detection and 3D reconstruction are key functions of autonomous driving. It is easy to detect and reconstruct static obstacles three-dimensionally because they exist in the 3D map. However, the detection and 3D reconstruction of dynamic obstacles that are not in the 3D map are difficult for a typical in-vehicle camera that cannot measure distance. We aim to detect dynamic obstacles three-dimensionally using an in-vehicle camera, and we address the new problem of accurate 3D reconstruction using a monocular camera and a 3D map. To solve this problem, we focus on semantic segmentation for detection and on depth completion to complete the depth map. We propose a multi-task neural network (NN) that shares the encoder of the semantic segmentation NN and the depth completion NN, whose inputs are an image and the 3D map. The proposed multi-task NN detects dynamic obstacles 1.4 times more accurately than the single-task state-of-the-art method.
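The shared-encoder, two-head structure described above can be sketched compactly in PyTorch. The example below is a minimal illustration of a multi-task network with a segmentation head and a depth-completion head; layer sizes, channel counts and the 4-channel image-plus-map input are assumptions, not the authors' architecture.

```python
# Minimal PyTorch sketch of a multi-task network with a shared encoder and two
# decoder heads (semantic segmentation and depth completion). Illustrative sizes only.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_ch=4, num_classes=19):
        super().__init__()
        # Shared encoder; the input stacks the RGB image with a sparse depth channel
        # rendered from the 3D map (hence 3 + 1 = 4 input channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        def decoder(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
            )
        self.seg_head = decoder(num_classes)       # per-pixel class logits
        self.depth_head = decoder(1)               # completed dense depth map

    def forward(self, image, sparse_depth):
        feat = self.encoder(torch.cat([image, sparse_depth], dim=1))
        return self.seg_head(feat), self.depth_head(feat)

if __name__ == "__main__":
    net = MultiTaskNet()
    img = torch.rand(1, 3, 256, 512)
    depth = torch.rand(1, 1, 256, 512)
    seg, dense_depth = net(img, depth)
    print(seg.shape, dense_depth.shape)            # (1, 19, 256, 512) (1, 1, 256, 512)
```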
This paper delivers an exhaustive analysis of the fusion of multi-sensor technologies, including traditional sensors such as cameras, Light Detection and Ranging (LiDAR), Radio Detection and Ranging (RADAR), and ultrasonic sensors, with Artificial Intelligence (AI)-powered methodologies for obstacle detection in Autonomous Vehicles (AVs). With the growing momentum in AV adoption, there is a heightened need for versatile and resilient obstacle detection systems. Our research reviews the literature in which proposed approaches assimilate data from this diverse sensor suite, integrated through Deep Learning (DL) techniques, to refine AV performance. Recent advancements and prevailing challenges within the domain are thoroughly examined, with particular focus on the integration of sensor fusion techniques, the facilitation of real-time processing via edge and fog computing, and the implementation of advanced artificial intelligence architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), to enhance data interpretation efficacy. In conclusion, the paper underscores the critical contribution of multi-sensor arrays and deep learning to enhancing the safety and reliability of autonomous vehicles, offering significant perspectives for future research and technological progress.