As deep learning becomes increasingly practical and the ultra-high transmission rates of 5G communication overcome the data-transmission barrier in the Internet of Vehicles, automated driving is becoming a pivotal technology shaping the future of the industry. Sensors are the key to perceiving the outside world in an automated driving system, and their cooperative performance directly determines the safety of automated driving vehicles. In this survey, we discuss the different strategies of multi-sensor fusion in automated driving in recent years. The performance of conventional sensors and the necessity of multi-sensor fusion are analyzed, including radar, LiDAR, camera, ultrasonic, GPS, IMU, and V2X. According to the differences among the latest studies, we divide the fusion strategies into four categories and point out some shortcomings. Sensor fusion is mainly applied to multi-target tracking and environment reconstruction, and we discuss the methods of establishing a motion model and of data association in multi-target tracking. At the end of the paper, we analyze the deficiencies of current studies and put forward some suggestions for further improvement. Through this investigation, we hope to clarify the current state of multi-sensor fusion in automated driving and to provide more efficient and reliable fusion strategies.
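The data-association step mentioned in the abstract above can be sketched as a minimal nearest-neighbor assignment between predicted track positions and new detections; the function name, gate value, and greedy ordering below are illustrative assumptions, not drawn from any of the surveyed papers:

```python
import numpy as np

def nearest_neighbor_associate(tracks, detections, gate=5.0):
    """Greedy nearest-neighbor data association.

    tracks, detections: (N, 2) and (M, 2) arrays of predicted and
    measured positions. Returns a list of (track_idx, det_idx) pairs;
    candidates farther apart than `gate` are left unassociated.
    """
    pairs = []
    used = set()
    # Cost matrix of Euclidean distances between every track/detection pair.
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    for t in np.argsort(cost.min(axis=1)):      # serve the closest tracks first
        for d in np.argsort(cost[t]):           # try that track's nearest detections
            if int(d) not in used and cost[t, d] <= gate:
                pairs.append((int(t), int(d)))
                used.add(int(d))
                break
    return pairs
```

In practice an optimal assignment (e.g. the Hungarian algorithm) is often preferred over this greedy scheme, but the gating idea is the same.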
Visual-inertial odometry (VIO) is known to suffer from drift, especially over long-term runs. In this article, we present GVINS, a nonlinear optimization-based system that tightly fuses global navigation satellite system (GNSS) raw measurements with visual and inertial information for real-time and drift-free state estimation. Our system aims to provide accurate global six-degree-of-freedom estimation in complex indoor-outdoor environments, where GNSS signals may be intermittent or even inaccessible. To establish the connection between global measurements and local states, a coarse-to-fine initialization procedure is proposed to efficiently calibrate the transformation online and initialize the GNSS states from only a short window of measurements. The GNSS code pseudorange and Doppler shift measurements, along with visual and inertial information, are then modeled and used to constrain the system states in a factor graph framework. For complex and GNSS-unfriendly areas, the degenerate cases are discussed and carefully handled to ensure robustness. Thanks to the tightly coupled multisensor approach and system design, our system fully exploits the merits of the three types of sensors and seamlessly copes with the transition between indoor and outdoor environments, where satellites are lost and reacquired. We extensively evaluate the proposed system in both simulation and real-world experiments, and the results demonstrate that our system substantially suppresses the drift of the VIO and preserves local accuracy in spite of noisy GNSS measurements. The versatility and robustness of the system are verified on large-scale data collected in challenging environments. In addition, experiments show that our system can still benefit from the presence of only one satellite, whereas at least four satellites are required by its conventional GNSS counterparts.
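The code-pseudorange constraint described in the abstract above can be illustrated with a simplified residual function; the model below omits atmospheric and other propagation delays and is only a sketch of the generic pseudorange factor, not GVINS's actual implementation:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def pseudorange_residual(rx_pos, rx_clk_bias, sat_pos, sat_clk_bias, measured_rho):
    """Residual of one code-pseudorange factor.

    The modeled pseudorange is the geometric range between satellite and
    receiver plus the receiver/satellite clock offsets scaled to meters.
    Atmospheric (ionospheric/tropospheric) delays are omitted here.
    """
    geometric_range = np.linalg.norm(sat_pos - rx_pos)
    modeled = geometric_range + C * (rx_clk_bias - sat_clk_bias)
    return measured_rho - modeled
```

In a factor graph, this residual (weighted by the measurement noise) couples the receiver position and clock states to each tracked satellite, which is why even a single satellite still contributes useful information.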
This article studies the Gaussian filtering fusion problem for multi-sensor uncertain systems. The measurements are classified as the normal and the abnormal measurements by hypothesis tests, and a unified fusion framework of optimal estimation is proposed based on the Bayesian filtering theory to integrate the classified measurements. Under the unified fusion framework, the measurements are treated with different fusion strategies, thus the process and the measurement uncertainties are compensated by the internal interactions among the local estimators. Moreover, instead of solving the adaptive factors, the measurement uncertainties are compensated by controlling the steps of the progressive measurement update. Finally, the effectiveness of the proposed unified fusion method is verified through numerous simulations.
Touch sensing can help robots understand their surrounding environment, and in particular the objects they interact with. To this end, roboticists have, in the last few decades, developed several tactile sensing solutions, extensively reported in the literature. Research into interpreting the conveyed tactile information has also started to attract increasing attention in recent years. However, a comprehensive study on this topic is yet to be reported. In an effort to collect and summarize the major scientific achievements in the area, this survey extensively reviews current trends in robot tactile perception of object properties. Available tactile sensing technologies are briefly presented before an extensive review on tactile recognition of object properties. The object properties that are targeted by this review are shape, surface material, and object pose. The role of touch sensing in combination with other sensing sources is also discussed. In this review, open issues are identified and future directions for applying tactile sensing in different tasks are suggested.
This paper presents a convolutional neural network (CNN) based approach for fault diagnosis of rotating machinery. The proposed approach incorporates sensor fusion by taking advantage of the CNN structure to achieve higher and more robust diagnosis accuracy. Both temporal and spatial information of the raw data from multiple sensors is considered during the training process of the CNN. Representative features can be extracted automatically from the raw signals, which avoids manual feature extraction or selection that relies heavily on prior knowledge of the specific machinery and fault types. The effectiveness of the developed method is evaluated using datasets from two typical types of rotating machinery: roller bearings and gearboxes. Compared with traditional approaches using manual feature extraction, the results show the superior diagnosis performance of the proposed method. The approach can be extended to fault diagnosis of other machinery with various types of sensors owing to its end-to-end feature learning capability.
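The arrangement of temporal and spatial information from multiple sensors into a single CNN input, as described above, can be sketched as follows; the window length and per-sensor normalization are illustrative choices, not details taken from the paper:

```python
import numpy as np

def build_fusion_input(signals, window=256):
    """Arrange synchronized raw signals from several sensors into a
    2-D (sensor x time) array, so that a CNN's filters can span both
    the spatial (across-sensor) and temporal dimensions at once.

    signals: list of equal-length 1-D arrays, one per sensor.
    Returns an array of shape (n_sensors, window), z-scored per sensor.
    """
    x = np.stack([np.asarray(s)[:window] for s in signals])  # (n_sensors, window)
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True) + 1e-8                # avoid divide-by-zero
    return (x - mean) / std                                  # per-sensor normalization
```

A 2-D convolution over such an array mixes information across sensors (rows) and across time (columns) simultaneously, which is one simple way to realize the fusion the abstract describes.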
The progress brought by deep learning technology over the last decade has inspired many research domains, such as radar signal processing and speech and audio recognition, to apply it to their respective problems. Most of the prominent deep learning models exploit data representations acquired with either LiDAR or camera sensors, leaving automotive radars rarely used. This is despite the vital potential of radars in adverse weather conditions, as well as their ability to simultaneously measure an object's range and radial velocity. Because radar signals have so far been little exploited, benchmark data remain scarce; recently, however, as more datasets become available, interest in applying radar data as input to various deep learning algorithms has grown considerably. To this end, this paper presents a survey of deep learning approaches that process radar signals to accomplish significant tasks in autonomous driving, such as detection and classification. We have organized the review by radar signal representation, as this is one of the critical aspects of using radar data with deep learning models. Furthermore, we give an extensive review of recent deep learning-based multi-sensor fusion models that exploit radar signals and camera images for object detection. We then summarize the available datasets containing radar data. Finally, we discuss the gaps and important innovations in the reviewed papers and highlight some possible future research directions.
• Developed a modular and distributed sensing system for assisting GPS navigation.
• Designed, developed, and validated multi-channel IR sensors with CAN-bus communication.
• Designed a fuzzy knowledge-based controller and validated it in simulation and in the field.
• Tested the functionality of the proposed solution under harsh field conditions.
Autonomous navigation of mobile robots inside unstructured agricultural fields poses serious challenges due to the extreme variations in high-density bushes, the presence of random obstacles, and the inaccuracies in GPS and IMU measurements. Advanced perception solutions are therefore required to assist the existing GPS-based navigation and to improve the reliability of the operation. This paper reports on the development and evaluation of a modular and scalable sensing system to assist the autonomous navigation of an agricultural mobile robot by providing it with collision avoidance capabilities. The robot was equipped with a four-wheel steering mechanism that could be driven remotely via a 2.4 GHz wireless transmitter and could be programmed using the Robot Operating System (ROS) to follow waypoints. Multiple arrays of time-of-flight and infrared sensors with independent processing units were installed on the left, right, and front of the robot to enable a distributed control system. Communication between the sensor modules was realized via a CAN network. The collision avoidance system then exchanged messages with the robot computer over Ethernet using the ROS multiple-machines scheme. A virtual model of the robot with an exact sensing setup was replicated in a robotic simulator to accelerate experimenting with different control algorithms and to optimize the sensors' functionality. The simulation scenes and dynamic models were then improved by manually driving the robot in a real berry field and collecting sensor and steering data. Results from the simulation showed that the robot was able to autonomously navigate different tracks and stabilize itself in the presence of random obstacles using a fuzzy knowledge-based algorithm. Preliminary field tests suggested that an exponential filter needed to be implemented on each sensor to remove noise and outliers.
The proposed approach created a flexible framework for exchanging data between the sensor ECUs and preventing the robot from colliding with random obstacles in front, left, and right. The study confirmed the functionality of the affordable sensing system and control architecture, which can be suggested as an alternative to high-end 3D LiDARs and complex simultaneous localization and mapping methods.
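A minimal version of the per-sensor exponential filter mentioned in the field tests above might look like this; the smoothing constant is a hypothetical choice, not the value used in the study:

```python
def exponential_filter(readings, alpha=0.3):
    """Exponentially weighted moving-average filter, as might run on each
    sensor ECU to suppress noise and outlier spikes before the readings
    are published on the CAN network.

    alpha in (0, 1]: higher values track the raw signal more closely.
    """
    filtered = []
    y = readings[0]                       # initialize with the first reading
    for r in readings:
        y = alpha * r + (1 - alpha) * y   # blend new reading with history
        filtered.append(y)
    return filtered
```

Because each output depends only on the previous output and the new reading, the filter needs constant memory, which makes it well suited to small per-sensor microcontrollers.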
Improving the ability to detect small leaks to prevent more severe accidents plays an extremely important role in the safe operation of pipelines. To tackle the low diagnostic accuracy of single sensors for detecting small leaks, a multi-source multi-modal feature fusion method for gas pipeline leak detection was proposed. First, the collected data from multiple sensors were transformed into two-dimensional time-frequency images for input into the feature extraction network. Then, the dual-information fusion (DIF) module was introduced, incorporating an attention mechanism and multi-scale feature fusion to enhance the model's feature expression capability and fully interact with the multi-modal features. Next, the channel split multiscale convolution (CSMC) module was designed to accommodate the diversity of input data and improve the model's generalization capability. The DIF and CSMC modules were cascaded and fused to produce the classification results through the fully connected layer. Finally, the effectiveness of the proposed method was assessed using pipeline leak data collected in the laboratory. The experimental results demonstrate that the proposed multi-modal deep learning model can effectively identify the small-leak state in pipelines, exhibiting superior diagnostic performance compared to current mainstream image classification models.
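The conversion of raw sensor data into two-dimensional time-frequency images, as described above, can be sketched with a plain short-time Fourier transform; the window and hop sizes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def spectrogram_image(signal, win=64, hop=32):
    """Turn a 1-D sensor signal into a 2-D time-frequency image via a
    short-time Fourier transform: slide a window along the signal and
    take the magnitude spectrum of each segment.

    Returns a log-magnitude array of shape (win // 2 + 1, n_frames),
    i.e. frequency bins down the rows and time frames across the columns.
    """
    window = np.hanning(win)              # taper to reduce spectral leakage
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.log1p(np.array(frames).T)   # log compression of magnitudes
```

Such images can then be fed to an image-classification backbone exactly like ordinary pictures, which is what makes this representation convenient for CNN-style feature extractors.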
One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for metric six-degree-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimal computation. We additionally perform 4-DOF pose graph optimization to enforce global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on a micro-aerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable to different applications requiring high localization accuracy. We open-source our implementations for both PCs (https://github.com/HKUST-Aerial-Robotics/VINS-Mono) and iOS mobile devices (https://github.com/HKUST-Aerial-Robotics/VINS-Mobile).
The paper addresses Kalman filtering over a peer-to-peer sensor network with a careful eye towards data-transmission scheduling for reduced communication bandwidth and, consequently, enhanced energy efficiency and prolonged network lifetime. A novel consensus Kalman filter algorithm with event-triggered communication is developed by having each node transmit its local information to its neighbors only when this is considered particularly significant for estimation purposes, in the sense that it notably deviates from the information that can be predicted from the last transmitted one. Further, it is proved that the filter guarantees stability (mean-square boundedness of the estimation error in each node) under network connectivity and system collective observability. Finally, numerical simulations are provided to demonstrate the practical effectiveness of the distributed filter in trading off estimation performance against transmission rate.
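The event-trigger rule described above, transmitting only when the local estimate notably deviates from what neighbors can predict from the last transmitted value, can be illustrated with a scalar toy model; the linear system, gain, and threshold below are hypothetical and not the paper's actual filter:

```python
def event_trigger_schedule(estimates, a=1.0, threshold=0.5):
    """Toy scalar illustration of event-triggered transmission.

    estimates: the node's local state estimates over time for a system
    x_k = a * x_{k-1}. At each step the node transmits only if its
    estimate deviates from the value neighbors would predict by
    propagating the last transmitted value through the model.

    Returns (sent_flags, neighbor_view): whether a transmission occurred
    at each step, and the value neighbors hold at each step.
    """
    sent_flags, neighbor_view = [], []
    last_sent = estimates[0]
    for x in estimates:
        predicted = a * last_sent              # what neighbors can predict
        if abs(x - predicted) > threshold:     # notable deviation -> transmit
            last_sent = x
            sent_flags.append(True)
        else:
            last_sent = predicted              # neighbors keep propagating
            sent_flags.append(False)
        neighbor_view.append(last_sent)
    return sent_flags, neighbor_view
```

In this sketch a slowly varying estimate generates no traffic at all, while an abrupt change triggers exactly one transmission, which is the bandwidth-saving behavior the abstract describes.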