This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotic systems and gas sensing algorithms in realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photoionization detectors, and anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex, realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we perform quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real-world measurements when advective airflows are present in the environment.
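The filament dispersion theory mentioned above can be sketched as a cloud of Gaussian puffs carried by the wind. The following minimal Python sketch is our own illustration of that idea, not the framework's API; the function names, the single-filament simplification, and the constant-wind advection are all assumptions for exposition:

```python
import math

def filament_concentration(px, py, pz, fx, fy, fz, q, sigma):
    """Concentration contribution of one filament, modeled as a 3D
    Gaussian puff centred at (fx, fy, fz).

    q     -- amount of gas carried by the filament (arbitrary units)
    sigma -- current filament width; in filament models it grows over
             time as the puff spreads
    """
    d2 = (px - fx) ** 2 + (py - fy) ** 2 + (pz - fz) ** 2
    norm = (2 * math.pi * sigma ** 2) ** 1.5
    return q / norm * math.exp(-d2 / (2 * sigma ** 2))

def advect(fx, fy, fz, wind, dt):
    """Move a filament centre with the local wind vector (advection only;
    a full model would add turbulent jitter and width growth)."""
    return (fx + wind[0] * dt, fy + wind[1] * dt, fz + wind[2] * dt)
```

Summing `filament_concentration` over all active filaments at a sensor pose yields the instantaneous gas reading that a simulated metal oxide sensor would then low-pass filter.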
• A promising real-time architecture integrating robot software packages with Xenomai.
• Deterministic response of Xenomai and rapid development using ROS.
• Convenient APIs for the communication mechanism between real-time and non-real-time tasks.
• Flexible and extendible for real-time tasks with non-real-time device drivers.
• Implementation details on an embedded platform considering software compatibility.
This paper proposes a real-time (RT) control architecture based on Xenomai, an RT embedded Linux, to control a service robot together with non-real-time (NRT) robot operating system (ROS) packages. Most software, including device drivers and ROS, is developed to operate under the standard Linux kernel, which does not provide RT guarantees. Invoking standard Linux system calls in an RT context triggers mode switching, resulting in non-deterministic responses and stability problems such as priority inversion and kernel panic. This paper overcomes such issues through a communication interface between RT and NRT tasks, termed the cross-domain datagram protocol. The proposed architecture supports priority-based scheduling of multiple tasks while exposing an interface compatible with the original ROS packages. Moreover, it enables standard device drivers to operate inside RT tasks without developing RT device drivers, which requires a significant amount of development time. Feasibility is demonstrated by implementing the architecture on a Raspberry Pi 3, a low-cost open embedded hardware platform, conducting various experiments to analyze its performance, and applying it to a service robot using ROS navigation packages. The results indicate that the proposed architecture can effectively provide an RT environment without stability issues when utilizing ROS packages and standard device drivers.
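The key constraint behind an RT-to-NRT datagram bridge is that the RT writer must never block on the NRT reader. As a conceptual stand-in for the paper's cross-domain datagram protocol (not its Xenomai implementation; the class and method names here are hypothetical), a bounded, lossy queue captures the contract:

```python
from collections import deque

class CrossDomainQueue:
    """Bounded, lossy datagram channel: the RT writer never blocks.

    Toy illustration of the RT/NRT bridging contract; a real system
    would back this with shared memory or an XDDP-style socket rather
    than a Python deque, and the NRT side would poll from a ROS node.
    """
    def __init__(self, depth=16):
        self._q = deque(maxlen=depth)  # oldest datagram dropped when full

    def rt_send(self, datagram):
        self._q.append(datagram)       # O(1) append; never blocks the RT task

    def nrt_recv(self):
        """Non-blocking receive on the NRT side; None when empty."""
        return self._q.popleft() if self._q else None
```

Dropping the oldest datagram on overflow is a deliberate choice: for state-style messages (poses, sensor readings) the freshest value matters more than completeness, and blocking the RT producer would defeat the purpose of the split.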
This paper proposes the cooperative use of zero velocity update (ZU) in a decentralized extended Kalman filter (DEKF)-based localization algorithm for multi-robot systems. The filter utilizes inertial measurement unit (IMU), ultra-wideband (UWB), and odometer-based velocity measurements to improve the localization performance of the system in a GNSS-denied environment. In this work, we evaluate the benefits of using ZU in a DEKF-based localization algorithm. The algorithm was tested with real hardware in a video motion capture facility and a robot operating system (ROS)-based simulation environment for unmanned ground vehicles (UGVs). Both simulation and real-world experiments were performed to determine the effectiveness of using ZU in one robot to reinstate the localization of the others in a multi-robot system. Experimental results from GNSS-denied simulation and real-world environments revealed that using ZU in the DEKF together with simple heuristics significantly improved the three-dimensional localization accuracy.
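The core of a zero velocity update is a pseudo-measurement: when a robot is known to be stationary, the filter fuses a velocity observation of exactly zero, pulling back the drift accumulated from IMU integration. A minimal scalar sketch (our own simplification to one velocity state; the paper's DEKF is multi-state and multi-robot):

```python
def zupt_update(v, P, r_zupt=1e-4):
    """Zero-velocity pseudo-measurement update for a scalar velocity state.

    v      -- current (drifted) velocity estimate
    P      -- current estimate variance
    r_zupt -- small noise assigned to the zero-velocity observation

    Standard Kalman update with measurement z = 0 and H = 1.
    """
    K = P / (P + r_zupt)        # Kalman gain
    v = v + K * (0.0 - v)       # innovation against the zero measurement
    P = (1.0 - K) * P           # variance shrinks after the update
    return v, P
```

Because `r_zupt` is tiny relative to the drifted variance, the gain is close to 1 and the velocity estimate snaps nearly to zero, which is exactly the drift-reset effect the abstract reports propagating to the other robots through the DEKF.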
Recent industrial robotics covers a broad part of the manufacturing spectrum as well as everyday human applications, and the performance of these devices has become increasingly important. Positioning accuracy and repeatability, as well as operating speed, are essential in any industrial robotics application. Robot positioning errors are complex due to the extensive combination of their sources and cannot be compensated for using conventional methods. Some robot positioning errors can be compensated for only using machine learning (ML) procedures. Reinforcement learning increases the robot's positioning accuracy and expands its implementation capabilities. The provided methodology presents an easy and focused approach for industrial in situ robot position adjustment in real time during production setup or readjustment. The scientific value of this approach is a methodology using an ML procedure without huge external datasets or extensive computing facilities. This paper presents a deep Q-learning algorithm applied to improve the positioning accuracy of an articulated KUKA youBot robot during operation. A significant improvement in positioning accuracy was achieved after approximately 260 iterations in online mode, following an initial simulation of the ML procedure.
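The Q-learning loop behind such a position-adjustment scheme can be shown with a tabular toy in place of the paper's deep network: states are discretised position errors, actions nudge the end-effector one cell left or right, and the reward favours zero error. Everything below (grid size, reward shape, hyperparameters) is an illustrative assumption, not the paper's setup:

```python
import random

def train_position_q(n_cells=11, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy 1-D positioning-error grid.

    The zero-error state is the middle cell (n_cells // 2); actions are
    0 = nudge left, 1 = nudge right. Returns the learned Q-table.
    """
    random.seed(0)                              # deterministic toy run
    target = n_cells // 2
    Q = [[0.0, 0.0] for _ in range(n_cells)]
    for _ in range(episodes):
        s = random.randrange(n_cells)           # random initial error
        for _ in range(20):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2 = max(0, min(n_cells - 1, s + (-1 if a == 0 else 1)))
            # reward: bonus at zero error, penalty grows with distance
            r = 1.0 if s2 == target else -abs(s2 - target) / n_cells
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy moves any error state toward the centre cell, which is the behaviour a deep Q-network generalises to continuous error measurements.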
To tackle the challenges of weak sensing capacity for multi-scale objects, high missed-detection rates for occluded targets, and difficult model deployment in detection tasks of intelligent roadside perception systems, the PDT-YOLO algorithm based on YOLOv7-tiny is proposed. Firstly, we introduce the intra-scale feature interaction module (AIFI) and reconstruct the feature pyramid structure to enhance the detection accuracy of multi-scale targets. Secondly, a lightweight convolution module (GSConv) is introduced to construct a multi-scale efficient layer aggregation network module (ETG), enhancing the network's feature extraction ability while keeping the model lightweight. Thirdly, multi-attention mechanisms are integrated to optimize the feature expression ability of occluded targets in complex scenarios. Finally, Wise-IoU with a dynamic non-monotonic focusing mechanism improves the accuracy and generalization ability of model sensing. Compared with YOLOv7-tiny, PDT-YOLO improves mAP50 and mAP50:95 by 4.6% and 12.8% on the DAIR-V2X-C dataset, with a parameter count of 6.1 million, and by 15.7% and 11.1% on the IVODC dataset. We deployed PDT-YOLO in an actual traffic environment based on the robot operating system (ROS), achieving a detection frame rate of 90 FPS, which can meet the needs of roadside object detection and edge deployment in complex traffic scenes.
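Wise-IoU, like the other IoU-family losses, builds penalty and focusing terms on top of the plain intersection-over-union between predicted and ground-truth boxes. For reference, the base quantity looks like this (a generic sketch, not PDT-YOLO code):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).

    This is the base overlap measure that IoU-family losses such as
    Wise-IoU extend with distance penalties and focusing weights.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Wise-IoU's dynamic non-monotonic focusing mechanism then re-weights each box's gradient according to how anomalous its IoU is relative to the running average, so neither the easiest nor the hardest examples dominate training.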
Robotic odor source localization (OSL) is a technology that enables mobile robots or autonomous vehicles to find an odor source in unknown environments. An effective navigation algorithm that guides the robot to approach the odor source is the key to successfully locating the odor source. While traditional OSL approaches primarily utilize an olfaction-only strategy, guiding robots to find the odor source by tracing emitted odor plumes, our work introduces a fusion navigation algorithm that combines both vision and olfaction-based techniques. This hybrid approach addresses challenges such as turbulent airflow, which disrupts olfaction sensing, and physical obstacles inside the search area, which may impede vision detection. In this work, we propose a hierarchical control mechanism that dynamically shifts the robot's search behavior among four strategies: Crosswind Maneuver, Obstacle-Avoid Navigation, Vision-Based Navigation, and Olfaction-Based Navigation. Our methodology includes a custom-trained deep-learning model for visual target detection and a moth-inspired algorithm for Olfaction-Based Navigation. To assess the effectiveness of our approach, we implemented the proposed algorithm on a mobile robot in a search environment with obstacles. Experimental results demonstrate that our Vision and Olfaction Fusion algorithm significantly outperforms vision-only and olfaction-only methods, reducing average search time by 54% and 30%, respectively.
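A hierarchical behavior switch of this kind reduces to a priority-ordered decision over the current sensor cues. The sketch below is our reading of the four-strategy hierarchy (the priority order and flag names are assumptions; the paper's arbitration logic may differ):

```python
def select_behavior(obstacle_near, target_visible, odor_detected):
    """Priority-ordered behavior arbitration for the four strategies.

    Safety comes first (obstacle avoidance), then the richer cue
    (vision), then olfaction, falling back to a crosswind maneuver to
    reacquire the plume when no cue is available.
    """
    if obstacle_near:
        return "obstacle_avoid"
    if target_visible:
        return "vision_based"
    if odor_detected:
        return "olfaction_based"
    return "crosswind_maneuver"
```

Evaluating the flags top-down every control cycle is what makes the switching "dynamic": the robot drops out of vision-based pursuit the instant an obstacle intervenes, and resumes plume tracing when the visual target is lost.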
In a vision-based real-time detection system using computer vision, the most important consideration is computation time. In general, a detection system has a heavy algorithm that puts a strain on the performance of a computer system, especially if the computer has to handle two or more different detection processes. This paper presents an effort to improve the performance of the trash detection system and the target partner detection system of a trash bin robot with social interaction capabilities. The trash detection system uses a combination of the Haar Cascade algorithm, Histogram of Oriented Gradients (HOG), and Gray-Level Co-occurrence Matrix (GLCM). Meanwhile, the target partner detection system uses a combination of depth and Histogram of Oriented Gradients (HOG) algorithms. The Robot Operating System (ROS) is used to place each system in a separate module, aiming to utilize all available computer system resources while reducing computation time. As a result, on the ROS platform the trash detection system is capable of running at 7.003 fps and the human target detection system at 8.515 fps. In line with the increase in fps, the trash detection system's accuracy increases to 77%, precision to 87.80%, recall to 82.75%, and F1-score to 85.20%, while the human target detection system improves to 81% accuracy, 91.46% precision, 86.20% recall, and 88.42% F1-score.
ROS Introduction for Control Engineer. Tokuyama, Kyota; Kinouchi, Yusuke. Systems, Control and Information, Volume 61, Issue 10, 2017/10/15. Journal Article.
Safety and resiliency are essential components of autonomous vehicles. In this research, we introduce ROSFI, the first robot operating system (ROS) resilience analysis methodology, to assess the effect of silent data corruption (SDC) on mission metrics. We use unmanned aerial vehicles (UAVs) as a case study to demonstrate that system-level parameters, such as flight time and success rate, are necessary for accurately measuring system resilience. We demonstrate that downstream ROS tasks such as planning and control are more susceptible to SDCs than the visual perception stage in the perception-planning-control (PPC) compute pipeline. This observation only becomes apparent when we consider the complete end-to-end system-level pipeline, as opposed to isolated compute kernels, as previous work does. To enhance the safety and robustness of robot systems bound by size, weight, and power (SWaP), we offer two low-overhead anomaly-based SDC detection and recovery algorithms, based on Gaussian statistical models and autoencoder neural networks. Our anomaly error protection techniques are validated in numerous simulated environments. We demonstrate that the autoencoder-based technique can recover from all failure cases in our studied scenarios with a computational overhead of no more than 0.0062%. Finally, our open-source methodology can be utilized to comprehensively test the robustness of other ROS-based applications. It is available for public download at https://github.com/harvard-edge/MAVBench/tree/mavfi.
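The Gaussian-statistical variant of such anomaly detection can be illustrated in a few lines: fit a mean and standard deviation on nominal signal values, then flag samples far outside that band as SDC candidates. This is a generic sketch of the idea (class name, threshold, and fitting procedure are our assumptions, not ROSFI's implementation):

```python
import statistics

class GaussianSdcDetector:
    """Flag SDC-like outliers in a scalar signal with a Gaussian model.

    Fit on nominal (fault-free) samples of some pipeline value, e.g. a
    velocity command; at runtime, flag any sample more than k standard
    deviations from the nominal mean as a corruption candidate.
    """
    def __init__(self, nominal, k=3.0):
        self.mu = statistics.mean(nominal)
        self.sigma = statistics.stdev(nominal)
        self.k = k

    def is_anomalous(self, x):
        return abs(x - self.mu) > self.k * self.sigma
```

A recovery policy can then re-issue the last known-good value whenever a sample is flagged; the autoencoder variant generalizes the same detect-and-replace pattern to multi-dimensional signals with non-Gaussian structure.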