We present a method for implementing hardware accelerators for intelligent processing on domestic service robots. Such robots support human life and therefore must recognize their environments using intelligent processing, which demands large computational resources; the standard personal computers (PCs) running robot middleware on these robots lack sufficient resources for this processing. We propose a 'connective object for middleware to an accelerator (COMTA),' a system that integrates hardware accelerators for intelligent processing with robot middleware. Field-programmable gate arrays (FPGAs) accelerate the intelligent processing by implementing dedicated digital circuit architectures. In addition, the system can configure and access applications on the hardware accelerators through the robot middleware space; consequently, robotic engineers do not need knowledge of FPGAs. We evaluated the proposed system with a human-following application based on image processing, which is commonly used on such robots. Experimental results demonstrated that the proposed system can be constructed automatically from a single configuration file on the robot middleware and can execute the application 5.2 times more efficiently than an ordinary PC.
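The idea of constructing the whole accelerator binding from a single middleware-side configuration file can be sketched as follows. The file format, section names, keys, and topic names below are hypothetical illustrations of the pattern, not the actual COMTA format.

```python
# Hypothetical sketch: a single config file declares the FPGA device, its
# bitstream, and the mapping from accelerator ports to middleware topics.
import configparser

COMTA_CONF = """
[accelerator]
device = fpga0
bitstream = human_following.bit

[ports]
image_in = /camera/image_raw
result_out = /following/target_pose
"""

def parse_comta_config(text):
    """Parse the config into (device, bitstream, {port: topic})."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return (cfg["accelerator"]["device"],
            cfg["accelerator"]["bitstream"],
            dict(cfg["ports"]))

device, bitstream, ports = parse_comta_config(COMTA_CONF)
```

A middleware launcher could read such a file once and wire accelerator ports to topics without the engineer touching any FPGA tooling.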
Non-contact, remote sensing approaches to measuring flow velocities in river channels are widely used, but typical workflows involve acquiring images in the field and then processing data later in the office. To reduce latency between acquisition and output, with the ultimate goal of enabling real-time image velocimetry, we developed a Robot Operating System (ROS) package for Particle Image Velocimetry (PIV) that can be deployed on an embedded computer aboard an uncrewed aircraft system (UAS). The ROSPIV package consists of a series of nodes that can be run in parallel and comprise an end-to-end PIV workflow. Software development involved converting MATLAB code to C++, organizing files within a catkin workspace, and building nodes using catkin_make. The codebase is available via a repository that includes a user’s guide and demo script. This paper describes the nodes in the ROSPIV package as well as functions for preparing inputs, facilitating code generation, and visualizing PIV output. To illustrate the application of the software, we present two examples, one based on a simulated image sequence and the other based on data acquired from a UAS. For the simulated data, the velocity field derived via the ROSPIV package closely matched the known flow field used to generate the image sequence. Using real data as input demonstrated the ability of the ROSPIV package to ingest and pre-process raw images. Our initial results suggest that the ROSPIV package could become a viable approach for mapping river surface velocities in real time.
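The core PIV step inside such a workflow is finding the displacement of a small interrogation window between two frames by cross-correlation. The following is a minimal pure-Python sketch of that idea, not the optimized C++ implementation in the ROSPIV nodes; frame contents and window sizes are illustrative.

```python
# Brute-force cross-correlation: find the (dy, dx) shift of frame_b relative
# to frame_a that maximizes the overlap score.
def cross_correlate_shift(frame_a, frame_b, max_shift=2):
    h, w = len(frame_a), len(frame_a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += frame_a[y][x] * frame_b[yy][xx]
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# A bright particle at (row 1, col 1) in frame A moves to (row 2, col 3) in B.
A = [[0] * 5 for _ in range(5)]; A[1][1] = 1
B = [[0] * 5 for _ in range(5)]; B[2][3] = 1
```

Dividing each frame into many such windows and repeating this search yields the dense velocity field that the package maps over the river surface.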
Underwater pipeline inspection is an important topic in offshore subsea operations. Remotely operated vehicles (ROVs) play an important role in multiple application areas, including military, ocean science, aquaculture, shipping, and energy. However, using ROVs for inspection is not cost-effective, and the fixed leak-detection sensors mounted along the pipeline have limited precision. Although costs can be reduced significantly by applying autonomous underwater vehicles (AUVs), unstable currents, low visibility, and the loss of GPS signals make underwater navigation of AUVs very challenging. Previous studies have investigated coordinate-based, vision-based, and fusion-based navigation algorithms; however, coordinate-based algorithms suffer from GPS denial, while vision-based methods typically rely on terrain and landscape knowledge that must be collected before the mission. To address these issues, this paper presents a navigation system for an AUV that incorporates vision and sonar sensors. In a ROS/Gazebo-based simulation environment, the AUV was able to find and navigate toward the pipeline and continuously traverse along its length. Additionally, with a chemical concentration sensor mounted on the AUV, the system demonstrated the capability of inspecting the pipeline and reporting the leak point with a resolution of 3 meters along the pipeline.
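One way to report a leak point at a fixed resolution along the pipeline is to bin concentration samples by distance travelled and return the bin with the highest mean reading. This is a hedged sketch of that pattern, assuming a 3 m bin size to match the reported resolution; the sample data and the paper's actual detection logic are not from the source.

```python
# Bin (distance, concentration) samples into 3 m segments along the pipeline
# and report the start of the segment with the highest mean concentration.
def locate_leak(samples, bin_size=3.0):
    bins = {}
    for dist, conc in samples:
        key = int(dist // bin_size)
        bins.setdefault(key, []).append(conc)
    best = max(bins, key=lambda k: sum(bins[k]) / len(bins[k]))
    return best * bin_size

# Illustrative readings: concentration spikes between 6 m and 9 m.
readings = [(0.5, 0.1), (2.0, 0.2), (4.1, 0.3),
            (7.3, 2.5), (8.9, 2.1), (10.2, 0.4)]
leak_at = locate_leak(readings)
```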
Autonomous and semiautonomous mobile robots play an important role in performing tasks in environments considered hostile or dangerous for a human being. To execute many of the required tasks, robots need, in their architecture, a navigation module with an appropriate path planning algorithm. This paper presents the development and implementation of a methodology for path planning of a mobile robot using a spherical algorithm and homotopy continuation methods (HCMs). The first section is a brief introduction to HCMs. Subsequently, the homotopy path planning method and the spherical path tracking algorithm are explained, as well as the upgraded version and its main features. Then, the main contributions of this paper are presented and the effectiveness of the proposed method is proved. In addition, numerical examples and implementation results on multiple platforms, including a 32-bit microcontroller and the Robot Operating System, are displayed. Finally, a comparison of the proposed methodology against sampling-based planners using the Open Motion Planning Library shows favorable results.
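The basic mechanism behind a homotopy continuation method can be illustrated on a scalar root-finding toy problem: deform an easy problem g(x) = x - x0 into the target f(x) via H(x, t) = (1 - t) g(x) + t f(x), and track the root with Newton corrections as t goes from 0 to 1. This is a generic HCM sketch, not the paper's spherical path planning algorithm, which embeds the same idea in a higher-dimensional planning space.

```python
# Track the root of H(x, t) = (1 - t)(x - x0) + t * f(x) from t = 0 to t = 1,
# correcting with a few Newton iterations at each continuation step.
def hcm_root(f, df, x0, steps=50, newton_iters=3):
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1 - t) * (x - x0) + t * f(x)   # H(x, t)
            dh = (1 - t) + t * df(x)            # dH/dx
            x -= h / dh
    return x

# Example: find the positive root of f(x) = x^2 - 2, starting from x0 = 1.
root = hcm_root(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The continuation steps keep each Newton solve close to its target, which is what makes HCMs robust where plain Newton iteration from a poor initial guess can diverge.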
Autonomous mobile robots (AMRs) have transformed many aspects of daily life and manufacturing services. To enhance their efficiency, productivity, and safety, AMRs are equipped with advanced capabilities such as object detection and tracking, localization, collision-free navigation, and decision-making. Among the supporting sensors, 2-D light detection and ranging (LiDAR) stands out as the prevailing choice, with remarkable accomplishments in practice. The precision of the aforementioned modules depends on the accuracy of the 2-D LiDAR data. Typically, 2-D LiDAR intrinsic parameters are adequately calibrated during manufacturing, whereas the extrinsic parameters must be calibrated by the user at the application level. Previous research has predominantly emphasized extrinsic calibration for sensor fusion, given its perceived appeal over calibrating a single 2-D LiDAR alone. However, a multisensor system usually offers more favorable geometric constraints between the different sensor datasets, whereas a 2-D LiDAR alone provides only position information in a 2-D horizontal plane, resulting in fewer features or constraints. Moreover, in multisensor calibration, directly expressing the observed data in the robot base coordinates is often overlooked, despite being necessary for AMR applications. This article presents an extrinsic calibration of a single 2-D LiDAR directly in the AMR's base coordinates, which ensures accuracy together with easy tool installation and fast, simple collection of data samples, without support from other sensors. The proposed method has been verified through both simulation and real experiments.
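The quantity such a calibration estimates is the planar extrinsic (tx, ty, theta) that maps scan points from the LiDAR frame into the robot base frame. Applying it is a standard 2-D rigid transform, sketched below with illustrative mounting values; the estimation procedure itself is the article's contribution and is not reproduced here.

```python
# Map (x, y) scan points from the LiDAR frame to the robot base frame using
# the extrinsic parameters (tx, ty, theta).
import math

def lidar_to_base(points, tx, ty, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Illustrative mounting: LiDAR 0.2 m ahead of the base origin, yawed 90 degrees.
pts_base = lidar_to_base([(1.0, 0.0)], tx=0.2, ty=0.0, theta=math.pi / 2)
```

Any error in (tx, ty, theta) propagates directly into every transformed point, which is why downstream modules such as SLAM and obstacle avoidance depend on this calibration.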
In recent years, population aging has become increasingly serious. With aging, the elderly often face problems such as slow walking, unstable or weak limbs, and even fall-related injuries, so developing assistive devices is very important. In this study, a fuzzy-controller-based smart walker with a distributed Robot Operating System (ROS) framework is designed to assist independent walking. A combination of a Raspberry Pi and a PIC microcontroller acts as the control kernel of the proposed device. In addition, environmental information and user postures can be recognized through integrated sensors; the sensing data include the road slope, the walker's velocity, and the user's grip forces. Based on these data, the fuzzy controller produces an assistive force that makes the walker move more smoothly and safely. Apart from this, a mobile application (app) is designed that allows the user's guardian to view the current status of the smart walker and to track the user's location.
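A fuzzy controller of this kind fuzzifies sensor readings through membership functions, evaluates rules, and defuzzifies to a crisp assist force. The two-rule sketch below, using triangular memberships over road slope only, is a hypothetical minimal illustration; the paper's controller uses richer inputs (velocity, grip forces) and a larger rule base.

```python
# Two-rule fuzzy controller: flat slope -> low assist, steep slope -> high assist.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assist_force(slope_deg):
    flat = tri(slope_deg, -5, 0, 5)     # rule 1: IF flat THEN assist = 0 N
    steep = tri(slope_deg, 0, 10, 20)   # rule 2: IF steep THEN assist = 10 N
    total = flat + steep
    # Weighted-average defuzzification of the two rule outputs.
    return (flat * 0.0 + steep * 10.0) / total if total else 0.0
```

The weighted-average defuzzification blends the rules smoothly, so the assist force ramps up gradually as the slope steepens rather than switching abruptly.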
Toward Campus Mail Delivery Using BDI. Onyedinma, Chidiebere; Gavigan, Patrick; Esfandiari, Babak. Journal of Sensor and Actuator Networks, vol. 9, no. 4, December 2020. Journal article; peer reviewed; open access.
Autonomous systems developed with the Belief-Desire-Intention (BDI) architecture tend to be implemented mostly in simulated environments. In this project, we sought to build a BDI agent for real-world campus mail delivery in the tunnel system at Carleton University. Ideally, the robot should receive a delivery order via a mobile application, pick up the mail at a station, navigate the tunnels to the destination station, and notify the recipient. In this paper, we discuss how we linked the Robot Operating System (ROS) with a BDI reasoning system to achieve a subset of the required use cases. ROS handles the connections to the low-level sensors and actuators, while the BDI reasoning system handles the high-level reasoning and decision making. Sensory data are sent to the reasoning system as perceptions using ROS. These perceptions are then deliberated upon, and an action string is sent back to ROS, where it is interpreted and the necessary actuator is driven to perform the action. We present our current implementation, which closes the loop on the hardware-software integration and implements a subset of the use cases required for the full system, and we demonstrate its performance in an analogue environment.
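The perception-string/action-string contract between ROS and the BDI reasoner can be sketched as a simple loop. The string formats and rules below are hypothetical illustrations of the pattern, not the authors' actual encoding or reasoning engine.

```python
# Stand-in for the BDI deliberation cycle: perception strings in, one action
# string out. A real BDI engine maintains beliefs, desires, and intentions;
# this toy maps beliefs straight to actions.
def deliberate(perceptions):
    beliefs = set(perceptions)
    if "at(station_a)" in beliefs and "has(mail)" in beliefs:
        return "navigate(station_b)"    # intention: deliver the mail
    if "at(station_b)" in beliefs and "has(mail)" in beliefs:
        return "notify(recipient)"      # intention: complete the delivery
    return "wait"

# ROS side would publish perceptions and subscribe to the returned action.
action = deliberate(["at(station_a)", "has(mail)"])
```

Keeping the interface to plain strings is what lets the low-level ROS layer and the high-level reasoner evolve independently.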
The aim of this study is to develop an autonomous mobile robot (AMR) for our demonstration factory that incorporates Google's Cartographer algorithm with a linear quadratic Gaussian (LQG) control model and provides safe navigation with obstacle avoidance for the delivery of recycled metal shavings (RMS) using this Cartographer-LQG method. The originality of this study lies in integrating Google's Cartographer algorithm with the LQG model, which improves the accuracy and stability of our AMR-RMS. The method offers users a reliable way to calibrate their mobile robots and constructs a grid map with loop closure to automate navigation. The suggested approach increases the stability of the electro-mechanical modules and lowers the cumulative error of simultaneous localization and mapping (SLAM). This study compares the SLAM results from the Gmapping, Hector, and Cartographer algorithms, suggesting that the Cartographer-LQG method can provide a map with loop closure and accurate information for autopiloting the AMR-RMS.
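The feedback half of an LQG controller is an LQR gain, obtainable by iterating the discrete Riccati equation to steady state. The scalar sketch below illustrates that computation; the paper's AMR model is multivariate and its weights are not given here, so a = b = q = r = 1 is purely illustrative.

```python
# Steady-state LQR gain for the scalar system x[k+1] = a*x[k] + b*u[k]
# with stage cost q*x^2 + r*u^2, via fixed-point iteration of the
# discrete algebraic Riccati equation.
def lqr_gain(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # feedback gain at this iterate
        p = q + a * p * (a - b * k)         # Riccati update
    return (b * p * a) / (r + b * p * b)

K = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
# Control law: u = -K * x drives the state to zero at minimum quadratic cost.
```

For these values the iteration converges to K = (sqrt(5) - 1) / 2, giving a stable closed loop a - b*K below 1 in magnitude.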
This paper presents the development of an autonomous mobile robot that can recognize traffic signs and move according to the recognized sign. The mobile robot was developed on the Duckietown platform, a research platform focused on self-driving vehicles. The robot was equipped with a fisheye camera as its vision device and a Raspberry Pi 3 controller module as the main controller, which performs the image processing needed to recognize the signs and controls the movement of the robot. The software runs on the ROS platform. The system was tested by driving the mobile robot through the entire test area, achieving an average recognition rate of 80%. The system was also tested under varying light intensity, again achieving an average recognition rate of 80%. A further test added a distractor object with a color similar to or the same as the sign color, and the average recognition rate dropped to 43.3%.
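The sensitivity to same-colored distractors suggests a color-threshold segmentation stage, which any similarly colored object can trigger. The sketch below shows such a stage in its simplest form; the thresholds, the flat pixel list standing in for an image, and the use of RGB rather than another color space are all illustrative assumptions, not Duckietown's actual pipeline.

```python
# Fraction of pixels passing a naive "red sign" threshold. A red distractor
# raises this ratio just like a real sign would, illustrating why the
# recognition rate drops when such objects are added.
def red_mask_ratio(image, r_min=150, g_max=80, b_max=80):
    hits = sum(1 for r, g, b in image
               if r >= r_min and g <= g_max and b <= b_max)
    return hits / len(image)

# Tiny illustrative "image": two reddish pixels, one gray, one green.
img = [(200, 30, 30), (210, 40, 20), (90, 90, 90), (30, 200, 30)]
ratio = red_mask_ratio(img)
```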