Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. In this study, a deep reinforcement learning (DRL)-based assistive guiding robot with ultra-wideband (UWB) beacons was designed to navigate routes with designated waypoints. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. Combined with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. A handle device with an audio interface was also designed so that BVI users can interact with the guiding robot through intuitive feedback, and the UWB beacons were equipped with audio interfaces to provide environmental information. The on-handle and on-beacon verbal feedback gives BVI users points of interest and turn-by-turn information. BVI users were recruited to conduct navigation tasks in different scenarios, including a route in a simulated ward representing daily activities. In real-world situations, SLAM-based state estimation may be affected by dynamic obstacles, and vision-based trails may suffer from occlusion by pedestrians or other obstacles. The proposed system successfully navigated environments with dynamic pedestrians in which systems based on existing SLAM algorithms failed.
In this paper, we design a blind-guiding robot that imitates the biological guide dog, and propose a smooth turning and obstacle-avoidance control method suited to leading blind people. Current guiding robots generally do not consider the blind user's experience while being led. A biological guide dog pulls the blind person forward from a position ahead and to one side, so the effect of this relative position on the blind person's speed must be considered in the robot's traction control. Likewise, during obstacle avoidance, a biological guide dog chooses its avoidance maneuver according to the blind person's position. This paper therefore decomposes the speed relationship between the blind person and the robot by computing their relative position, and designs a perceptual obstacle-avoidance system for complex environments. Finally, system simulation experiments in the Matlab-Simulink environment demonstrate that the proposed control method effectively improves the safety of the blind person while being led.
With the continuous increase in the number of visually impaired individuals, addressing the mobility challenges faced by this group has become urgent. A blind-guiding robot based on speed adaptation and visual recognition was designed to address this problem. Speed adaptation between the robot and the blind person is achieved through feedback control of distance and speed. Traffic signals are identified using an optimized visual-recognition method based on YOLOv5 transfer learning, and human-machine interaction is realized through multiple modules such as real-time imaging, speech, and positioning. The experimental results show that the deviation of the human-robot distance from the set distance was kept within 13.1%, the relative velocity deviation was kept within 0.3 m/s, and traffic signals were identified with 91.88% accuracy. When the human-robot distance gap is large, the robot can restore the distance to the set value within 0.7 s, which effectively ensures the travel safety of blind people and lays the groundwork for the practical application of blind-guiding robots.
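The distance-and-speed feedback loop described above can be sketched as a simple proportional controller. The gain, set distance, and speed limit below are illustrative assumptions, not values from the paper:

```python
def robot_speed(person_speed, distance, set_distance=1.0, kp=1.5, v_max=1.5):
    """Adapt the guiding robot's speed so the human-robot distance
    converges to set_distance (gain kp and limits are hypothetical).

    person_speed: current walking speed of the user (m/s)
    distance:     measured human-robot distance (m)
    """
    error = distance - set_distance  # positive: the user has fallen behind
    v = person_speed - kp * error    # slow down to let the user catch up
    return max(0.0, min(v, v_max))   # clamp to the robot's speed range
```

With a proportional term of this form, a large distance gap produces a proportionally large corrective speed change, consistent with the reported ability to restore the set distance quickly.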
Guiding robots, in the form of canes or cars, have recently been explored to assist blind and low-vision (BLV) people. Such robots can provide full or partial autonomy when guiding. However, the pros and cons of different forms and autonomy levels for guiding robots remain unknown. We sought to fill this gap. We designed an autonomy-switchable guiding robotic cane and car, and conducted a controlled lab study (N=12) and a field study (N=9) with BLV participants. Results showed that full autonomy yielded better walking performance and subjective ratings in the controlled study, whereas participants used partial autonomy more in the natural environment because they wanted more control. In addition, the car robot demonstrated a higher sense of safety and navigation efficiency than the cane robot. Our findings offer empirical evidence about how the BLV community perceives different machine forms and autonomy levels, which can inform the design of assistive robots.
When the Dynamic Window Approach (DWA) is used for obstacle avoidance in blind-guiding robots, the conflict between the heading and velocity evaluation factors is not considered, leading to poorly chosen trajectories under certain road conditions, untimely collision avoidance, frequent direction changes, and time-consuming planning. To balance the original three evaluation factors, an evaluation factor for the change of orientation is introduced into the path-evaluation function; it suppresses the excessive influence of any single factor under specific circumstances and reduces unnecessary steering by the robot. Experiments reveal that the runtime of path planning with the improved algorithm is reduced by 45.37% on average compared with the standard DWA algorithm, and the planned path avoids obstacles earlier with a smaller, continuous curvature. The improved algorithm thus produces a smoother trajectory and more timely collision avoidance, meeting the comfort requirements of blind-guiding robot users.
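A minimal sketch of such an augmented DWA objective is shown below. The weights, the trajectory representation, and the exact form of the fourth orientation-change term are illustrative assumptions, not the paper's formulation:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def score(traj, goal, obstacles, alpha=0.8, beta=0.1, gamma=0.1, delta=0.2):
    """DWA-style trajectory evaluation with an extra orientation-change penalty.

    traj: dict with endpoint pose x, y, theta, speed v, and the robot's
          current heading theta0 (hypothetical representation).
    """
    # heading term: reward alignment of the endpoint heading with the goal
    to_goal = math.atan2(goal[1] - traj["y"], goal[0] - traj["x"])
    heading = math.pi - abs(wrap(to_goal - traj["theta"]))
    # distance term: clearance to the nearest obstacle
    clearance = min(math.hypot(ox - traj["x"], oy - traj["y"])
                    for ox, oy in obstacles)
    # velocity term: prefer faster forward motion
    velocity = traj["v"]
    # added term: penalize steering away from the current heading, so no
    # single factor dominates and steering frequency drops
    turn = abs(wrap(traj["theta"] - traj["theta0"]))
    return alpha * heading + beta * clearance + gamma * velocity - delta * turn
```

Among otherwise similar candidates, a trajectory that keeps the current heading now outscores one that turns sharply, which is the smoothing effect attributed to the added evaluation factor.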
Tasks that involve locating objects and then moving the hands to those specific locations, such as using touchscreens or grabbing objects on a desk, are challenging for the visually impaired. Over the years, audio guidance and haptic feedback have been staples of hand-navigation-based assistive technologies. However, these methods require the user to interpret the generated directional cues and then manually perform the hand motions. In this paper, we present automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically, without any manual effort. We implement this concept through FingerRover, an on-finger miniature robot that carries the user's finger to target points. We demonstrate potential applications that can benefit from automated hand-based spatial guidance. Our user study shows the potential of our technique to improve the interaction capabilities of people with visual impairments.
This paper proposes a computer vision method for guiding a robot to greet and guide guests. Guests' locations are acquired through face detection and used to control the robot. To reduce the search region, an optical-flow algorithm is used to segment the image in advance. Asymmetry problems in face detection are explained, and corresponding solutions are proposed using a bootstrapping strategy and an asymmetric AdaBoost algorithm. In addition, Fisher discriminant analysis further improves face-detection performance, and multi-view face models are trained to handle practical face-detection applications. Finally, experiments demonstrate that our multi-view face detector achieves high detection accuracy and fast detection speed on both standard test datasets and real-life images.
In this paper we propose a case-based reasoning (CBR) mechanism with an improved vector space model. To improve the accuracy of case retrieval, we present improved algorithms for text feature-weight calculation and Chinese word-order calculation. A supermarket guiding robot is designed as an example to verify the reasoning engine. Experiments show that, compared with a traditional CBR engine using the nearest-neighbor algorithm, the improved CBR engine yields more accurate and reasonable results and better meets customers' demands.
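The retrieval step in such a vector-space CBR engine can be sketched as nearest-neighbor search under a feature-weighted cosine similarity. The weighting scheme below is a generic stand-in, not the paper's exact feature-weight or word-order formulas:

```python
import math
from collections import Counter

def weighted_cosine(query_terms, case_terms, weights):
    """Cosine similarity between a query and a stored case, with per-term
    feature weights (illustrative; unweighted terms default to 1.0)."""
    q, c = Counter(query_terms), Counter(case_terms)
    dot = sum((weights.get(t, 1.0) ** 2) * q[t] * c[t] for t in q if t in c)
    qn = math.sqrt(sum((weights.get(t, 1.0) * n) ** 2 for t, n in q.items()))
    cn = math.sqrt(sum((weights.get(t, 1.0) * n) ** 2 for t, n in c.items()))
    return dot / (qn * cn) if qn and cn else 0.0

def retrieve(query_terms, cases, weights):
    """Return the stored case most similar to the query, i.e. nearest
    neighbor over the weighted term vectors."""
    return max(cases, key=lambda c: weighted_cosine(query_terms, c, weights))
```

Raising the weight of a discriminative term biases retrieval toward cases that share it, which is the intended effect of an improved feature-weight calculation over plain nearest-neighbor matching.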