The development of navigation tools for people who are visually impaired has become an important concern in the research area of assistive technologies. This paper gives a comprehensive review of different articles published in the area of navigation solutions for people who are visually impaired. Unlike other review papers, this review considers major solutions that work in indoor and/or outdoor environments and that are based on different technologies. From the review, it became clear that the navigation systems proposed for the target users lack some core features that are quite important for independent navigation. There are also instances in which humanitarian conditions have to be considered in navigation system design. Based on these findings, a set of recommendations is also given that can be considered in the future design of navigation systems for blind and visually impaired people.
The growing aging population suffers from high levels of vision and cognitive impairment, often resulting in a loss of independence. Such individuals must perform crucial everyday tasks such as cooking and heating with systems and devices designed for visually unimpaired individuals, which do not take into account the needs of persons with visual and cognitive impairment. Thus, visually impaired persons using them run risks related to smoke and fire. In this paper, we propose a vision-based fire detection and notification system using smart glasses and deep learning models for blind and visually impaired (BVI) people. The system enables early detection of fires in indoor environments. To perform real-time fire detection and notification, the proposed system uses image brightness and a new convolutional neural network employing an improved YOLOv4 model with a convolutional block attention module. The h-swish activation function is used to reduce the running time and increase the robustness of YOLOv4. We adapt our previously developed smart glasses system to capture images and inform BVI people about fires and other surrounding objects through auditory messages. We create a large fire image dataset with indoor fire scenes to accurately detect fires. Furthermore, we develop an object mapping approach to provide BVI people with complete information about surrounding objects and to differentiate between hazardous and nonhazardous fires. The proposed system shows an improvement over other well-known approaches in all fire detection metrics, such as precision, recall, and average precision.
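The h-swish activation mentioned above has a standard closed form, x · ReLU6(x + 3) / 6, which replaces the sigmoid in swish with a cheap piecewise-linear approximation. A minimal sketch in plain Python (the system itself would presumably use a deep learning framework's built-in implementation):

```python
def relu6(x):
    # ReLU6: clamp x to the range [0, 6]
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    # Hard swish: x * ReLU6(x + 3) / 6
    # Piecewise-linear, so it avoids the exponential in the
    # original swish (x * sigmoid(x)) and runs faster on edge devices.
    return x * relu6(x + 3.0) / 6.0
```

Because h-swish is exactly linear for x ≥ 3 and exactly zero for x ≤ -3, it is well suited to low-power hardware such as smart glasses, which is consistent with the running-time motivation given above.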
This paper describes the interface and testing of an indoor navigation app - ASSIST - that guides blind and visually impaired (BVI) individuals through an indoor environment with high accuracy while augmenting their understanding of the surrounding environment. ASSIST features personalized interfaces by considering the unique experiences that BVI individuals have in indoor wayfinding and offers multiple levels of multimodal feedback. After an overview of the technical approach and implementation of the first prototype of the ASSIST system, the results of two pilot studies performed with BVI individuals are presented - a performance study to collect data on mobility (walking speed, collisions, and navigation errors) while using the app, and a usability study to collect user evaluation data on the perceived helpfulness, safety, ease-of-use, and overall experience while using the app. Our studies show that ASSIST is useful in providing users with navigational guidance, improving their efficiency and (more significantly) their safety and accuracy in wayfinding indoors. Findings and user feedback from the studies confirm some of the previous results, while also providing new insights into the creation of such an app, including the use of customized user interfaces and expanding the types of information provided.
Early fire detection and notification techniques provide fire prevention and safety information to blind and visually impaired (BVI) people within a short period of time in emergency situations when fires occur in indoor environments. Given its direct impact on human safety and the environment, fire detection is a difficult but crucial problem. To prevent injuries and property damage, advanced technology requires appropriate methods for detecting fires as quickly as possible. In this study, to reduce the loss of human lives and property damage, we introduce a vision-based early flame recognition and notification approach using artificial intelligence for assisting BVI people. The proposed fire alarm control system for indoor buildings can provide accurate information on fire scenes. In our proposed method, all the processes previously performed manually were automated, and the performance efficiency and quality of fire classification were improved. To perform real-time monitoring and enhance the detection accuracy of indoor fire disasters, the proposed system uses the YOLOv5m model, an updated version of the traditional YOLOv5. The experimental results show that the proposed system successfully detected and notified the occurrence of catastrophic fires with high speed and accuracy at any time of day or night, regardless of the shape or size of the fire. Finally, we compared the competitiveness of our method with that of other conventional fire-detection methods to confirm the classification results using performance evaluation metrics.
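The evaluation metrics named in these detection abstracts (precision, recall, average precision) follow standard definitions over true/false positives and false negatives. A minimal sketch of the two count-based metrics; the exact evaluation protocol of the papers above (IoU thresholds, AP interpolation scheme) is not specified here:

```python
def precision_recall(tp, fp, fn):
    # Precision = TP / (TP + FP): fraction of raised alarms that were real fires.
    # Recall    = TP / (TP + FN): fraction of real fires that raised an alarm.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For a safety-critical alarm system, recall is usually weighted more heavily than precision, since a missed fire (false negative) is far costlier than a spurious notification.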
Over the last few decades, developing smart and intelligent guidance mechanisms for the indoor and outdoor navigation of blind and visually impaired people (BVIPs) has become a challenging task for researchers in the field of navigation and routing devices. The existing research needs to be analysed from a historical perspective, from early research on the first electronic travel aids to the use of modern artificial vision models for the navigation of BVIPs. Diverse approaches, such as the e-cane or guide dog, infrared-based canes, laser-based walkers, and many others, have been proposed for the navigation of BVIPs. However, most of these techniques have limitations: infrared- and ultrasonic-based assistance has a short range for object detection, while laser-based assistance can harm other people if the beam directly hits their eyes or another part of the body. These trade-offs are critical for bringing this technology into practice. This work systematically assesses, analyzes, and identifies the primary studies in this specialized field and provides an overview of the trends and empirical evidence. The systematic research was performed by defining a set of relevant keywords, formulating four research questions, defining selection criteria for the articles, and synthesizing the empirical evidence in this area. Our pool of studies includes the 191 articles most relevant to the field, reported between 2011 and 2020 (a portion of 2020 is included). This systematic mapping will help researchers, engineers, and practitioners make more informed decisions for finding gaps in the available navigation assistants and suggest new and enhanced smart assistant applications accordingly, to ensure the safety and accurate guidance of BVIPs. This research has several implications, in particular the impact of reducing fatalities and major injuries among BVIPs.
Not only has information technology evolved rapidly, but the spatial cognition theory for blind and visually impaired (BVI) people has also made great strides, which has opened up a new opportunity for indoor travel assistance systems (ITASs). However, some issues have still not been effectively addressed due to a lack of guidance from spatial cognition theory. Thus, this article presents a comparative survey of ITASs proposed in the last four years, in an effort to inform researchers and developers about system problems and challenges and to inform BVI people about the various types and functions of ITASs. This article will also make researchers and developers aware of the importance of spatial cognition theory. Furthermore, we give predictions for future trends based on a detailed analysis of 17 ITASs.
Visually impaired individuals often rely on assistive technologies such as white canes for independent navigation. Many electronic enhancements to the traditional white cane have been proposed. However, only a few of these proof-of-concept technologies have been tested with authentic users, as most studies rely on blindfolded non-visually impaired participants or no testing with participants at all. Experiments involving blind users are usually not contrasted with the traditional white cane. This study set out to compare an ultrasound-based electronic cane with a traditional white cane. Moreover, we compared the performance of a group of visually impaired participants (N = 10) with a group of blindfolded participants without visual impairments (N = 31). The results show that walking speed with the electronic cane is significantly slower compared to the traditional white cane. Moreover, the results show that the performance of the participants without visual impairments is significantly slower than that of the visually impaired participants. No significant differences in obstacle detection rates were observed across participant groups and device types for obstacles on the ground, while 79% of the hanging obstacles were detected by the electronic cane. The results of this study thus suggest that electronic canes present only one advantage over the traditional cane, namely the ability to detect hanging obstacles, at least without prolonged practice. Further, blindfolded participants are insufficient substitutes for blind participants who are expert cane users. The implication of this study is that research into digital white cane enhancements should include blind participants. These participants should be followed over time in longitudinal experiments to document whether practice leads to improvements that surpass the performance achieved with traditional canes.
Individuals suffering from visual impairments and blindness encounter difficulties in moving independently and overcoming various problems in their routine lives. As a solution, artificial intelligence and computer vision approaches facilitate blind and visually impaired (BVI) people in fulfilling their primary activities without much dependency on other people. Smart glasses are a potential assistive technology for BVI people to aid in individual travel and provide social comfort and safety. In practice, however, the BVI are unable to move alone, particularly in dark scenes and at night. In this study, we propose a smart glasses system for BVI people, employing computer vision techniques and deep learning models, audio feedback, and tactile graphics to facilitate independent movement in a night-time environment. The system is divided into four models: a low-light image enhancement model, an object recognition and audio feedback model, a salient object detection model, and a text-to-speech and tactile graphics generation model. Thus, this system was developed to assist in the following manner: (1) enhancing the contrast of images under low-light conditions employing a two-branch exposure-fusion network; (2) guiding users with audio feedback using a transformer encoder-decoder object detection model that can recognize 133 categories of objects, such as people, animals, cars, etc., and announce them through sound; and (3) accessing visual information using salient object extraction, text recognition, and a refreshable tactile display. We evaluated the performance of the system and achieved competitive performance on the challenging Low-Light and ExDark datasets.
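A pipeline like the one above needs to decide when a captured frame should be routed through the low-light enhancement branch before detection. A simple way to make that decision is to threshold the mean pixel brightness of the frame; the threshold value and function names below are illustrative assumptions, not details from the paper:

```python
def mean_brightness(gray):
    # gray: 2-D list of grayscale pixel intensities in [0, 255]
    pixels = [p for row in gray for p in row]
    return sum(pixels) / len(pixels)

def needs_enhancement(gray, threshold=60):
    # Route dark frames to the low-light enhancement branch before
    # object detection; bright frames skip straight to detection.
    # The threshold of 60 is an illustrative assumption.
    return mean_brightness(gray) < threshold
```

In a real system this check would run on the camera stream at frame rate, so keeping it to a single mean over downsampled pixels keeps the added latency negligible compared to the detection model itself.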
Visually impaired people use tactile maps that can be read by the sense of touch or, to a limited extent, with their eyes. This article concerns methods of assessing tactile maps in terms of their information value. In the research, methods used to assess traditional maps have been adopted to assess tactile maps. Tactile elements of two maps - one developed with the use of traditional methods and the second developed with the use of 3D printing - have been compared. Structural measures of information as well as the information efficiency coefficient of each map have been determined to assess whether new cartographic symbols proposed on a multi-level 3D printed map can increase its information value.
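Structural measures of a map's information content are commonly entropy-based: the map is treated as a source emitting cartographic symbols, and information is computed from the symbols' relative frequencies. The abstract does not specify the article's exact coefficients, so the sketch below shows only the generic Shannon-entropy form such measures build on:

```python
import math
from collections import Counter

def symbol_entropy(symbols):
    # Shannon entropy H = -sum(p_i * log2(p_i)) over the relative
    # frequencies of cartographic symbol types appearing on the map.
    # Higher H means the symbol set carries more information per symbol.
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Under this kind of measure, adding new, distinct symbol types to a 3D-printed map (as the article investigates) raises the entropy, provided the new symbols are actually used and distinguishable by touch.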