Fencing for the blind and visually impaired is an emerging sub-discipline of fencing that creates unusual conditions for meaning-making through the interaction between embodied endowments and worldly affordances. With the rules of fencing slightly adjusted to the needs of the blindfolded participants – regardless of their sightedness – the discipline requires the fencers to engage in a duel by relying on cues other than visual ones. This article explores what an autoethnographic account of experiences of participation in fencing for the blind and visually impaired brings to debates on embodied, and specifically sensory, difference. The discussion of these experiences intersects with debates on affect, affordance, and habit, all three of which play important roles in the related semiotic processes. The presented vignettes draw upon the author's lived experiences of participation in fencing for the blind and visually impaired and are analyzed as part of a mixed-method autoethnographic study, accompanied by sensory methodologies, with a focus on inquiry beyond the visual. The vignettes elucidate how we make sense of our surroundings through a complex engagement with the ecology of sensory and affective processes. In addition to exploring the role of affective and pre-conceptual aspects of our experiences, the article seeks to understand how semiosis occurs through both exposure to and the active pursuit of specific environmental signs available to us. The article also draws on biosemiotics to examine the complex relationship between meaning-making processes and habits. Finally, the autoethnographic account provides insight into how we habituate the world and our embodied differences and thus enable meaning-making processes.
Navigating indoor spaces is especially challenging for individuals with blindness and visual impairments. Although many solutions currently exist, most are poorly accepted because of their technical limitations and their failure to consider factors that influence adoption rates, such as usability and perceived user experience. To alleviate this problem, we created BlindMuseumTourer, a state-of-the-art indoor navigation smartphone application that tracks and navigates the user inside the spaces of a museum while also providing narration and descriptions of the exhibits. The proposed system consists of an Android application that leverages the sensors found on smartphones and utilizes a novel pedestrian dead reckoning (PDR) mechanism that optionally takes input from Bluetooth low energy (BLE) beacons specially mounted on the exhibits. This article presents an extended usability and user experience evaluation of BlindMuseumTourer, carried out with 30 participants with varying degrees of blindness, and its findings. Throughout this process, we received feedback for improving both the available functionality and the specialized user-centred training sessions in which blind users are first exposed to the application's functionality. The evaluation methodology employs standardized questionnaires and semi-structured interviews, and the results indicate an overall positive attitude among the users. In the future, we intend to extend the number and types of indoor spaces supported by our application.
Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired (BVI) persons and groups (e.g., pupils), which primarily addresses BVI accessibility and self-guided tours in museums. A pilot prototype has been developed and is currently under evaluation at the Tactual Museum in collaboration with the Lighthouse for the Blind of Greece. This paper describes the functionality of the application and evaluates candidate indoor location determination technologies, such as wireless local area network (WLAN) positioning and surface-mounted assistive tactile route indications combined with Bluetooth low energy (BLE) beacons and inertial dead-reckoning functionality; adopting the latter solution yields a reliable and highly accurate indoor positioning system. The developed concepts, including map matching, a key concept for indoor navigation, apply in a similar way to other indoor guidance use cases involving complex indoor places, such as hospitals, shopping malls, airports, train stations, public and municipal buildings, office buildings, university buildings, hotel resorts, passenger ships, etc. The presented Android application is effectively a Blind IndoorGuide system for accurate and reliable blind indoor navigation.
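The combination of inertial dead reckoning with beacon-based corrections described above can be sketched minimally as follows. All constants (stride length, path-loss parameters, trust radius) and function names are illustrative assumptions for the sketch, not the authors' implementation:

```python
import math

def pdr_step(position, heading_rad, step_length=0.65):
    """Advance the estimated (x, y) position by one detected step.

    step_length is an assumed average stride in metres; real PDR
    systems calibrate it per user (white-cane users often take
    shorter steps).
    """
    x, y = position
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))

def correct_with_beacon(position, beacon_position, rssi,
                        rssi_at_1m=-59, path_loss_n=2.0, max_range=5.0):
    """Pull the dead-reckoned position toward a nearby BLE beacon.

    Distance is estimated from RSSI via the log-distance path-loss
    model; nearby beacons are trusted more. A real system would feed
    this into a particle or Kalman filter instead of a linear blend.
    """
    est_dist = 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_n))
    if est_dist > max_range:
        return position  # beacon too far away to trust
    blend = 1.0 - est_dist / max_range
    bx, by = beacon_position
    px, py = position
    return (px + blend * (bx - px), py + blend * (by - py))

# Demo: walk five steps with heading 0 (north, +y), then correct
# toward a beacon mounted at (3, 3) with a strong RSSI reading.
pos = (0.0, 0.0)
for _ in range(5):
    pos = pdr_step(pos, heading_rad=0.0)
pos = correct_with_beacon(pos, (3.0, 3.0), rssi=-65)
print(pos)
```

Map matching would then snap the corrected position onto the museum's walkable routes, which is how the accumulated PDR drift is kept bounded between beacon encounters.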
The textual content of a document is supplemented by the graphical information in it: to make communication easier, documents contain tables, charts and images. However, this excludes a section of the population – the visually impaired. With technological advancements, blind users can access documents through text-to-speech software, and even images can be conveyed by reading out the figure captions. However, charts and other statistical comparisons that carry critical information are difficult to "read" out this way. The aim of this paper is to analyse the various methods available to solve this vexatious issue. We survey state-of-the-art works that do the exact opposite of graphing tools: we explore the existing literature on understanding graphs and extracting the visual encoding from them. We classify these approaches into modality-based, conventional, and deep-learning-based methods. The survey also contains comparisons and analyses of the relevant study datasets. As an outcome of this survey, we observe that: (i) existing works in every category still need better decoding across the variety of graph types; (ii) among the approaches, deep learning performs remarkably well in localisation and classification, but needs further improvement in reasoning from chart images; (iii) research on accessing data from vector images is still in progress, and recreating data from raster images has unresolved issues. Based on this study, the various applications of decoding graphs, along with challenges and future possibilities, are also discussed. This paper explores current work on the extraction of chart data, which seeks to enable researchers in Human-Computer Interaction to achieve human-level machine perception of visual data. In this era of visual summarisation of data, AI approaches can automate the underlying data extraction and hence provide natural language descriptions to support visually disabled users.
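The core "decoding" step the surveyed works share – inverting a chart's visual encoding back to data – can be illustrated for the simplest case, a bar chart with a linear axis. The function name and calibration scheme here are a hypothetical sketch; the surveyed systems obtain the pixel measurements via conventional vision or deep-learning detectors:

```python
def decode_bar_values(bar_heights_px, axis_px, axis_values):
    """Map bar heights in pixels back to data values.

    axis_px and axis_values each give two known calibration points,
    e.g. the pixel rows of the 0 and 100 gridlines and the values
    they represent. The linear inverse mapping is the essence of
    decoding a bar chart once bars and axis ticks are localized.
    """
    (p0, p1), (v0, v1) = axis_px, axis_values
    scale = (v1 - v0) / (p1 - p0)
    return [v0 + (h - p0) * scale for h in bar_heights_px]

# Bars measured at 40, 80 and 120 px tall, where 0 px maps to the
# value 0 and 200 px maps to the value 100.
print(decode_bar_values([40, 80, 120], (0, 200), (0, 100)))
# → [20.0, 40.0, 60.0]
```

The hard problems the survey identifies – localising the bars and ticks in a raster image and reasoning about what they mean – all sit upstream of this trivial final step.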
Using the methodology of in-depth interviews, this article explores how blind and visually impaired white cane users conceptualize urban space. The study presented in the article showed that the city is conceived, even without visual mechanisms, through landmarks, paths, edges, nodes and districts, i.e. the types of elements of the city image defined by Kevin Lynch. However, the spatial representations of blind people are produced on the basis of spatial experience that is proximal rather than distal, as was the case with Lynch. The article discusses elements of the non-visual image of the city that are constructed through direct touch and white cane use. Drawing on Lefebvre's stance on the interconnectedness of the body, practice and representational spaces, the author argues that the white cane is not just an aid that facilitates the mobility of blind people and helps them navigate urban space. As part of the 'practico-sensory totality' of the body, it also influences the ways in which the city is experienced and conceived.
Standardizing accessible test design and development to meet students' individual access needs is a complex task. The following study provides one approach to accessible test design and development using participatory design methods with school community members. Participatory research provides opportunities to empower collaborators by co-creating knowledge that is useful for assessment development. In this study, teachers of students who are visually impaired, students who are blind or visually impaired, English language teachers, and test administrators provided feedback at critical stages of the development process to explore the construct validity of English language proficiency (ELP) assessments. Students who are blind or visually impaired need to be able to show what they know and can do without impact from construct-irrelevant variance such as language acquisition or disability characteristics. Building on our iterative accessible test design, development, and delivery practices, and as part of a large project on English-learner proficiency test accessibility and usability, we collected rich observation and interview data from 17 students who were blind or visually impaired and enrolled in kindergarten through Grade 12. We examined the ratings and item metadata, including assistive technology preferences and interactions, and used grounded theory approaches to examine qualitative thematic findings. Implications for research and practice are discussed.
The loss of or impairment in vision makes it challenging for blind and visually impaired people (BVIP) to navigate their surroundings easily. Several solutions have been proposed to address this challenge and assist BVIP in navigation by exploiting existing technologies. However, their reliance on pre-installed infrastructure and costly dedicated hardware makes them less practical. As an alternative, pedestrian dead reckoning techniques have been proposed. However, the slow walking pace of BVIP, the required contact with unintended obstacles, and the false recognition of activities increase error accumulation, making these techniques less applicable. Therefore, solutions are needed that accurately recognize the walking patterns of BVIP so that efficient navigation solutions can be developed. This article fills this research gap by extending the traditional white cane with smartphone sensors. Specifically, a smartphone is used with a conventional white cane to collect data through its sensors over a time-based data window. For smooth recording, a revolving tire is attached to the bottom of the white cane. The collected data is processed on the smartphone by our designed app, which identifies the user's walking patterns, such as walking, stairs up/down, sit/stand, and collision. As a case study, these activities were classified using Naïve Bayes, Random Forests, J48, Decision Table, and LibSVM. Among these, Random Forests gave the highest accuracy. These results suggest that the proposed solution is practical for designing navigation applications for BVIP and may yield better accuracy if tested with more advanced classifiers.
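The pipeline described above (time-based windows over sensor data → features → classifier) can be sketched as follows. The window size, feature set and simulated signal are illustrative assumptions, not the authors' exact design; the resulting feature vectors would be fed to the classifiers compared in the paper, of which Random Forests performed best:

```python
import statistics

def windows(samples, size=50, step=25):
    """Split a sensor stream into overlapping fixed-length windows."""
    for start in range(0, len(samples) - size + 1, step):
        yield samples[start:start + size]

def extract_features(accel_window):
    """Summary statistics over one window of accelerometer magnitudes.

    Mean, standard deviation, min and max form a common minimal
    feature set for activity recognition; the paper's feature set
    may differ.
    """
    return {
        "mean": statistics.fmean(accel_window),
        "std": statistics.pstdev(accel_window),
        "min": min(accel_window),
        "max": max(accel_window),
    }

# Simulated accelerometer-magnitude stream (m/s^2): quiet sitting
# (constant gravity) followed by a periodic walking pattern.
stream = [9.8] * 100 + [9.8 + (i % 10) * 0.4 for i in range(100)]
feats = [extract_features(w) for w in windows(stream)]
# The sitting windows have near-zero variance; the walking windows
# have large variance, which is what lets a classifier separate them.
print(feats[0]["std"], feats[-1]["std"])
```

In the paper's setting these vectors are labelled (walking, stairs up/down, sit/stand, collision) and used to train the classifiers on-device.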
In classic communication, emotion is transferred via three channels: verbal, nonverbal and paraverbal. In audio description, which is meant to enable blind and visually impaired people to perceive visual processes, the visual channel is omitted, and emotions must be conveyed using the available linguistic and paralinguistic means. On the basis of a corpus of German-language audio films, the relevant forms of emotion transfer in audio description are analyzed.
The autonomy, independence, productivity and, in general, quality of life of people with visual impairments often rely significantly on their ability to use new assistive technologies. In particular, their ability to navigate on foot, use means of transport and visit indoor spaces may be greatly enhanced by the use of assistive navigation systems. In this paper, a detailed analysis of user needs and requirements is presented concerning the design and development of assistive navigation systems for blind and visually impaired people (BVIs). To this end, the user needs and requirements elicited from interviews with BVIs are processed and classified into seven main categories. Interestingly, one of the categories represents the requirement of BVIs to be trained in the use of the mobile apps that would be included in an assistive navigation system. The need of BVIs to be confident in their ability to use the apps safely revealed the requirement that training versions of the apps should be available, able to simulate real-world conditions during the training process. The requirements elicitation and classification reported in this paper aim to offer insight into the design, development, deployment and distribution of assistive navigation systems for BVIs.
Building new systems for indoor object detection and indoor assistive navigation is a crucial task, especially in the fields of artificial intelligence and computer science. The number of blind and visually impaired persons (VIP) is increasing day by day. To help this category of persons, we propose a new indoor object-detection system based on deep convolutional neural networks (DCNNs). The proposed system is built on the one-stage neural network RetinaNet. To train and evaluate the developed system, we built a new indoor objects dataset of 11,000 images containing 24 indoor landmark objects highly valuable for indoor assistive navigation. The proposed dataset provides high intra- and inter-class variation and various challenging conditions, which aim to support a robust detection system for the mobility of blind and visually impaired people. Experimental results demonstrate the high performance of the developed indoor object detection and recognition system: we obtained a detection accuracy of up to 98.75% mAP at a detection speed of 62 FPS.
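To make the reported mAP figure concrete, the sketch below shows the IoU-based matching that underlies detection evaluation. This is a generic single-IoU-threshold precision/recall computation; the paper does not state its exact mAP protocol, so treat the threshold and greedy matching here as illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def evaluate(detections, ground_truths, iou_thresh=0.5):
    """Greedily match detections to ground truth at one IoU threshold.

    detections are (confidence, box) pairs; returns (precision,
    recall). mAP extends this by sweeping the confidence threshold,
    averaging precision over recall levels, and then averaging over
    all classes (24 in the paper's dataset).
    """
    detections = sorted(detections, key=lambda d: -d[0])
    matched, tp = set(), 0
    for _, box in detections:
        for g, gt in enumerate(ground_truths):
            if g not in matched and iou(box, gt) >= iou_thresh:
                matched.add(g)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truths) if ground_truths else 0.0
    return precision, recall

# One correct detection and one false positive against one object.
print(evaluate([(0.9, (0, 0, 2, 2)), (0.4, (8, 8, 9, 9))],
               [(0, 0, 2, 2)]))
```

The 62 FPS figure, by contrast, is a pure throughput measurement and is independent of this matching procedure.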