"Alexa, Can I Trust You?" Chung, Hyunji; Iorga, Michaela; Voas, Jeffrey ...
Computer (Long Beach, Calif.), 09/2017, Volume 50, Issue 9
Journal Article
Peer-reviewed
Open access
Several recent incidents highlight significant security and privacy risks associated with intelligent virtual assistants (IVAs). Better diagnostic testing of IVA ecosystems can reveal such vulnerabilities and lead to more trustworthy systems.
Smart assistants are among the most popular technological devices at home. With a built-in voice-based user interface, they provide access to a broad portfolio of online services and information, and constitute the central element of state-of-the-art home automation systems. This work discusses the challenges addressed and the solutions adopted for the design and implementation of scripted conversations by means of off-the-shelf smart assistants. Scripted conversations play a fundamental role in many application fields, such as call-center facilities, retail customer services, rapid prototyping, role-based training, or the management of neuropsychiatric disorders. To illustrate this proposal, an actual implementation of the phone version of the Montreal Cognitive Assessment test as an Amazon Alexa skill is described as a proof of concept.
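The scripted-conversation idea above can be pictured as a simple state machine whose steps are served as voice-assistant responses. The sketch below is purely illustrative: the script content, the `SCRIPT` table, and `handle_turn` are hypothetical names invented here, not taken from the paper; the response dictionary only follows the general shape of the Alexa Skills Kit JSON interface (PlainText output speech, `shouldEndSession` flag).

```python
# Hypothetical two-step scripted conversation; each entry is one turn
# the assistant speaks, plus a pointer to the next turn (None = done).
SCRIPT = {
    "q1": {"prompt": "What is today's date?", "next": "q2"},
    "q2": {"prompt": "Please repeat the three words from before.", "next": None},
}

def handle_turn(step_id):
    # Build an Alexa-style response for one step of the script:
    # speak the prompt, remember the next step, and end the session
    # only when the script has no further steps.
    step = SCRIPT[step_id]
    return {
        "version": "1.0",
        "sessionAttributes": {"next_step": step["next"]},
        "response": {
            "outputSpeech": {"type": "PlainText", "text": step["prompt"]},
            "shouldEndSession": step["next"] is None,
        },
    }

reply = handle_turn("q1")
```

A real skill would route the user's spoken answer back through an intent handler and score it, but the session-attribute pointer is enough to show how a fixed script can drive the dialogue.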
People with visual impairment are the second-largest affected category with limited access to assistive products. This paper presents a complete, portable, and affordable smart assistant for helping visually impaired people navigate indoors and outdoors and interact with the environment. The prototype consists of a smart cane and a central unit; communication between the user and the assistant is carried out through voice messages, making the system suitable for any user, regardless of their IT skills. The assistant is equipped with GPS, an electronic compass, Wi-Fi, ultrasonic sensors, an optical sensor, and an RFID reader to help the user navigate safely. Navigation functionalities work offline, which is especially important in areas where Internet coverage is weak or missing altogether. Physical-condition monitoring and medication, shopping, and weather information facilitate the interaction between the user and the environment, supporting daily activities. The proposed system uses different components for navigation and provides independent navigation, indoors and outdoors, both day and night, regardless of weather conditions. Preliminary tests provide encouraging results, indicating that the prototype has the potential to help visually impaired people achieve a high level of independence in daily activities.
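As one small illustration of how a device like this might turn ultrasonic range readings into spoken guidance, the sketch below uses assumed distance thresholds and message wording; none of these values come from the paper.

```python
def obstacle_message(distance_cm):
    # Map an ultrasonic range reading (centimeters) to a voice prompt.
    # Thresholds are illustrative assumptions, not the authors' tuning.
    if distance_cm < 50:
        return "Stop: obstacle ahead"
    if distance_cm < 150:
        return "Caution: obstacle approaching"
    return None  # clear path: stay silent rather than distract the user

msg = obstacle_message(40)
```

Because the logic runs entirely on the local readings, it matches the paper's point that core navigation must keep working offline.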
This paper describes the main results of the JUNO project, a proof of concept developed in the Region of Murcia in Spain, where a smart assistant robot with capabilities for smart navigation and natural human interaction has been developed and deployed, and is being validated in a residential institution for the elderly with real users. The robot is focused on helping people carry out cognitive stimulation exercises and other entertainment activities: it can detect and recognize people, navigate safely through the residence, and gather information about users' attention while they perform the exercises. All of this information can be shared through the Cloud if needed, and health professionals, caregivers, and relatives can access it under the highest standards of privacy required in these environments. Several tests have been performed to validate the system, which combines classic techniques and new Deep Learning-based methods to carry out the requested tasks, including semantic navigation, face detection and recognition, speech-to-text and text-to-speech conversion, and natural language processing, working in both local and Cloud-based environments, resulting in an economically affordable system. The paper also discusses the limitations of the platform and proposes several solutions to the detected drawbacks in this kind of complex environment, where the fragility of the users should also be considered.
Suppressing unintended invocation of the device caused by speech that sounds like the wake word, or by accidental button presses, is critical for a good user experience and is referred to as False-Trigger Mitigation (FTM). When multiple invocation options exist, the traditional approach to FTM is to use invocation-specific models, or a single model for all invocations. Both approaches are sub-optimal: the memory cost of the former grows linearly with the number of invocation options, which is prohibitive for on-device deployment, and it does not take advantage of shared training data; the latter cannot accurately capture acoustic differences across invocation types. To this end, we propose a Unified Acoustic Detector (UAD) for FTM when multiple invocation options are available on device. The proposed UAD is trained in a multi-task learning framework, where a jointly trained acoustic encoder is augmented with invocation-specific classification layers. In the context of the FTM task, we show for the first time that by using a shared model architecture across invocations (thus keeping the model size similar to that of a monolithic model used for a single invocation type), we can not only match but substantially improve on the accuracy of invocation-specific models. In particular, in the challenging case of touch-based invocation, we obtain 50% and 35% relative improvements in false-positive rate at 99% true-positive rate, compared with a single-output model for both invocations and with separate models per invocation, respectively. Furthermore, we propose streaming and non-streaming variants of the UAD and show that both outperform a traditional ASR-based approach to FTM.
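The shared-encoder-plus-task-heads layout described in this abstract can be sketched in plain Python. Everything here is illustrative: the encoder is a toy nonlinearity, the head weights are invented, and the class names are this sketch's own; a real UAD would use trained neural components.

```python
import math

def shared_encoder(features):
    # Stand-in for the jointly trained acoustic encoder shared by all
    # invocation types; here just a fixed elementwise nonlinearity.
    return [math.tanh(x) for x in features]

class InvocationHead:
    # One lightweight classification layer per invocation type
    # (e.g. wake-word vs. touch); only these heads differ per task,
    # so total model size stays close to a single monolithic model.
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def score(self, encoded):
        z = sum(w * e for w, e in zip(self.weights, encoded)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))  # P(intended trigger)

class UnifiedAcousticDetector:
    def __init__(self, heads):
        self.heads = heads  # dict: invocation type -> classification head

    def p_intended(self, features, invocation):
        encoded = shared_encoder(features)            # shared across tasks
        return self.heads[invocation].score(encoded)  # task-specific layer

# Illustrative weights only; a trained system learns these jointly.
uad = UnifiedAcousticDetector({
    "wake_word": InvocationHead([0.9, -0.2, 0.4], 0.1),
    "touch":     InvocationHead([0.3, 0.8, -0.5], -0.2),
})
p = uad.p_intended([0.5, 1.2, -0.3], "touch")
```

An FTM policy would then suppress the request whenever the score for the active invocation type falls below a threshold tuned to the target true-positive rate.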
Healthy diets have been demonstrated to complement the benefits of physical activity, physical condition, and mental wellbeing, all of which are important factors influencing the quality of life of the elderly. Unfortunately, malnutrition is a serious threat and an increasingly prevalent condition among the fast-growing elderly population. The present work addresses the identification of important factors contributing to decreased appetite and food intake, as well as the development of approaches towards a healthy diet and personalised nutrition in the elderly. In the present study, semi-structured interviews were performed with elderly people, including those suffering from swallowing and mastication difficulties; the results were used to develop food-provision modules and corresponding recipes addressing the nutritional requirements of the elderly. The social context and swallowing and mastication difficulties influence both eating behaviour and the motivation to eat. On the other hand, it was found that texture-modified foods (food whose texture is adapted to the needs of people with swallowing and mastication problems) can act as a motivating factor. With regard to food personalisation in the elderly, the consideration of three different case scenarios, based on individual independence and the degree of oral impairment, seemed appropriate. Aspects such as gender, weight, and physical activity level, as well as a high protein demand, are important influential factors in the development of personalised recipes for the elderly. In addition, a conversational agent was developed as a behaviour-change module; it can be used as a smart personal assistant that helps users understand their eating habits and adopt healthier nutrition over the long term.
This work focuses on the part of the population with hearing impairment who own a dog and worry about not hearing the dog bark, especially when a risky situation is taking place at home. A survey was carried out among people with deafness problems to identify hazardous situations to which they are exposed at home. A system prototype was developed to be integrated as a component of ambient intelligence (AmI) for ambient assisted living (AAL), serving hearing-impaired people (HIP). The prototype detects dog barks and notifies users through both a smart mobile app and visual feedback. It consists of a Raspberry Pi 3 board connected to a ReSpeaker Mic Array v2.0 microphone array; a communication module with a smartphone was implemented, which displays written messages or vibrates when receiving notifications. The cylinder-shaped enclosure was designed by the authors and 3D-printed in a resin material. The prototype recognized barking efficiently using a machine learning model based on the Support Vector Machine technique. The prototype was tested with deaf people, who were satisfied with its precision, signal intensity, and activation of lights.
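To make the detection pipeline concrete, the sketch below pairs two toy per-frame audio features with a linear SVM decision function. The features, weights, and threshold are assumptions made for illustration, not the authors' trained model, which would be fitted on labeled bark recordings.

```python
import math

def extract_features(samples):
    # Two toy features for one audio frame: RMS energy and
    # zero-crossing rate (a rough proxy for high-frequency content).
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return [rms, zcr]

def svm_is_bark(features, weights=(4.0, 2.0), bias=-1.5):
    # Linear SVM decision rule: sign(w . x + b); weights are made up.
    return sum(w * x for w, x in zip(weights, features)) + bias > 0

def notify(is_bark):
    # On detection the prototype pushes a message/vibration to the
    # paired smartphone and triggers the visual (light) feedback.
    return "alert" if is_bark else "idle"

loud_burst = [0.8, -0.7, 0.9, -0.8, 0.7, -0.9]  # loud, rapidly oscillating frame
quiet = [0.01, 0.0, -0.01, 0.0, 0.01, 0.0]       # near-silent frame
bark_state = notify(svm_is_bark(extract_features(loud_burst)))
quiet_state = notify(svm_is_bark(extract_features(quiet)))
```

The same structure carries over to the real system: microphone frames from the array are featurized, scored by the trained SVM, and positive frames fan out to the app and light notifications.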