This paper investigates the effects of participants’ gender and age (adolescents, young adults, and seniors) and of robots’ gender (male vs. female) and appearance (humanoid vs. android) on dimensions of robot acceptance. The study involved six age groups of participants (two of adolescents, two of young adults, and two of seniors, for a total of 240 participants), who were asked to report their willingness to interact with, and their perception of the usefulness, pleasantness, appeal, and engagement of, two sets of female (Pepper, Erica, and Sophia) and male (Romeo, Albert, and Yuri) humanoid and android robots. Participants were also asked to indicate their preferred and attributed age ranges for the robots, and the occupations they would entrust to robots among healthcare, housework, protection and security, and front office. Results show that neither participants’ age or gender, nor robots’ gender or degree of human likeness, univocally affected robot acceptance by these differently aged users: acceptance appeared to be a nonlinear combination of all these factors.
This paper proposes a systematic approach to investigating how the gender and age of participants, and the gender and age of faces, affect the decoding accuracy of emotional expressions of disgust, anger, sadness, fear, happiness, and neutrality. The emotional stimuli consisted of 76 posed and 76 naturalistic faces of different ages (young, middle-aged, and older) selected from the FACES and SFEW databases. Participants completed either a posed-face or a naturalistic-face decoding task. The posed-face task involved three age groups (young, middle-aged, and older adults); the naturalistic-face task involved two groups of older adults. In the posed task, older adults were significantly less accurate than middle-aged and young participants, and middle-aged participants were significantly less accurate than young ones. Old faces were decoded significantly less accurately than young and middle-aged faces for disgust and anger, and than young faces for fear and neutrality. Female faces were decoded significantly more accurately than male faces for anger and sadness, and significantly less accurately for neutrality. In the naturalistic task, older adults were significantly less accurate in decoding naturalistic than posed faces of disgust, fear, and neutrality, contradicting the assumption that older adults are supported by prior naturalistic emotional experience. Young faces were decoded more accurately than old and middle-aged faces for disgust and anger, and than old faces for neutrality. Female faces were decoded significantly more accurately than male faces for fear, and significantly less accurately for anger. Significant effects and interdependencies were observed among participants’ age, emotional categories, the age and gender of faces, and the type of stimuli (naturalistic vs. posed), preventing the effect of each variable from being isolated distinctly.
Nevertheless, the data collected in this paper weaken both the assumption that women have an enhanced ability to display and decode emotions and the assumption that participants decode more accurately faces closer to their own age (the “own-age bias” theory). We discuss how these data could guide the development of assessment tools and preventive interventions, and the design of emotionally and socially believable virtual agents and robots that assist and coach emotionally vulnerable people in their daily routines.
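As a purely illustrative sketch of the kind of group comparison underlying such decoding results (all numbers and helper names below are invented for illustration, not taken from the paper), decoding accuracy can be computed per group and two groups contrasted with a two-proportion z-test:

```python
import math

def accuracy(responses):
    """Fraction of trials where the decoded label matches the displayed emotion.

    `responses` is a list of (shown_emotion, decoded_emotion) pairs.
    """
    hits = sum(1 for shown, decoded in responses if shown == decoded)
    return hits / len(responses)

def two_proportion_z(hits1, n1, hits2, n2):
    """Two-sided z-test for a difference between two decoding accuracies."""
    p1, p2 = hits1 / n1, hits2 / n2
    p = (hits1 + hits2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: a few decoding trials for one participant
trials = [("anger", "anger"), ("anger", "disgust"),
          ("fear", "fear"), ("fear", "fear")]
print(accuracy(trials))  # → 0.75

# Invented example: young vs. older adults decoding 100 posed faces of disgust
z, p = two_proportion_z(hits1=82, n1=100, hits2=61, n2=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would indicate that the two groups’ accuracies differ beyond chance; the paper’s actual analyses are not specified in this abstract.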
To give an overview of the most recent findings on socially engaging interactive systems, the present paper discusses the features affecting users’ acceptance of virtual agents, robots, and chatbots. In addition, the questionnaires exploited in several investigations to assess the acceptance of virtual agents, robots, and chatbots (voice only) are discussed and reported in the Supplementary material, to make them available to the scientific community. These questionnaires were developed by the authors as a scientific contribution to the H2020 projects EMPATHIC (http://www.empathic-project.eu/) and Menhir (https://menhir-project.eu/) and to the Italian-funded projects SIROBOTICS (https://www.exprivia.it/it-tile-6009-si-robotics/) and ANDROIDS (https://www.psicologia.unicampania.it/android-project), to guide the design and implementation of the assistive interactive dialog systems promised by those projects. They aim to quantitatively evaluate virtual agent acceptance (VAAQ), robot acceptance (RAQ), and synthetic virtual agent voice acceptance (VAVAQ).
Building on a previous investigation, we propose a quantitative study aimed at identifying users’ preferences among four synthetic voices of two different quality levels (classified through the sophistication of the synthesizer: low vs. high). The voices administered to participants varied along two main dimensions: quality (high/low) and gender (male/female). 182 unpaid participants were recruited for the study and divided into four groups according to their age, classified as adolescents, young adults, middle-aged, and seniors. Data on each voice, presented to participants in random order, were collected with the shortened version of the Virtual Agent Voice Acceptance Questionnaire (VAVAQ). The previous study found that the high-quality voices, regardless of gender, were rated more favorably by all participants than the two corresponding low-quality voices. Findings of the current study, by contrast, suggest that the four new groups of participants agreed in showing a strong preference for the high-quality female voice over all the other voices considered. Regarding the two male voices, the high-quality one was considered more original and more capable of arousing positive emotional states than the low-quality one; moreover, the high-quality male voice was judged more natural than the low-quality female voice. These results provide insights for future directions in the user experience and design field.
Given the increasing use of assistive technologies in the shape of virtual agents, it is necessary to investigate the factors that characterize and affect the interaction between user and agent; among these is the way people interpret and decode synthetic emotions, i.e., emotional expressions conveyed by virtual agents. To this end, this article reports a study involving 278 participants split into age groups (young, middle-aged, and older adults). Within each age group, some participants were administered a “naturalistic decoding task,” i.e., a recognition task with human emotional faces, while others were administered a “synthetic decoding task,” with emotional expressions conveyed by virtual agents. Participants were required to label pictures of female and male humans or virtual agents of different ages (young, middle-aged, and old) displaying static expressions of disgust, anger, sadness, fear, happiness, surprise, and neutrality. Young participants recognized anger, sadness, and neutrality better than the older groups, and female participants recognized sadness, fear, and neutrality better than males; sadness and fear were better recognized when conveyed by real human faces, while happiness, surprise, and neutrality were better recognized when conveyed by virtual agents. Young faces were better decoded when expressing anger and surprise, middle-aged faces when expressing sadness, fear, and happiness, and old faces in the case of disgust; on average, female faces were better decoded than male ones.
The growth of data-driven approaches typical of machine learning leads to an ever-increasing need for large quantities of labeled data. Unfortunately, these labels are often assigned automatically and/or crudely, thus undermining the very concept of “ground truth” they are supposed to represent. To address this problem, we introduce HUM-CARD, a dataset of human trajectories in crowded contexts manually annotated by nine experts in engineering and psychology, totaling approximately 5000 hours. Our multidisciplinary labeling process has enabled the creation of a well-structured ontology, accounting for both individual and contextual factors influencing human movement dynamics in shared environments. Preliminary and descriptive analyses are presented, highlighting the potential benefits of this dataset and its methodology for various research challenges.
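The abstract does not describe HUM-CARD’s actual file format or ontology, but as a purely hypothetical sketch, an expert-annotated trajectory carrying both individual and contextual labels might be represented like this (all class and field names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One expert label on a trajectory (illustrative schema, not the real ontology)."""
    annotator_id: str        # e.g. one of the nine annotating experts
    individual_factor: str   # e.g. "goal-directed", "wandering"
    contextual_factor: str   # e.g. "dense crowd", "narrow passage"

@dataclass
class Trajectory:
    person_id: int
    points: list                                  # timestamped positions [(t, x, y), ...]
    annotations: list = field(default_factory=list)

    def duration(self) -> float:
        """Elapsed time between the first and last tracked point."""
        return self.points[-1][0] - self.points[0][0]

traj = Trajectory(person_id=7, points=[(0.0, 1.0, 2.0), (0.5, 1.2, 2.1), (1.0, 1.5, 2.3)])
traj.annotations.append(Annotation("E1", "goal-directed", "dense crowd"))
print(traj.duration())  # → 1.0
```

Keeping the raw track and the expert labels in one record, as above, is one way to preserve the manual “ground truth” the paper argues for.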
This study contributes knowledge on the detection of depression through handwriting/drawing features, with the aim of identifying quantitative and noninvasive indicators of the disorder for implementing algorithms for its automatic detection. To this end, an original approach was adopted to provide a dynamic evaluation of the handwriting/drawing performance of healthy participants with no history of psychiatric disorders and of patients with a clinical diagnosis of depression. Both groups were asked to complete seven tasks requiring either writing or drawing on paper, while five categories of handwriting/drawing features (i.e., pressure on the paper, time, ductus, spacing among characters, and pen inclination) were recorded with a digitizing tablet. The collected records were statistically analyzed. Results showed that, except for pressure, all the considered features successfully discriminated between depressed and nondepressed subjects. In addition, depression was observed to affect different writing/drawing functionalities. These findings support the adoption of writing/drawing tasks in clinical practice as tools complementing current depression detection methods, with important repercussions on reducing diagnosis time and informing treatment formulation.
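The abstract does not specify which statistical tests were run; as a hypothetical illustration with invented feature values, a nonparametric comparison such as the Mann–Whitney U test could contrast a recorded handwriting feature (here, task completion time) between the two groups:

```python
def mann_whitney_u(xs, ys):
    """U statistic for xs vs. ys via pairwise comparison (ties count 0.5).

    Equivalent to the rank-sum formulation: U near n_x * n_y / 2 suggests
    similar distributions, while U near 0 or n_x * n_y suggests a group
    difference.
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Invented completion times (seconds) for a single writing task
healthy = [4.1, 3.8, 4.5, 3.9]
depressed = [5.2, 5.0, 4.8, 5.6]

u = mann_whitney_u(healthy, depressed)
print(u, len(healthy) * len(depressed) / 2)  # → 0.0 8.0
```

Here every healthy time falls below every depressed time, so U is at its extreme (0 rather than the null expectation of 8), which a significance table or normal approximation would then evaluate; this is only a sketch of the analysis style, not the paper’s method.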
In this paper, we present the Learning Interface for Mathematics Education (LIME) project. The main goal of this project was to create User-Friendly Interfaces (UFIs) for Vygotskian Computer-Based Learning Activities (VCBLAs) in order to promote their dissemination in the school context. A VCBLA is based on collaborative scripts, in line with the Vygotskian perspective, and is implemented on e-learning platforms (such as Moodle). It is aimed at developing argumentative and problem-solving skills in mathematics, as well as in other educational contexts and in vocational training. Based on VCBLA testing and on studies in the literature, we identified the requirements a UFI must meet to increase users’ (i.e., students’ and teachers’) acceptance and to enable large-scale testing and use of VCBLAs.