To improve user experience and immersion within virtual environments, the auditory experience has long been claimed to be of notable importance [1]. This paper introduces a framework in which objects, enriched with information about their sound properties, are processed to generate virtual sound sources. This is done by automatic processing of the 3D scene and therefore minimizes the effort needed to develop a multimodal virtual world. To create a comprehensive auditory experience, different types of sound sources have to be distinguished. We propose a differentiation into three classes: locally bound static sounds, dynamically created event-based sounds, and ambient sounds that create spatial atmosphere.
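The three-class distinction lends itself to a simple data model. Below is a minimal sketch of how annotated scene objects might be turned into typed sound sources automatically; all names (`SoundSourceClass`, `extract_sound_sources`, the `"sound"` annotation key) are illustrative assumptions, not the framework's actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class SoundSourceClass(Enum):
    """The three source classes proposed in the paper."""
    STATIC = auto()    # locally bound, tied to a fixed object position
    EVENT = auto()     # created dynamically when an event (e.g. a collision) fires
    AMBIENT = auto()   # non-localized, builds the spatial atmosphere


@dataclass
class SoundSource:
    clip: str                       # audio asset attached to the scene object
    kind: SoundSourceClass
    position: tuple | None = None   # None for ambient sources


def extract_sound_sources(scene_objects):
    """Derive virtual sound sources automatically from annotated 3D objects."""
    sources = []
    for obj in scene_objects:
        props = obj.get("sound")    # sound annotation carried by the object
        if props is None:
            continue                # object is silent; skip it
        kind = SoundSourceClass[props["class"].upper()]
        pos = obj.get("position") if kind is not SoundSourceClass.AMBIENT else None
        sources.append(SoundSource(props["clip"], kind, pos))
    return sources
```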
In this paper we describe how coverbal iconic gestures can be used to express shape-related references to objects in a Virtual Construction Environment. Shape information is represented using Imagistic Description Trees (IDTs), an extended semantic representation which includes relational information (as well as numerical data) about the objects' spatial features. The IDTs are generated online from the trajectory of the user's hand movements when the system is instructed to select an existing object or to create a new one. A tight integration of the semantic information into the objects' data structures allows this information to be accessed via so-called semantic entities, which serve as interfaces during the multimodal analysis and integration process.
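To make the representation concrete, here is a sketch of what an IDT node and its online construction from a hand trajectory might look like. The node fields and the flat, extents-only construction are simplifying assumptions; the actual IDT formalism segments the trajectory and derives richer relational structure.

```python
from dataclasses import dataclass, field


@dataclass
class IDTNode:
    """One node of an Imagistic Description Tree (structure is illustrative).

    Each node holds numerical extent data for one object part plus
    relational information linking it to its sub-parts.
    """
    label: str                                      # e.g. "object", "axis", "profile"
    extents: dict = field(default_factory=dict)     # numeric spatial features
    relations: list = field(default_factory=list)   # spatial relations to children
    children: list["IDTNode"] = field(default_factory=list)


def idt_from_trajectory(samples):
    """Build a (flat) IDT online from sampled hand positions.

    A real system would segment the trajectory into strokes and derive
    dominant axes; here we only record the bounding extents as a sketch.
    """
    xs, ys, zs = zip(*samples)                      # samples: list of (x, y, z)
    extents = {"dx": max(xs) - min(xs),
               "dy": max(ys) - min(ys),
               "dz": max(zs) - min(zs)}
    return IDTNode(label="gestured-shape", extents=extents)
```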
In this paper the WASABI Affect Simulation Architecture is introduced, in which a virtual human's cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of "Core Affect" in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is only subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions, which directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of their connotative meaning in PAD space, we not only assure their mood-congruent elicitation, but also combine them with facial expressions that are concurrently driven by the primary emotions. An empirical study showed that human players in the Skip-Bo scenario judge our virtual human MAX to be significantly older when secondary emotions are simulated in addition to primary ones.
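The subsequent categorization step can be illustrated as a nearest-anchor lookup in PAD space. The anchor coordinates and the distance threshold below are made-up example values, not WASABI's actual calibration.

```python
import math

# Illustrative anchor points of primary emotions in PAD space
# (pleasure, arousal, dominance), each component in [-1, 1].
PRIMARY_EMOTIONS = {
    "happy":        ( 0.8,  0.8,  1.0),
    "angry":        (-0.8,  0.8,  1.0),
    "fearful":      (-0.8,  0.8, -1.0),
    "sad":          (-0.6, -0.3, -1.0),
    "bored":        ( 0.0, -0.8,  1.0),
    "concentrated": ( 0.0,  0.0,  1.0),
}


def categorize(pad, threshold=0.6):
    """Map a continuous PAD state to the closest primary emotion.

    Returns None while the state is too far from every anchor, mirroring
    the idea that the continuous bodily feeling is only *subsequently*
    categorized into a discrete emotion.
    """
    best, best_d = None, float("inf")
    for name, anchor in PRIMARY_EMOTIONS.items():
        d = math.dist(pad, anchor)      # Euclidean distance in PAD space
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= threshold else None
```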
Interaction in conversational interfaces strongly relies on the system's capability to interpret the user's references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an embodied conversational agent in a virtual reality environment. We highlight results from a study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive virtual reality.
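One common way to implement such pointing-based reference resolution is a pointing cone around the extrapolated ray. The sketch below is a generic approach under that assumption; the function names and the cone threshold are not taken from the study.

```python
import numpy as np


def resolve_pointing(origin, direction, objects, max_angle_deg=10.0):
    """Resolve a pointing gesture to the referenced object (illustrative).

    A pointing ray (e.g. extrapolated from the index finger) is compared
    against object centers; the object with the smallest angular deviation
    inside a cone threshold is taken as the referent.
    """
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best, best_angle = None, max_angle_deg
    for obj_id, center in objects.items():
        to_obj = np.asarray(center, dtype=float) - np.asarray(origin, dtype=float)
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(direction @ to_obj, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = obj_id, angle
    return best  # None if nothing falls inside the pointing cone
```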
The challenge to develop an integrated perspective on embodiment in communication has been taken up by an international research group hosted by Bielefeld University's Center for Interdisciplinary Research (ZiF) from October 2005 through September 2006. An international conference was held there on 12-15 January 2005 to define a research agenda that explicitly addresses Embodied Communication in Humans and Machines.
One of the crucial aspects in building sociable, communicative robots is to endow them with expressive nonverbal behaviors. Gesture is one such behavior, frequently used by human speakers to illustrate what they express in speech. The production of gestures, however, poses a number of challenges with regard to motor control for arbitrary, expressive hand-arm movement and its coordination with other interaction modalities. We describe an approach to enable the humanoid robot ASIMO to flexibly produce communicative gestures at run-time, building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to realize planned behavior representations on the spot. We present a control architecture that tightly couples ACE with ASIMO's perceptuo-motor system for multi-modal scheduling. In this way, we combine conceptual representation and planning with motor control primitives for meaningful arm movements of a physical robot body. First results of realized gesture representations are presented and discussed.
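The core scheduling problem is aligning the gesture stroke with its affiliated words. As a rough sketch of that timing logic, the snippet below starts the gesture early enough that its stroke onset coincides with the estimated word onset; the `tts` and `motor` interfaces are hypothetical stand-ins, not the real ACE or ASIMO API.

```python
import asyncio


async def produce_chunk(speech_text, gesture, tts, motor):
    """Schedule one speech-gesture chunk (illustrative of ACE-style timing).

    Chunks are planned so that the gesture stroke co-occurs with the
    affiliated words; we naively launch the gesture after a delay chosen
    so that preparation finishes exactly at the estimated word onset.
    """
    word_onset = tts.estimate_onset(speech_text, gesture.affiliate)
    prep_time = motor.estimate_preparation(gesture)   # time to reach the stroke
    await asyncio.gather(
        tts.speak(speech_text),
        delayed(max(0.0, word_onset - prep_time), motor.execute(gesture)),
    )


async def delayed(seconds, coro):
    """Run a coroutine after a fixed delay."""
    await asyncio.sleep(seconds)
    return await coro
```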
Within the field of Embodied Conversational Agents (ECAs), the simulation of emotions has been suggested as a means to enhance the believability of ECAs and also to effectively contribute to the goal of more intuitive human–computer interfaces. Although various emotion models have been proposed, results demonstrating the appropriateness of displaying particular emotions within ECA applications are scarce or even inconsistent. Worse, questionnaire methods often seem insufficient to evaluate the impact of emotions expressed by ECAs on users. Therefore we propose to analyze non-conscious physiological feedback (bio-signals) of users within a clearly arranged, dynamic interaction scenario in which various emotional reactions are likely to be evoked. In addition to its diagnostic purpose, physiological user information is also analyzed online to trigger empathic reactions of the ECA during game play, thus increasing the level of social engagement. To evaluate the appropriateness of different types of affective and empathic feedback, we implemented a card game called Skip-Bo, where the user plays against an expressive 3D humanoid agent called Max, which was designed at the University of Bielefeld [6] and is based on the emotion simulation system of [2]. Work performed at the University of Tokyo and NII provided a real-time system for empathic (agent) feedback that derives user emotions from skin conductance and electromyography [13]. The findings of our study indicate that within a competitive gaming scenario, the absence of negative agent emotions is perceived as stress-inducing and irritating, and that the integration of empathic feedback supports the acceptance of Max as a co-equal humanoid opponent.
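A common mapping in such systems reads arousal from skin conductance level and (negative) valence from facial EMG activity. The sketch below follows that convention as an assumption; the baseline normalization and scaling are illustrative, not the calibration of the cited real-time system.

```python
def mean(xs):
    return sum(xs) / len(xs)


def estimate_user_emotion(scl_samples, emg_samples, scl_baseline, emg_baseline):
    """Derive a coarse (valence, arousal) estimate from bio-signals.

    Skin conductance level (SCL) rising above its baseline indicates
    higher arousal; increased facial EMG activity (e.g. at the corrugator
    muscle) indicates more negative valence.
    """
    arousal = (mean(scl_samples) - scl_baseline) / scl_baseline
    valence = -(mean(emg_samples) - emg_baseline) / emg_baseline
    clamp = lambda v: max(-1.0, min(1.0, v))     # keep estimates in [-1, 1]
    return clamp(valence), clamp(arousal)
```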
We propose a computational model for building a tactile body schema for a virtual human. The learned body structure enables the agent to acquire a perception of the space surrounding its body, namely its peripersonal space. The model uses tactile and proprioceptive information and relies on an algorithm that was originally applied to visual and proprioceptive sensor data. In order to feed the model, we present work on obtaining the necessary sensory data solely from touch sensors and the motor system. Based on this, we explain the learning process for a tactile body schema. As the model is motivated not only technically but also by applications in peripersonal action space, an interaction example with a conversational agent is described.
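To illustrate the data-gathering side, the sketch below pairs self-touch events with the proprioceptive state in which they occurred; from many such pairs the kinematic structure can then be estimated. The `agent` interface and the motor-babbling loop are hypothetical, not the paper's actual implementation.

```python
def collect_touch_samples(agent, n_samples):
    """Gather the sensor data needed to learn a tactile body schema.

    Whenever one body part touches another, the triggered skin sensor IDs
    are stored together with the current joint angles (proprioception).
    """
    samples = []
    while len(samples) < n_samples:
        agent.execute_random_motor_babble()          # explore the joint space
        for contact in agent.read_touch_sensors():   # self-touch events only
            samples.append({
                "sensor_a": contact.sensor_id_a,     # sensor on the touching limb
                "sensor_b": contact.sensor_id_b,     # sensor on the touched limb
                "joints": agent.read_joint_angles(), # proprioceptive state
            })
    return samples
```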
Empathy is believed to play a major role as a basis for humans' cooperative behavior. Recent research shows that humans empathize with each other to different degrees depending on several modulation factors including, among others, their social relationship, their mood, and the situational context. In human spatial interaction, partners share and sustain a space that is equally and exclusively reachable to them, the so-called interaction space. In a cooperative interaction scenario of relocating objects in interaction space, we introduce an approach for triggering and modulating a virtual human's cooperative spatial behavior by its degree of empathy with its interaction partner. That is, spatial distances, such as object distances as well as the distances of arm and body movements while relocating objects in interaction space, are modulated by the virtual human's degree of empathy. In this scenario, the virtual human's empathic emotion is generated as a hypothesis about the partner's emotional state as related to the physical effort needed to perform a goal-directed spatial behavior.
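One way to picture the modulation is as an interpolation within interaction space: the more the virtual human empathizes, the more of the physical effort it takes on itself. The linear mapping and the [0, 1] empathy range below are illustrative assumptions, not the paper's model.

```python
def modulated_handover_position(own_pos, partner_pos, empathy):
    """Choose where to place an object in shared interaction space.

    With high empathy the virtual human carries the object further toward
    the partner (taking on more physical effort); with low empathy it
    releases the object closer to itself.
    """
    # Fraction of the way toward the partner, ranging from 25% up to 75%.
    fraction = 0.25 + 0.5 * max(0.0, min(1.0, empathy))
    return tuple(o + fraction * (p - o) for o, p in zip(own_pos, partner_pos))


# Example: with empathy 0.9 the object is placed 70% of the way over.
pos = modulated_handover_position((0.0, 0.0, 0.0), (1.0, 0.0, 0.5), 0.9)
```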