Interpretative Dynamics in Human Robot Interaction Giusti, L.; Marti, P.
ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication,
2006-Sept.
Conference Proceeding
Current technologies often tend to emphasise utilitarian versions of work, entertainment, and consumer activity by embodying a representation of the privileged activities they support and the values predefined by the designer. Social robots offer an extraordinary opportunity to design technologies with open-ended possibilities for interaction and engagement with humans. In this paper, we present the results of a case study conducted in a nursing home with elderly people interacting with the seal robot Paro. The results show that the robot actively supports our natural disposition to attribute intentional states to inanimate or artificial objects. In addition, interesting interpretative dynamics in human-robot interaction emerged in the study: mentally compromised subjects alternate their assessment of the robot between an inanimate object and an agent, depending on the severity of their disease. However, the observations also show that subjects who talk about the robot as an inanimate object continue to be emotionally and intellectually involved in the experience. In this respect, agentivity does not seem to be a key factor in assuring a pleasurable and intriguing interaction experience.
This paper introduces CognitiveDog, a pioneering development of a quadruped robot with a Large Multi-modal Model (LMM) that is capable not only of communicating with humans verbally but also of physically interacting with the environment through object manipulation. The system was realized on a Unitree Go1 robot-dog equipped with a custom gripper and demonstrated autonomous decision-making capabilities, independently determining the most appropriate actions and interactions with various objects to fulfill user-defined tasks. These tasks do not necessarily include direct instructions, challenging the robot to comprehend and execute them based on natural language input and environmental cues. The paper delves into the intricacies of this system, dataset characteristics, and the software architecture. Key to this development is the robot's proficiency in navigating space using Visual-SLAM, effectively manipulating and transporting objects, and providing insightful natural language commentary during task execution. Experimental results highlight the robot's advanced task comprehension and adaptability, underscoring its potential in real-world applications. The dataset used to fine-tune the robot-dog behavior generation model is provided at the following link: huggingface.co/datasets/ArtemLykov/CognitiveDog_dataset
This paper introduces DogSurf, a new approach to using quadruped robots to help visually impaired people navigate the real world. The presented method allows the quadruped robot to detect slippery surfaces and to use audio and haptic feedback to inform the user when to stop. A state-of-the-art GRU-based neural network architecture with a mean accuracy of 99.925% is proposed for the task of multiclass surface classification for quadruped robots. A dataset was collected on a Unitree Go1 Edu robot. The dataset and code have been released to the public domain.
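The abstract does not specify the GRU architecture beyond its recurrent cell and multiclass readout. As an illustration only, a single GRU cell with an argmax classification head can be sketched in plain NumPy; all weight names, dimensions, and the idea of feeding a window of sensor readings are our assumptions, not details from the paper:

```python
import numpy as np

def gru_cell(x, h, params):
    """One GRU step (standard gated-recurrent-unit equations)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(params["Wz"] @ x + params["Uz"] @ h + params["bz"])  # update gate
    r = sig(params["Wr"] @ x + params["Ur"] @ h + params["br"])  # reset gate
    h_cand = np.tanh(params["Wh"] @ x + params["Uh"] @ (r * h) + params["bh"])
    return (1.0 - z) * h + z * h_cand

def classify_surface(sensor_window, params, Wo, bo):
    """Run the GRU over a window of sensor vectors, return the argmax class."""
    h = np.zeros(params["Uz"].shape[0])
    for x in sensor_window:
        h = gru_cell(x, h, params)
    return int(np.argmax(Wo @ h + bo))

# Random weights stand in for trained ones (purely illustrative).
rng = np.random.default_rng(0)
d, hdim, n_classes = 6, 8, 4
params = {k: rng.standard_normal((hdim, d)) * 0.1 for k in ("Wz", "Wr", "Wh")}
params.update({k: rng.standard_normal((hdim, hdim)) * 0.1 for k in ("Uz", "Ur", "Uh")})
params.update({k: np.zeros(hdim) for k in ("bz", "br", "bh")})
Wo, bo = rng.standard_normal((n_classes, hdim)), np.zeros(n_classes)
predicted = classify_surface(rng.standard_normal((20, d)), params, Wo, bo)
```

A production version would use a framework GRU (e.g. PyTorch's `torch.nn.GRU`) trained on the collected dataset rather than hand-rolled NumPy.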
Although work on computational and robotic modelling of cognition is highly diverse, as an empirical method it can be roughly divided into at least two clearly different, though non-exclusive branches, motivated to evaluate the sufficiency or the necessity of theories when it comes to accounting for data and/or other observations. With the rising profile of theories of situated/embodied cognition, a third non-exclusive avenue for investigation has also gained in popularity: the investigation of agent-environment embedding or, more generally, exploration. Still in its infancy, and often confused with sufficiency testing, this relatively new kind of modelling, which is theory- rather than data-driven, investigates the role of the environment in shaping the ontogenetic and/or phylogenetic development of situated agency. Each of these three approaches presents many issues that modellers must be sensitive to, both in the design of experiments and in the conclusions that can be drawn from them. This paper highlights some of these issues, provides examples, and addresses the contribution of computational/robotic modelling to cognitive science, as well as some of its limitations.
Role sharing analysis on multi-operator cooperative work Igarashi, H.; Suzuki, S.; Kobayashi, H. ...
RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication,
2009-Sept.
Conference Proceeding
This paper addresses a quantification method for role sharing in cooperative tasks. Using the method, we found that the ratio of three typical indexes relates to task performance. Most conventional human-machine systems are assumed to assist a single operator; in that case, the system need only attend to that operator's characteristics. When there are multiple operators, their work also includes altruistic behaviours, which cannot be captured by such one-on-one assist systems. Since intelligent assist systems are expected to be applied in human society, they are required to work among humans. The challenge of our research is to quantify such altruistic behaviours, especially those relating to cooperation in tasks involving multiple participants. In this paper, role-sharing characteristics are quantified in order to evaluate cooperative performance. For this quantification, ratio indexes of three kinds of behaviour are considered. The first is observation of the other operator's work, a typical altruistic behaviour in a cooperative task. The second is the ratio of egoistic behaviour, such as active conveyance. Third, total activity, measured by the motion distance of the robot, can serve as an index of role sharing. Finally, the correlation between these ratio indexes and task performance is analyzed, and an advanced assist system using the evaluation is discussed.
Working Memory (WM) is a central component of cognition. It has a direct impact not only on core cognitive processes, such as learning, comprehension, and reasoning, but also on language-related processes, such as natural language understanding and referring expression generation. Thus, for robots to achieve human-like natural language capabilities, we argue that their cognitive models should include an accurate WM representation that plays a similarly central role. Our research investigates how different WM models from cognitive psychology affect robots' natural language capabilities. Specifically, we explore the limited-capacity nature of WM and how different information-forgetting strategies, namely decay and interference, impact the human-likeness of utterances formulated by robots.
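The decay and interference strategies mentioned in this abstract can be illustrated with a minimal capacity-limited store. This sketch is our own construction under common textbook definitions (decay: activations fade with time; interference: new items displace the weakest existing trace); the class name and parameters are not from the paper:

```python
class WorkingMemory:
    """Capacity-limited working memory with two forgetting strategies."""

    def __init__(self, capacity=4, strategy="interference",
                 decay_rate=0.3, threshold=0.2):
        self.capacity = capacity
        self.strategy = strategy
        self.decay_rate = decay_rate
        self.threshold = threshold
        self.items = {}  # item -> activation level

    def encode(self, item):
        if self.strategy == "interference" and len(self.items) >= self.capacity:
            # Interference: the weakest (here, oldest among ties) trace is displaced.
            weakest = min(self.items, key=self.items.get)
            del self.items[weakest]
        self.items[item] = 1.0

    def tick(self):
        if self.strategy == "decay":
            # Decay: activations fade each time step; sub-threshold traces are lost.
            self.items = {k: a - self.decay_rate for k, a in self.items.items()
                          if a - self.decay_rate > self.threshold}

    def recall(self):
        return set(self.items)
```

For example, encoding five object names into an interference-based store of capacity four displaces the first one, whereas a decay-based store loses an unrehearsed item after a few time steps regardless of load.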
Design Specifications for a Social Robot Math Tutor Ligthart, Mike E.U.; de Droog, Simone M.; Bossema, Marianne ...
Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction,
03/2023
Conference Proceeding
Open access
To benefit from the social capabilities of a robot math tutor, instead of being distracted by them, a novel approach is needed where the math task and the robot's social behaviors are better intertwined. We present concrete design specifications of how children can practice math via a personal conversation with a social robot and how the robot can scaffold instructions. We evaluated the designs with a three-session experimental user study (n = 130, 8-11 y.o.). Participants got better at math over time when the robot scaffolded instructions. Furthermore, the robot felt more like a friend when it personalized the conversation.
In this work, we present a novel probabilistic appearance representation and describe its application to surprise detection in the context of cognitive mobile robots. The luminance and chrominance of the environment are modeled by Gaussian distributions which are determined from the robot's observations using Bayesian inference. The parameters of the prior distributions over the mean and the precision of the Gaussian models are stored at a dense series of viewpoints along the robot's trajectory. Our probabilistic representation provides us with the expected appearance of the environment and enables the robot to reason about the uncertainty of the perceived luminance and chrominance. Hence, our representation provides a framework for the detection of surprising events, which facilitates attentional selection. In our experiments, we compare the proposed approach with surprise detection based on image differencing. We show that our surprise measure is a superior detector for novelty estimation compared to the measure provided by image differencing.
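The Bayesian machinery described here (a Gaussian appearance model with priors over both mean and precision) has a standard conjugate form, the Normal-Gamma update. The sketch below shows that update, with surprise approximated as a KL divergence between prior and posterior Gaussians under their expected variances; this particular surprise measure is a common choice but an assumption on our part, not necessarily the paper's exact formulation:

```python
import math

def update_normal_gamma(prior, xs):
    """Conjugate Normal-Gamma update for a Gaussian with unknown mean and precision."""
    mu0, kappa0, alpha0, beta0 = prior
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    mu_n = (kappa0 * mu0 + n * xbar) / (kappa0 + n)
    kappa_n = kappa0 + n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * (kappa0 + n))
    return (mu_n, kappa_n, alpha_n, beta_n)

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) )."""
    return 0.5 * (math.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def surprise(prior, xs):
    """Surprise of observations xs: divergence of posterior from prior,
    using expected variances beta/(alpha-1) (requires alpha > 1)."""
    post = update_normal_gamma(prior, xs)
    var_prior = prior[3] / (prior[2] - 1)
    var_post = post[3] / (post[2] - 1)
    return gaussian_kl(post[0], var_post, prior[0], var_prior)

# A prior expecting luminance near 0.5: observations near 0.5 should
# yield low surprise, observations near 5.0 high surprise.
prior = (0.5, 10.0, 5.0, 1.0)
low = surprise(prior, [0.50, 0.52, 0.48])
high = surprise(prior, [5.0, 5.1, 4.9])
```

Storing one such parameter tuple per viewpoint, as the paper describes, lets the robot flag viewpoints whose fresh observations diverge sharply from the stored belief.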
LESS is More Bobu, Andreea; Scobee, Dexter R. R.; Fisac, Jaime F. ...
2020 15th ACM/IEEE International Conference on Human-Robot Interaction (HRI),
03/2020
Conference Proceeding
Open access
Robots need models of human behavior for both inferring human goals and preferences, and predicting what people will do. A common model is the Boltzmann noisily-rational decision model, which assumes people approximately optimize a reward function and choose trajectories in proportion to their exponentiated reward. While this model has been successful in a variety of robotics domains, its roots lie in econometrics, and in modeling decisions among different discrete options, each with its own utility or reward. In contrast, human trajectories lie in a continuous space, with continuous-valued features that influence the reward function. We propose that it is time to rethink the Boltzmann model, and design it from the ground up to operate over such trajectory spaces. We introduce a model that explicitly accounts for distances between trajectories, rather than only their rewards. Rather than each trajectory affecting the decision independently, similar trajectories now affect the decision together. We start by showing that our model better explains human behavior in a user study. We then analyze the implications this has for robot inference, first in toy environments where we have ground truth and find more accurate inference, and finally for a 7DOF robot arm learning from user demonstrations.
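For intuition, the contrast between the standard Boltzmann model and a distance-aware variant can be sketched on a discrete set of trajectories. The similarity kernel, its `gamma` parameter, and the density-normalization scheme below are illustrative assumptions of ours; the paper's actual model may differ:

```python
import math

def boltzmann(rewards, beta=1.0):
    """Standard model: P(tau) proportional to exp(beta * r(tau))."""
    w = [math.exp(beta * r) for r in rewards]
    z = sum(w)
    return [wi / z for wi in w]

def distance_aware(rewards, features, beta=1.0, gamma=1.0):
    """Sketch of a distance-aware variant: each trajectory's weight is divided
    by the local density of similar trajectories, so near-duplicates share
    probability mass instead of each counting independently."""
    def sim(a, b):
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return math.exp(-gamma * d)
    w = [math.exp(beta * r) / sum(sim(f, g) for g in features)
         for r, f in zip(rewards, features)]
    z = sum(w)
    return [wi / z for wi in w]

# Three equal-reward trajectories; the first two are near-duplicates in
# feature space, the third is far from both.
rewards = [1.0, 1.0, 1.0]
features = [(0.0, 0.0), (0.0, 0.01), (5.0, 5.0)]
p_boltzmann = boltzmann(rewards)           # uniform: duplicates double-count
p_distance = distance_aware(rewards, features)  # duplicates split their mass
```

Under the standard model each option gets 1/3; under the distance-aware variant the isolated trajectory receives roughly the mass of the two near-duplicates combined.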
Self-optimization is a concept for mechatronic systems that leaves the choice among system objectives open as a degree of freedom until runtime, to allow better adaptation to changing system and environment conditions. Demonstrating the concept and transferring the knowledge are not easy, as its effects are hard to see in a complex mechatronic system. To further spread the idea of self-optimization, an intuitive anchor is needed to make it easier to talk about the concept; abstracting from technical details also helps focus on the concept itself. We have developed a multi-agent heterogeneous robotic demonstrator that shows the process of self-optimization on a timescale of minutes. The demonstrator decomposes the roles in a mechatronic system into robotic agents, establishing an association between a function and the behavior of a robot. After demonstrating the setup to expert and non-expert audiences, we have seen the encouraging effect that discussions spin off easily, which helps spread the idea effectively. We present the concept of self-optimization and the behavior-based demonstrator scenario, implemented using BeBot miniature robots and Paderkicker robots in an office environment.