Intelligent systems are increasingly entering the workplace, gradually moving away from technologies supporting work processes to artificially intelligent (AI) agents becoming team members. Therefore, a deep understanding of effective human-AI collaboration within the team context is required. Both psychology and computer science literature emphasize the importance of trust when humans interact either with human team members or AI agents. However, empirical work and theoretical models that combine these research fields and define team trust in human-AI teams are scarce. Furthermore, they often fail to integrate central aspects, such as the multilevel nature of team trust and the role of AI agents as team members. Building on an integration of current literature on trust in human-AI teaming across different research fields, we propose a multidisciplinary framework of team trust in human-AI teams. The framework highlights the different trust relationships that exist within human-AI teams and acknowledges the multilevel nature of team trust. We discuss the framework's potential for human-AI teaming research and for the design and implementation of trustworthy AI team members.
Introduction:
Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. To build suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which leads to a decrease in the human's trust. Research has found interesting effects of such a reduction of trust on the human's trustworthiness, i.e., the human characteristics that make them more or less reliable. This paper investigates how automation failure in a human-automation collaborative scenario affects the human's trust in the automation, as well as the human's trustworthiness towards the automation.
Methods:
We present a 2 × 2 mixed design experiment in which the participants perform a simulated task in a 2D grid-world, collaborating with an automation in a “moving-out” scenario. During the experiment, we measure the participants’ trustworthiness, trust, and liking regarding the automation, both subjectively and objectively.
Results:
Our results show that automation failure negatively affects the human’s trustworthiness, as well as their trust in and liking of the automation.
Discussion:
Learning the effects of automation failure on trust and trustworthiness can contribute to a better understanding of the nature and dynamics of trust in these teams and to improving human-automation teamwork.
Our lives are made of social interactions, which can be recorded through personal gadgets as well as sensors capturing ubiquitous and social data. This type of data, such as spatio‐temporal data from the real‐time location of people, can then be used to infer interactions, which can in turn be translated into behavioural patterns. In this paper, we consider the automatic discovery of exceptional social behaviour from spatio‐temporal interaction data, focusing on two areas: exceptional subgroups and spatio‐temporal outliers, both in the form of descriptive patterns. To this end, we propose a method for exceptional social behaviour discovery that combines subgroup discovery and network science methods to identify behaviour that deviates from the norm. We also propose the use of two outlier detection metrics for identifying outliers, namely the Local Outlier Factor (LOF) and the Voronoi area. We applied the proposed method to synthetic data as well as two real datasets containing location data from children playing in the school playground. Our results indicate that this approach is valid and able to obtain meaningful knowledge from the data.
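As a minimal sketch of the LOF metric mentioned above, the snippet below applies scikit-learn's LocalOutlierFactor to synthetic 2D positions resembling a playground snapshot; the data, neighbourhood size, and variable names are illustrative assumptions, not drawn from the paper:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Synthetic snapshot: (x, y) positions of 50 children clustered on the
# playground, plus one child standing far away from the group.
rng = np.random.default_rng(42)
positions = rng.normal(loc=[10.0, 10.0], scale=1.0, size=(50, 2))
positions = np.vstack([positions, [[30.0, 30.0]]])  # index 50 is the far point

# LOF compares each point's local density with that of its k nearest
# neighbours; scores well above 1 indicate spatial outliers.
lof = LocalOutlierFactor(n_neighbors=10)
labels = lof.fit_predict(positions)      # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_   # higher score = more anomalous

print(labels[50], scores[50] == scores.max())
```

The Voronoi-area metric described in the abstract would instead flag points whose Voronoi cell area is unusually large, which signals spatial isolation in a complementary way.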
Artificial Trust as a Tool in Human-AI Teams Jorge, Carolina Centeio; Tielman, Myrthe L.; Jonker, Catholijn M.
2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI),
7 March 2022
Conference Proceeding
Open access
Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, it is still a challenge to implement such mechanisms in teams composed of both humans and AI (human-AI teams), even though these are becoming increasingly prevalent. Agents in such teams should not only be trustworthy and promote appropriate trust from the humans, but also know when to trust a human teammate to perform a certain task. In this project, we study trust as a tool for artificial agents to achieve better teamwork. In particular, we want to build mental models of humans so that agents can understand human trustworthiness in the context of human-AI teamwork, taking into account factors such as the characteristics of the human teammates, the task, and the environment.