Inferring Trust From Users’ Behaviours; Agents’ Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration
Open Access
- Published 8 July 2021
- Research article
- Published by Frontiers Media SA in Frontiers in Robotics and AI
Abstract
Collaborative virtual agents help human operators perform tasks in real time. For this collaboration to be effective, human operators must appropriately trust the agent(s) they are interacting with. Multiple factors influence trust, such as the context of interaction, prior experience with automated systems and the quality of the help offered by agents in terms of its transparency and performance. Most of the literature on trust in automation identifies the performance of the agent as a key factor influencing trust. However, other work has shown that the agent's behavior, the type of its errors and the predictability of its actions can influence the likelihood of the user's reliance on the agent and the efficiency of task completion. Our work focuses on how an agent's predictability affects cognitive load, performance and users' trust in a real-time human-agent collaborative task. We used an interactive aiming task in which participants had to collaborate with different agents that varied in their predictability and performance. This setup uses behavioral information (such as task performance and reliance on the agent) as well as standardized survey instruments to estimate participants' reported trust in the agent, cognitive load and perceived task difficulty. Thirty participants took part in our lab-based study. Our results showed that agents with more predictable behaviors had a more positive impact on task performance, reliance and trust while reducing cognitive workload. In addition, we investigated the human-agent trust relationship by creating models that could predict participants' trust ratings from interaction data. We found that we could reliably estimate participants' reported trust in the agents using information related to performance, task difficulty and reliance. This study provides insights into the behavioral factors that are most meaningful for anticipating complacent or distrusting attitudes toward automation.
With this work, we seek to pave the way for the development of trust-aware agents capable of responding more appropriately to users by monitoring the components of the human-agent relationship that are most salient for trust calibration.
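The abstract describes models that estimate participants' reported trust from interaction data related to performance, task difficulty and reliance. A minimal ordinary-least-squares sketch of that idea follows; it is an illustration under stated assumptions, not the study's actual method: the per-trial feature values, the 7-point trust scale and the coefficients are all hypothetical.

```python
# Hypothetical sketch: fitting a linear model of reported trust from
# three interaction features (performance, task difficulty, reliance).
# All numbers below are synthetic and illustrative, not from the study.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_linear(X, y):
    """Ordinary least squares via the normal equations X^T X w = X^T y."""
    Xa = [[1.0] + row for row in X]  # prepend an intercept column
    m, n = len(Xa), len(Xa[0])
    XtX = [[sum(Xa[k][i] * Xa[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Xty = [sum(Xa[k][i] * y[k] for k in range(m)) for i in range(n)]
    return solve(XtX, Xty)

def predict(w, row):
    """Intercept plus weighted sum of the features."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))

# Synthetic per-trial features: [performance, task difficulty, reliance]
X = [[0.9, 0.2, 0.8], [0.7, 0.5, 0.6], [0.4, 0.75, 0.35],
     [0.8, 0.3, 0.7], [0.5, 0.65, 0.45], [0.3, 0.9, 0.2]]
# Hypothetical reported trust on a 1-7 scale (generated exactly linearly,
# so the fit can recover the generating coefficients)
y = [4.3, 3.2, 1.8, 3.8, 2.3, 1.2]

w = fit_linear(X, y)
print(round(predict(w, [0.85, 0.25, 0.75]), 2))
```

Because the synthetic targets are an exact linear function of the features, the fitted weights match the generating coefficients up to floating-point error; with real questionnaire data the residuals would of course be nonzero, and the paper's actual modeling pipeline may differ entirely.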