Improving Human-Autonomous Car Interaction Through Gaze Following Behaviors of Driving Agents
- 1 March 2019
- journal article
- Published by Japanese Society for Artificial Intelligence in Transactions of the Japanese Society for Artificial Intelligence
- Vol. 34 (2), A-IA1_1-IA1_1
- https://doi.org/10.1527/tjsai.a-ia1
Abstract
Autonomous cars have been gaining attention as a future transportation option, as they promise to reduce human error and achieve a safer, more energy-efficient, and more comfortable mode of transportation. However, eliminating human involvement may negatively affect the adoption of autonomous cars by impairing perceived safety and the enjoyment of driving. To achieve reliable interaction between an autonomous car and a human operator, the car should evince intersubjectivity, implying that it possesses the same intentions as the human operator. One critical social cue by which humans understand the intentions of others is eye gaze behavior. This paper proposes an interaction method that utilizes the eye gaze behaviors of an in-car driving agent platform to reflect the intentions of a simulated autonomous car, with the potential of enabling human operators to perceive the autonomous car as a social entity. We conducted a preliminary experiment to investigate whether an autonomous car is perceived as possessing the same intentions as a human operator through the gaze-following behaviors of the driving agents, compared with conditions of random gazing and of not using the driving agents at all. The results revealed that the gaze-following behavior of the driving agents increases the perception of intersubjectivity. Furthermore, a detailed analysis of eye gaze data showed that the gaze-following behaviors of the robots received more attention from the driver. Finally, the proposed interaction method demonstrated that the autonomous system was perceived as safer and more enjoyable.