Understanding User Attitudes Towards Negative Side Effects of AI Systems
- 8 May 2021
- conference paper
- Published by Association for Computing Machinery (ACM)
Abstract
Artificial Intelligence (AI) systems deployed in the open world may produce negative side effects: unanticipated, undesirable outcomes that occur in addition to the intended outcomes of the system's actions. These negative side effects affect users directly or indirectly by violating their preferences or altering their environment in an undesirable, potentially harmful manner. While the existing literature has begun to explore techniques for overcoming the impacts of negative side effects in deployed systems, there have been no prior efforts to determine how users perceive and respond to them. We surveyed 183 participants to develop an understanding of user attitudes towards side effects and of how side effects impact user trust in the system. The surveys targeted two domains, an autonomous vacuum cleaner and an autonomous vehicle, each with 183 respondents. The results indicate that users are willing to tolerate side effects that are not safety-critical but prefer to minimize them as much as possible. Furthermore, users are willing to assist the system in mitigating negative side effects by providing feedback and reconfiguring the environment. Trust in the system diminishes if it fails to minimize the impacts of negative side effects over time. These results support key fundamental assumptions in existing techniques and facilitate the development of new methods to overcome negative side effects of AI systems.
Funding Information
- Semiconductor Research Corporation (2906.001)