Principles of Explanatory Debugging to Personalize Interactive Machine Learning
- 18 March 2015
- conference paper
- Published by Association for Computing Machinery (ACM)
- pp. 126-137
- https://doi.org/10.1145/2678025.2701399
Abstract
No abstract available
This publication has 25 references indexed in Scilit:
- You Are the Only Possible Oracle: Effective Test Selection for End Users of Interactive Machine Learning Systems. IEEE Transactions on Software Engineering, 2013
- Are explanations always important? Association for Computing Machinery (ACM), 2012
- Why-oriented end-user debugging of naive Bayes text classification. ACM Transactions on Interactive Intelligent Systems, 2011
- The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 2008
- Multinomial Naive Bayes for Text Categorization Revisited. Lecture Notes in Computer Science, 2004
- The role of trust in automation reliance. International Journal of Human-Computer Studies, 2003
- Interactive machine learning. Association for Computing Machinery (ACM), 2003
- A review of explanation methods for Bayesian networks. The Knowledge Engineering Review, 2002
- Using neural networks for data mining. Future Generation Computer Systems, 1997
- Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Elsevier BV, 1988