Active learning strategies for rating elicitation in collaborative filtering
- 1 December 2013
- journal article
- Published by Association for Computing Machinery (ACM) in ACM Transactions on Intelligent Systems and Technology
- Vol. 5 (1), 1-33
- https://doi.org/10.1145/2542182.2542195
Abstract
The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of the available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality data during training: garbage in, garbage out. Active learning aims to remedy this problem by focusing on obtaining better-quality data that more aptly reflects a user's preferences. However, the traditional evaluation of active learning strategies has two major flaws that significantly distort assessments of system performance (prediction error, precision, and quantity of elicited ratings): (1) performance has been evaluated for each user independently, ignoring system-wide improvements, and (2) active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). In this article we show that an elicited rating has effects across the system, so a typical user-centric evaluation, which ignores changes in the rating predictions of other users, also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system centric). We propose a new evaluation methodology and use it to evaluate several novel and state-of-the-art rating elicitation strategies. We found that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the rating elicitation process and on the evaluation measure (MAE, NDCG, and Precision). In particular, we show that some common user-centric strategies may actually degrade the overall performance of a system. Finally, we show that the performance of many common active learning strategies changes significantly when evaluated concurrently with the natural acquisition of ratings in recommender systems.
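The abstract names MAE, NDCG, and Precision as the evaluation measures used to compare elicitation strategies. As an illustrative sketch (not the paper's actual evaluation code, and with hypothetical function names), the two less self-explanatory measures can be computed for a list of rating predictions and a ranked recommendation list as follows:

```python
import math

def mae(predicted, actual):
    """Mean absolute error between predicted and true ratings."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def ndcg_at_k(ranked_relevances, k):
    """Normalized discounted cumulative gain at cutoff k.

    ranked_relevances: relevance scores (e.g. true ratings) of items in the
    order the recommender ranked them; normalization divides by the DCG of
    the ideal (best-possible) ordering.
    """
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

For example, `mae([4.5, 3.0], [5.0, 3.0])` gives 0.25, and a list already ranked in ideal order yields an NDCG of 1.0. A lower MAE and higher NDCG after eliciting a batch of ratings would indicate, under this sketch, that the elicitation strategy improved the system.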