Novelty and diversity in information retrieval evaluation
- 20 July 2008
- conference paper
- Published by Association for Computing Machinery (ACM)
- p. 659-666
- https://doi.org/10.1145/1390334.1390446
Abstract
Evaluation measures act as objective functions to be optimized by information retrieval systems. Such objective functions must accurately reflect user requirements, particularly when tuning IR systems and learning ranking functions. Ambiguity in queries and redundancy in retrieved documents are poorly reflected by current evaluation measures. In this paper, we present a framework for evaluation that systematically rewards novelty and diversity. We develop this framework into a specific evaluation measure, based on cumulative gain. We demonstrate the feasibility of our approach using a test collection based on the TREC question answering track.
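The specific measure the paper develops is α-nDCG: documents are judged against a set of information "nuggets", and the gain a document contributes for a nugget decays by a factor of (1 − α) each time that nugget has already appeared higher in the ranking, so redundant results earn less and novel ones earn more. The following is a minimal Python sketch of that idea, based on the abstract's description; the function names and the binary nugget judgments (`nuggets_of` maps a document ID to the set of nuggets it covers) are illustrative assumptions, not the authors' code.

```python
import math

def gains(ranking, nuggets_of, alpha=0.5):
    """Per-rank gain: a nugget's contribution decays by (1 - alpha)
    each time it was already covered by a higher-ranked document."""
    seen = {}  # nugget -> count of earlier documents covering it
    out = []
    for doc in ranking:
        g = 0.0
        for n in nuggets_of.get(doc, ()):
            g += (1 - alpha) ** seen.get(n, 0)
            seen[n] = seen.get(n, 0) + 1
        out.append(g)
    return out

def dcg(ranking, nuggets_of, alpha=0.5, depth=10):
    """Discounted cumulated gain over the top `depth` ranks."""
    return sum(g / math.log2(j + 2)
               for j, g in enumerate(gains(ranking[:depth], nuggets_of, alpha)))

def alpha_ndcg(ranking, nuggets_of, alpha=0.5, depth=10):
    """Normalize by an 'ideal' ranking; the exact ideal ordering is
    intractable to compute, so this sketch uses a greedy approximation."""
    pool, ideal, seen = list(nuggets_of), [], {}
    while pool:
        best = max(pool, key=lambda d: sum((1 - alpha) ** seen.get(n, 0)
                                           for n in nuggets_of[d]))
        ideal.append(best)
        for n in nuggets_of[best]:
            seen[n] = seen.get(n, 0) + 1
        pool.remove(best)
    denom = dcg(ideal, nuggets_of, alpha, depth)
    return dcg(ranking, nuggets_of, alpha, depth) / denom if denom else 0.0

# Example: d1 covers two nuggets, d2 repeats one of them, d3 adds a new one.
judgments = {"d1": {"n1", "n2"}, "d2": {"n1"}, "d3": {"n3"}}
print(alpha_ndcg(["d1", "d3", "d2"], judgments))  # diverse order scores higher
print(alpha_ndcg(["d1", "d2", "d3"], judgments))  # redundant d2 early scores lower
```

With α = 0.5, placing the redundant document d2 before the novel d3 lowers the score, which is exactly the behavior the abstract describes: the measure systematically rewards novelty and penalizes redundancy.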