Does Online Evaluation Correspond to Offline Evaluation in Query Auto Completion?

Type: 
Poster presentation with proceedings
Proceedings: 
Advances in Information Retrieval
Publisher: 
Springer
Pages: 
713 - 719
ISBN: 
978-3-319-56608-5
Year: 
2017
Abstract: 
Query Auto Completion is the task of suggesting queries to the users of a search engine while they are typing a query in the search box. In recent years there has been renewed interest in research on improving the quality of this task. The published improvements were assessed using offline evaluation techniques and metrics. In this paper, we provide a comparison of online and offline assessments for Query Auto Completion. We show that there is a large potential for significant bias if the raw data used in an online experiment is later re-used in offline experiments to evaluate new methods.
TU Focus: 
Computational Science and Engineering
Reference: 

A. Bampoulidis, J. Palotti, M. Lupu, J. Brassey, A. Hanbury:
"Does Online Evaluation Correspond to Offline Evaluation in Query Auto Completion?";
Poster: 39th European Conference on Information Retrieval, Aberdeen, Scotland, UK; 09.04.2017 - 13.04.2017; in: "Advances in Information Retrieval", Springer, (2017), ISBN: 978-3-319-56608-5; pp. 713 - 719.

Additional Information

Last changed: 
24.10.2017 12:28:48
TU Id: 
262234
Accepted: 
Accepted
Author List: 
A. Bampoulidis, J. Palotti, M. Lupu, J. Brassey, A. Hanbury