

Talks and Poster Presentations (with Proceedings-Entry):

A. Bampoulidis, J. Palotti, M. Lupu, J. Brassey, A. Hanbury:
"Does Online Evaluation Correspond to Offline Evaluation in Query Auto Completion?";
Poster: 39th European Conference on Information Retrieval, Aberdeen, Scotland, UK; 2017-04-09 - 2017-04-13; in: "Advances in Information Retrieval", Springer, (2017), ISBN: 978-3-319-56608-5; 713 - 719.



English abstract:
Query Auto Completion is the task of suggesting queries to the users of a search engine while they are typing a query into the search box. In recent years there has been renewed interest in research on improving the quality of this task. The published improvements were assessed using offline evaluation techniques and metrics. In this paper, we provide a comparison of online and offline assessments for Query Auto Completion. We show that there is a large potential for significant bias if the raw data used in an online experiment is later re-used in offline experiments to evaluate new methods.


Electronic version of the publication:
https://publik.tuwien.ac.at/files/publik_262234.pdf


Created from the Publication Database of the Vienna University of Technology.