

Talks and Poster Presentations (with Proceedings-Entry):

M. Zlabinger, R. Sabou, S. Hofstätter, M. Sertkan, A. Hanbury:
"DEXA: Supporting Non-Expert Annotators with Dynamic Examples from Experts";
Talk: SIGIR 2020 - 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi'an, China; 2020-07-25 - 2020-07-30; in: "SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval", J. Huang, Y. Chang et al. (ed.); Association for Computing Machinery, New York, NY, United States, (2020), ISBN: 978-1-4503-8016-4; 2109 - 2112.



English abstract:
The success of crowdsourcing-based annotation of text corpora depends on ensuring that crowdworkers are sufficiently well-trained to perform the annotation task accurately. To that end, a frequent approach to train annotators is to provide instructions and a few example cases that demonstrate how the task should be performed (referred to as the CONTROL approach). These globally defined "task-level examples", however, (i) often only cover the common cases that are encountered during an annotation task; and (ii) require effort from crowdworkers during the annotation process to find the most relevant example for the currently annotated sample. To overcome these limitations, we propose to support workers, in addition to task-level examples, also with "task-instance level" examples that are semantically similar to the currently annotated data sample (referred to as Dynamic Examples for Annotation, DEXA). Such dynamic examples can be retrieved from collections previously labeled by experts, which are usually available as gold standard datasets. We evaluate DEXA on the complex task of annotating participants, interventions, and outcomes (known as PIO) in sentences of medical studies. The dynamic examples are retrieved using BioSent2Vec, an unsupervised semantic sentence similarity method specific to the biomedical domain. Results show that (i) workers of the DEXA approach reach on average much higher agreement (Cohen's kappa) with experts than workers of the CONTROL approach (avg. of 0.68 with experts in DEXA vs. 0.40 in CONTROL); (ii) already three annotations aggregated by majority voting in the DEXA approach reach substantial agreement with experts of 0.78/0.75/0.69 for P/I/O (in CONTROL 0.73/0.58/0.46). Finally, (iii) we acquire explicit feedback from workers and show that in the majority of cases (avg. 72%) workers find the dynamic examples useful.
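
Illustration (not part of the published record): a minimal sketch of the retrieval step described in the abstract, i.e. finding the expert-labeled sentence most similar to the sentence currently being annotated and showing it to the worker as a dynamic example. The paper uses BioSent2Vec embeddings; here a TF-IDF vectorizer stands in so the sketch stays self-contained, and the pool of expert-labeled sentences is hypothetical.

    # Sketch of the DEXA retrieval idea with TF-IDF as a stand-in for BioSent2Vec.
    # All sentences and labels below are illustrative, not from the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical expert-labeled pool: (sentence, PIO labels) pairs.
    expert_pool = [
        ("Patients with type 2 diabetes received metformin for 12 weeks.",
         {"P": "patients with type 2 diabetes", "I": "metformin", "O": None}),
        ("The intervention group showed reduced systolic blood pressure.",
         {"P": None, "I": "intervention group", "O": "systolic blood pressure"}),
    ]

    def retrieve_dynamic_example(query_sentence, pool):
        """Return the expert-labeled sentence most similar to the query sentence."""
        sentences = [s for s, _ in pool]
        vectorizer = TfidfVectorizer().fit(sentences + [query_sentence])
        pool_vecs = vectorizer.transform(sentences)
        query_vec = vectorizer.transform([query_sentence])
        scores = cosine_similarity(query_vec, pool_vecs)[0]
        # The best-matching expert example is shown to the crowdworker
        # alongside the global task-level instructions.
        return pool[scores.argmax()]

    example_sentence, example_labels = retrieve_dynamic_example(
        "Participants with diabetes were given a daily dose of insulin.", expert_pool)
    print(example_sentence, example_labels)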

Keywords:
human data annotation, crowdsourcing, PICO task


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1145/3397271.3401334


Created from the Publication Database of the Vienna University of Technology.