

Talks and poster presentations (with proceedings entry):

M. Zlabinger, R. Sabou, S. Hofstätter, M. Sertkan, A. Hanbury:
"DEXA: Supporting Non-Expert Annotators with Dynamic Examples from Experts";
Talk: SIGIR 2020 - 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi'an, China; 25.07.2020 - 30.07.2020; in: "SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval", J. Huang, Y. Chang et al. (Eds.); Association for Computing Machinery, New York, NY, United States, (2020), ISBN: 978-1-4503-8016-4; pp. 2109 - 2112.



Abstract (English):
The success of crowdsourcing-based annotation of text corpora depends on ensuring that crowdworkers are sufficiently well-trained to perform the annotation task accurately. To that end, a frequent approach to train annotators is to provide instructions and a few example cases that demonstrate how the task should be performed (referred to as the CONTROL approach). These globally defined "task-level examples", however, (i) often only cover the common cases that are encountered during an annotation task; and (ii) require effort from crowdworkers during the annotation process to find the most relevant example for the currently annotated sample. To overcome these limitations, we propose to support workers, in addition to task-level examples, also with "task-instance level" examples that are semantically similar to the currently annotated data sample (referred to as Dynamic Examples for Annotation, DEXA). Such dynamic examples can be retrieved from collections previously labeled by experts, which are usually available as gold standard datasets. We evaluate DEXA on a complex task of annotating participants, interventions, and outcomes (known as PIO) in sentences of medical studies. The dynamic examples are retrieved using BioSent2Vec, an unsupervised semantic sentence similarity method specific to the biomedical domain. Results show that (i) workers following the DEXA approach reach, on average, much higher agreement (Cohen's Kappa) with experts than workers following the CONTROL approach (avg. of 0.68 with experts in DEXA vs. 0.40 in CONTROL); (ii) already three annotations aggregated via majority voting in the DEXA approach reach substantial agreement with experts of 0.78/0.75/0.69 for P/I/O (in CONTROL 0.73/0.58/0.46). Finally, (iii) we acquire explicit feedback from workers and show that in the majority of cases (avg. 72%) workers find the dynamic examples useful.
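
To make the workflow described in the abstract concrete, the following is a minimal Python sketch (not the authors' code) of the two technical steps it mentions: retrieving a semantically similar expert-labeled sentence as a dynamic example, and aggregating three crowd annotations by majority vote before measuring agreement with experts via Cohen's Kappa. The embed() function is a hypothetical placeholder standing in for BioSent2Vec sentence embeddings, and all data below is invented purely for illustration.

# Sketch of the DEXA idea: retrieve the most similar expert-labeled sentence
# as a "dynamic example", aggregate crowd labels by majority vote, and score
# agreement with experts using Cohen's Kappa.
from collections import Counter

import numpy as np
from sklearn.metrics import cohen_kappa_score


def embed(sentences):
    """Hypothetical stand-in for a sentence-embedding model (e.g. BioSent2Vec).
    Returns one vector per sentence; random vectors here, for illustration only."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 8))


def retrieve_dynamic_example(query_sentence, expert_sentences, expert_labels):
    """Return the expert-labeled sentence most similar to the query (cosine similarity)."""
    vectors = embed([query_sentence] + expert_sentences)
    query, pool = vectors[0], vectors[1:]
    sims = pool @ query / (np.linalg.norm(pool, axis=1) * np.linalg.norm(query) + 1e-9)
    best = int(np.argmax(sims))
    return expert_sentences[best], expert_labels[best]


def majority_vote(annotations):
    """Aggregate several crowd annotations (lists of per-token labels) by majority vote."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*annotations)]


if __name__ == "__main__":
    # Toy retrieval: pick the expert sentence closest to the one being annotated.
    expert_sentences = [
        "Patients received 10 mg of drug X daily.",
        "The primary outcome was overall survival.",
    ]
    expert_sentence_labels = ["Intervention", "Outcome"]
    example, label = retrieve_dynamic_example(
        "Participants were given a placebo twice a day.",
        expert_sentences,
        expert_sentence_labels,
    )
    print("dynamic example:", example, "->", label)

    # Toy aggregation: token-level P/I/O/None labels for one sentence, three workers.
    crowd = [
        ["P", "P", "O", "O", "N"],
        ["P", "I", "O", "O", "N"],
        ["P", "P", "O", "N", "N"],
    ]
    expert = ["P", "P", "O", "N", "N"]
    aggregated = majority_vote(crowd)
    print("aggregated:", aggregated)
    print("kappa vs. expert:", cohen_kappa_score(aggregated, expert))
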

Keywords:
human data annotation, crowdsourcing, PICO task


"Offizielle" elektronische Version der Publikation (entsprechend ihrem Digital Object Identifier - DOI)
http://dx.doi.org/10.1145/3397271.3401334


Generated from the publication database of the Technische Universität Wien.