

Contributions to Proceedings:

R. Csaky, G. Recski:
"The Gutenberg Dialogue Dataset";
in: "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", issued by: Association for Computational Linguistics; The Association for Computational Linguistics, 2021, ISBN: 978-1-954085-02-2, 138 - 159.



English abstract:
Large datasets are essential for neural modeling of many NLP tasks. Current publicly available open-domain dialogue datasets offer a trade-off between quality (e.g., DailyDialog) and size (e.g., Opensubtitles). We narrow this gap by building a high-quality dataset of 14.8M utterances in English, and smaller datasets in German, Dutch, Spanish, Portuguese, Italian, and Hungarian. We extract and process dialogues from public-domain books made available by Project Gutenberg. We describe our dialogue extraction pipeline, analyze the effects of the various heuristics used, and present an error analysis of extracted dialogues. Finally, we conduct experiments showing that better response quality can be achieved in zero-shot and finetuning settings by training on our data than on the larger but much noisier Opensubtitles dataset. Our open-source pipeline (https://github.com/ricsinaruto/gutenberg-dialog) can be extended to further languages with little additional effort. Researchers can also build their own versions of existing datasets by adjusting various trade-off parameters.
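
Illustrative sketch (not from the publication): the abstract mentions a heuristic dialogue extraction pipeline with adjustable trade-off parameters. The Python snippet below is a minimal toy example of one such quote-based heuristic, assuming that utterances are marked by quotation marks and that consecutive quote-bearing paragraphs form a dialogue. The parameter names (max_gap, min_turns) and the grouping logic are invented for this sketch and do not describe the actual gutenberg-dialog pipeline.

import re

# Matches text enclosed in straight or curly double quotes.
QUOTE_RE = re.compile(r'["\u201c](.+?)["\u201d]', re.DOTALL)

def extract_dialogues(text, max_gap=1, min_turns=2):
    """Toy heuristic: group quoted utterances from nearby paragraphs.

    max_gap   -- how many quote-free paragraphs may separate two turns
                 before the current dialogue is closed (a quality/size knob).
    min_turns -- discard dialogues shorter than this many utterances.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    dialogues, current, gap = [], [], 0
    for para in paragraphs:
        # Collect all quoted spans in this paragraph, normalizing whitespace.
        utterances = [" ".join(m.split()) for m in QUOTE_RE.findall(para)]
        if utterances:
            current.extend(utterances)
            gap = 0
        else:
            gap += 1
            if gap > max_gap and current:
                if len(current) >= min_turns:
                    dialogues.append(current)
                current = []
    if len(current) >= min_turns:
        dialogues.append(current)
    return dialogues

if __name__ == "__main__":
    sample = (
        '"Are you coming to town tomorrow?" asked Anne.\n\n'
        '"I think so," said Marilla, "if the weather holds."\n\n'
        'They walked on in silence for a while.\n\n'
        '"Then I shall wait for you at the station."\n'
    )
    for dialogue in extract_dialogues(sample):
        print(dialogue)

Raising max_gap or lowering min_turns yields a larger but noisier dataset, which is the kind of trade-off parameter the abstract refers to.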


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.18653/v1/2021.eacl-main.11

Electronic version of the publication:
https://publik.tuwien.ac.at/files/publik_296529.pdf


Created from the Publication Database of the Vienna University of Technology.