

Talks and Poster Presentations (with Proceedings-Entry):

P. Poller, M. Chikobava, J. Hodges, M. Kritzler, F. Michahelles, T. Becker:
"Back-end semantics for multimodal dialog on XR devices";
Talk: 26th Annual Meeting of the Intelligent Interfaces Community, Texas/online; 2021-04-13 - 2021-04-17; in: "26th Annual Meeting of the Intelligent Interfaces Community", ACM, Texas (2021), 75 - 77.



English abstract:
Extended Reality (XR) devices have great potential to become the next wave in mobile interaction. They provide
powerful, easy-to-use Augmented Reality (AR) and/or Mixed Reality (MR) in conjunction with multimodal
interaction facilities using gaze, gesture, and speech. However, current implementations typically lack a
coherent semantic representation of the virtual elements, back-end communication, and dialog capabilities.
Existing devices are often restricted to mere command-and-control interactions. To address these shortcomings
and realize enhanced system capabilities and comprehensive interactivity, we have developed a flexible modular
approach that integrates powerful back-end platforms using standard API interfaces. As a concrete example,
we present our distributed implementation of a multimodal dialog system on the Microsoft HoloLens®. It
uses the SiAM-dp multimodal dialog platform as a back-end service and an Open Semantic Framework (OSF)
back-end server to extract the semantic models for creating the dialog domain model.
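To make the described client/back-end split concrete, the following is a minimal sketch of the kind of fused input event an XR front-end might send to a dialog back-end over a standard JSON API. All field names and the helper function are illustrative assumptions for this sketch, not the actual SiAM-dp or OSF message schema.

```python
import json

def make_multimodal_event(utterance, gaze_target, gesture=None):
    """Fuse speech, gaze, and an optional gesture into one input event.

    Hypothetical payload shape: a back-end dialog platform would resolve
    the deictic utterance ("open this") against the gazed-at entity.
    """
    event = {
        "type": "multimodal_input",
        "speech": {"utterance": utterance},
        "gaze": {"target_entity": gaze_target},
    }
    if gesture is not None:
        event["gesture"] = {"name": gesture}
    return event

# Example: the user looks at a virtual valve and says "open this"
# while performing an air-tap gesture.
msg = make_multimodal_event("open this", gaze_target="valve_3", gesture="air_tap")
payload = json.dumps(msg)  # serialized for transport to the back-end service
```

In such an architecture, keeping the fusion result as a single semantically typed event is what lets a back-end platform map it onto a dialog domain model instead of treating each modality as an isolated command channel.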

Keywords:
human computer interaction, multimodal dialog system, dialog platform, semantics, extended reality (XR)


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1145/3397482.3450719


Created from the Publication Database of the Vienna University of Technology.