Talks and Poster Presentations (with Proceedings-Entry):
A. Schindler, S. Gordea, P. Knees:
"Unsupervised cross-modal audio representation learning from unstructured multilingual text";
Talk: 35th Annual ACM Symposium on Applied Computing (SAC '20),
Brno, Czech Republic;
2020-04-03; in: "Proceedings of the 35th Annual ACM Symposium on Applied Computing (SAC '20)",
We present an approach to unsupervised audio representation learning. Based on a Triplet Neural Network architecture, we harness semantically related cross-modal information to estimate audio track-relatedness. By applying Latent Semantic Indexing (LSI), we embed the corresponding textual information into a latent vector space from which we derive track relatedness for online triplet selection. This LSI topic modeling facilitates fine-grained selection of similar and dissimilar audio-track pairs to learn the audio representation using a Convolutional Recurrent Neural Network (CRNN). In this way, we directly project the semantic context of the unstructured text modality onto the learned representation space of the audio modality without deriving structured ground-truth annotations from it. We evaluate our approach on the Europeana Sounds collection and show how to improve search in digital audio libraries by harnessing the multilingual metadata provided by numerous European digital libraries. We show that our approach is invariant to the variety of annotation styles as well as to the different languages of this collection. The learned representations perform comparably to the baseline of handcrafted features, and even exceed this baseline in similarity-retrieval precision at higher cut-offs, with only 15% of the baseline's feature-vector length.
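The triplet-selection step described in the abstract can be illustrated with a minimal sketch: embed document-metadata texts with LSI (truncated SVD of a TF-IDF matrix), measure track relatedness by cosine similarity in the latent topic space, and pick the most- and least-related documents as positive and negative for each anchor. All function names, the latent dimensionality `k`, and the selection heuristic below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tfidf(counts):
    # counts: (docs, terms) raw term counts -> TF-IDF weighted matrix
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)
    idf = np.log((1 + counts.shape[0]) / (1 + df)) + 1.0
    return tf * idf

def lsi_embed(X, k=2):
    # Truncated SVD: document coordinates in a k-dimensional latent topic space
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * S[:k]

def select_triplets(emb):
    # For each anchor, the most-related document (cosine similarity in LSI
    # space) is the positive, the least-related one the negative.
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = norm @ norm.T
    sim_pos = sim.copy()
    np.fill_diagonal(sim_pos, -np.inf)  # anchor cannot be its own positive
    sim_neg = sim.copy()
    np.fill_diagonal(sim_neg, np.inf)   # anchor cannot be its own negative
    return [(a, int(np.argmax(sim_pos[a])), int(np.argmin(sim_neg[a])))
            for a in range(len(emb))]

# Toy corpus: documents 0/1 and 2/3 share vocabulary, so each pair clusters.
counts = np.array([[3, 1, 0, 0],
                   [2, 2, 0, 0],
                   [0, 0, 2, 1],
                   [0, 0, 1, 2]], dtype=float)
triplets = select_triplets(lsi_embed(tfidf(counts), k=2))
```

In the full approach these triplets would then drive the triplet loss that trains the CRNN on the audio modality; the sketch only covers the text-side relatedness estimation.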
Keywords: deep neural networks, cross-modal learning, audio representation learning
"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
Electronic version of the publication:
Created from the Publication Database of the Vienna University of Technology.