

Journal article:

A. Schindler, A. Rauber:
"Harnessing Music-Related Visual Stereotypes for Music Information Retrieval";
ACM Transactions on Intelligent Systems and Technology, 8 (2016), 2; pp. 1-21.



Abstract (English):
Over decades, music labels have shaped easily identifiable genres to improve the recognition value and subsequently the market sales of new music acts. With print magazines and later music television as important distribution channels, visual representation thus played and still plays a significant role in music marketing. Visual stereotypes developed over decades that enable us to quickly identify the referenced music by sight alone, without listening. Despite the richness of music-related visual information provided by music videos and album covers as well as T-shirts, advertisements, and magazines, research on harnessing this information to advance existing problems of music retrieval or recommendation, or to approach new ones, is scarce or missing. In this article, we present our research on visual music computing, which aims to extract stereotypical music-related visual information from music videos. To provide comprehensive and reproducible results, we present the Music Video Dataset, a thoroughly assembled suite of datasets with dedicated evaluation tasks that are aligned with current Music Information Retrieval tasks. Based on this dataset, we evaluate conventional low-level image-processing and affect-related features to give an overview of the expressiveness of fundamental visual properties such as color, illumination, and contrast. Further, we introduce a high-level approach based on visual concept detection to harness visual stereotypes. This approach decomposes the semantic content of music video frames into concrete concepts such as vehicles, tools, and so on, defined in a wide visual vocabulary. Concepts are detected using convolutional neural networks, and their frequency distributions serve as semantic descriptions of a music video. Evaluations showed that these descriptions perform well in predicting the music genre of a video and even outperform audio-content descriptors on cross-genre thematic tags. Further, highly significant performance improvements were observed when audio-based approaches were augmented with the introduced visual approach.
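
The concept-detection pipeline described in the abstract can be sketched in a few lines. The listing below is a minimal illustration, not the authors' implementation: it assumes frames have already been extracted into one directory per video, uses a torchvision ResNet-50 pretrained on ImageNet as a stand-in for the paper's concept detector and its wide visual vocabulary, aggregates the per-frame top-k concepts into a normalized frequency histogram per video, and trains a simple scikit-learn classifier on those descriptors to predict genre. All paths, labels, and the choice of detector and classifier are assumptions made for illustration only.

    # Illustrative sketch (not the authors' code): build a per-video concept
    # frequency descriptor from sampled frames and classify genre with it.
    # Assumptions: ImageNet classes stand in for the paper's visual vocabulary,
    # ResNet-50 stands in for the concept detector, and frames are pre-extracted
    # as JPEG files, one directory per video.

    from pathlib import Path

    import numpy as np
    import torch
    from PIL import Image
    from sklearn.linear_model import LogisticRegression
    from torchvision import models, transforms

    NUM_CONCEPTS = 1000  # size of the (proxy) visual vocabulary

    # Pretrained CNN used as a generic visual concept detector.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])


    def video_descriptor(frame_dir: Path, top_k: int = 5) -> np.ndarray:
        """Concept-frequency histogram over all frames of one video."""
        histogram = np.zeros(NUM_CONCEPTS)
        with torch.no_grad():
            for frame_path in sorted(frame_dir.glob("*.jpg")):
                image = preprocess(Image.open(frame_path).convert("RGB"))
                logits = model(image.unsqueeze(0))[0]
                # Count the top-k detected concepts for this frame.
                for concept_id in logits.topk(top_k).indices.tolist():
                    histogram[concept_id] += 1
        total = histogram.sum()
        return histogram / total if total > 0 else histogram


    # Hypothetical layout: one directory of extracted frames per video,
    # plus a genre label per video.
    videos = [(Path("frames/video_001"), "rock"),
              (Path("frames/video_002"), "hip-hop")]

    X = np.stack([video_descriptor(frames) for frames, _ in videos])
    y = [genre for _, genre in videos]

    # Simple classifier on top of the concept-frequency descriptors.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict(X))

The histogram aggregation mirrors the idea of using concept frequency distributions as semantic descriptions of a video; the article itself works with a dedicated visual vocabulary and evaluates the descriptors both on their own and in combination with audio-based features.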

Keywords:
Music Information Retrieval, Information Retrieval, Machine Learning, Video Retrieval


"Offizielle" elektronische Version der Publikation (entsprechend ihrem Digital Object Identifier - DOI)
http://dx.doi.org/10.1145/2926719


Created from the publication database of the Technische Universität Wien.