Talks and Poster Presentations (with Proceedings-Entry):
R. Vogl, M. Dorfer, P. Knees:
"Drum transcription from polyphonic music with recurrent neural networks";
Talk: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
New Orleans, LA, USA;
2017-03-09; in: "Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)".
Automatic drum transcription methods aim at extracting a symbolic representation of the notes played by a drum kit in audio recordings. For automatic music analysis, this task is of particular interest because such a transcript can be used to extract high-level information about the piece, e.g., tempo, downbeat positions, meter, and genre cues. In this work, an approach to transcribing drums from polyphonic audio signals based on a recurrent neural network is presented. Deep learning techniques such as dropout and data augmentation are applied to improve the generalization capabilities of the system. The method is evaluated on established reference datasets consisting of solo drum tracks as well as drums mixed with accompaniment, and the results are compared to state-of-the-art approaches on the same datasets. The evaluation shows that the proposed method achieves F-measure values above the state of the art.
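The pipeline summarized in the abstract (spectrogram frames fed through a recurrent network that emits per-frame, per-instrument activations, followed by onset picking) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame size, hidden size, the three drum classes (kick, snare, hi-hat), the random stand-in weights, and the simple threshold-crossing onset picker are all assumptions for the sketch; a real system would learn the weights, using dropout and data augmentation as described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 spectrogram bins per frame, 32 hidden units,
# 3 drum classes (e.g., kick, snare, hi-hat). The weights below are random
# stand-ins; in the paper they would be trained on the reference datasets.
n_bins, n_hidden, n_drums = 64, 32, 3
W_in = rng.normal(scale=0.1, size=(n_hidden, n_bins))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_drums, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_activations(frames):
    """Run a vanilla RNN over spectrogram frames and return one
    activation in [0, 1] per drum class per frame."""
    h = np.zeros(n_hidden)
    out = []
    for x in frames:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        out.append(sigmoid(W_out @ h))      # per-instrument activations
    return np.array(out)

def pick_onsets(act, threshold=0.5):
    """Report frame indices where an activation crosses the threshold
    upward -- a crude stand-in for proper peak picking."""
    onsets = {d: [] for d in range(act.shape[1])}
    above = np.zeros(act.shape[1], dtype=bool)
    for t, frame in enumerate(act):
        for d, v in enumerate(frame):
            if v >= threshold and not above[d]:
                onsets[d].append(t)
            above[d] = v >= threshold
    return onsets

frames = rng.random((100, n_bins))   # stand-in for 100 spectrogram frames
act = rnn_activations(frames)        # shape (100, 3)
onsets = pick_onsets(act)            # {class index: [onset frames]}
```

The predicted onset frames per instrument are what would then be scored against ground-truth annotations with the F-measure, as in the paper's evaluation.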
Keywords: drum transcription, recurrent neural networks, machine learning, music information retrieval
"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
Project Head: Peter Knees
Created from the Publication Database of the Vienna University of Technology.