Tampere University of Technology

TUTCRIS Research Portal

End-to-End Polyphonic Sound Event Detection Using Convolutional Recurrent Neural Networks with Learned Time-Frequency Representation Input

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: 2018 International Joint Conference on Neural Networks, IJCNN 2018 - Proceedings
Publisher: IEEE
ISBN (Electronic): 9781509060146
DOIs
Publication status: Published - 10 Oct 2018
Publication type: A4 Article in a conference publication
Event: International Joint Conference on Neural Networks - Rio de Janeiro, Brazil
Duration: 8 Jul 2018 - 13 Jul 2018

Publication series

Name
ISSN (Electronic): 2161-4407

Conference

Conference: International Joint Conference on Neural Networks
Country: Brazil
City: Rio de Janeiro
Period: 8/07/18 - 13/07/18

Abstract

Sound event detection systems typically consist of two stages: extracting hand-crafted features from the raw audio waveform, and learning a mapping between these features and the target sound events using a classifier. Recently, the focus of sound event detection research has mostly shifted to the latter stage, using standard features such as the mel spectrogram as the input for classifiers such as deep neural networks. In this work, we utilize an end-to-end approach and propose to combine these two stages in a single deep neural network classifier. The feature extraction over the raw waveform is conducted by a feedforward layer block, whose parameters are initialized to extract time-frequency representations. The feature extraction parameters are updated during training, resulting in a representation that is optimized for the specific task. This feature extraction block is followed by (and jointly trained with) a convolutional recurrent network, which has recently given state-of-the-art results in many sound recognition tasks. The proposed system does not outperform a convolutional recurrent network with fixed hand-crafted features. The final magnitude spectrum characteristics of the feature extraction block parameters indicate that the most relevant information for the given task is contained in the 0-3 kHz frequency range, and this is also supported by the empirical results on the SED performance.
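The abstract's key idea, a feedforward layer whose weights are initialized to a time-frequency transform and then updated by backpropagation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame length, hop size, and the choice of a plain DFT-basis initialization (rather than the exact initialization used by the authors) are assumptions for demonstration.

```python
import numpy as np

def init_dft_weights(frame_len):
    """Initialize the feedforward block's weights as DFT bases, so that
    before any training the layer computes a standard magnitude spectrum.
    (Hypothetical initialization; the paper only states the layer is
    initialized to extract time-frequency representations.)"""
    n = np.arange(frame_len)
    k = n[:, None]
    cos_w = np.cos(2.0 * np.pi * k * n / frame_len)   # real part of e^{-j2πkn/N}
    sin_w = -np.sin(2.0 * np.pi * k * n / frame_len)  # imaginary part
    return cos_w, sin_w

def learned_tf_frontend(waveform, frame_len=1024, hop=512):
    """Frame the raw waveform and map each frame through the feedforward
    block, producing a magnitude-spectrogram-like representation of shape
    (n_frames, frame_len). During training, cos_w and sin_w would be
    treated as trainable parameters and updated jointly with the CRNN
    that consumes this output."""
    cos_w, sin_w = init_dft_weights(frame_len)
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    real = frames @ cos_w.T
    imag = frames @ sin_w.T
    return np.sqrt(real**2 + imag**2)
```

At initialization this frontend reproduces the FFT magnitude of each frame exactly; training is then free to reshape the bases toward task-relevant bands (per the abstract, mostly 0-3 kHz for this task).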

Keywords

  • convolutional recurrent neural networks, end-to-end, feature learning, neural networks
