TUTCRIS - Tampere University of Technology


Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection

Research output: peer-reviewed

Details

Original language: English
Pages: 1291-1303
Number of pages: 13
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 25
Issue: 6
DOI - permanent links
Status: Published - June 2017
OKM publication type: A1 Original article

Abstract

Sound events often occur in unstructured environments where they exhibit wide variations in their frequency content and temporal structure. Convolutional neural networks (CNNs) are able to extract higher-level features that are invariant to local spectral and temporal variations. Recurrent neural networks (RNNs) are powerful in learning the longer-term temporal context in audio signals. CNNs and RNNs as classifiers have recently shown improved performance over established methods in various sound recognition tasks. We combine these two approaches in a convolutional recurrent neural network (CRNN) and apply it to a polyphonic sound event detection task. We compare the performance of the proposed CRNN method with CNN, RNN, and other established methods, and observe a considerable improvement for four different datasets consisting of everyday sound events.
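The architecture outlined in the abstract (convolutional blocks for local spectro-temporal feature extraction, recurrent layers for longer-term temporal context, and frame-wise multi-label outputs for polyphonic detection) can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' implementation; the log-mel spectrogram input, layer counts, filter sizes, and pooling factors are assumptions chosen only to make the example self-contained.

```python
import torch
import torch.nn as nn


class CRNN(nn.Module):
    """Minimal CRNN sketch for polyphonic sound event detection.

    Input:  log-mel spectrogram of shape (batch, 1, time, mel_bands).
    Output: frame-wise class activities of shape (batch, time, n_classes);
            sigmoid outputs allow several events to be active at once.
    All hyperparameters here are illustrative assumptions.
    """

    def __init__(self, n_mels: int = 40, n_classes: int = 6):
        super().__init__()
        # CNN blocks: pool only along the frequency axis so the time
        # resolution of the frame-wise predictions is preserved.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 5)),   # frequency: 40 -> 8
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),   # frequency: 8 -> 2
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),   # frequency: 2 -> 1
        )
        # Recurrent part models longer-term temporal context over frames.
        self.rnn = nn.GRU(input_size=64, hidden_size=64,
                          num_layers=2, batch_first=True)
        # Frame-wise multi-label classifier.
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time, mels)
        z = self.cnn(x)                     # (batch, 64, time, 1)
        z = z.squeeze(-1).transpose(1, 2)   # (batch, time, 64)
        z, _ = self.rnn(z)                  # (batch, time, 64)
        return torch.sigmoid(self.fc(z))    # per-frame event probabilities


if __name__ == "__main__":
    model = CRNN(n_mels=40, n_classes=6)
    frames = torch.randn(4, 1, 256, 40)     # 4 clips, 256 frames, 40 mel bands
    activity = model(frames)
    print(activity.shape)                   # torch.Size([4, 256, 6])
```

Pooling only over the frequency axis is one way to keep one prediction per input frame, which a polyphonic, frame-level detection task requires; the sigmoid output layer treats the classes as independent so overlapping events can be detected simultaneously.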
