TUTCRIS - Tampere University of Technology


Recurrent Neural Networks for Polyphonic Sound Event Detection in Real Life Recordings

Research output: peer-reviewed

Details

Original language: English
Title of host publication: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pages: 6440-6444
Number of pages: 5
DOI - permanent links
Status: Published - March 2016
OKM publication type: A4 Article in a conference publication
Event: IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING
Duration: 1 January 1900 - 1 January 2000

Publication series

Name
ISSN (electronic): 2379-190X

Conference

Conference: IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING
Period: 1/01/00 - 1/01/00

Abstract

In this paper we present an approach to polyphonic sound event detection in real-life recordings based on bi-directional long short-term memory (BLSTM) recurrent neural networks (RNNs). A single multilabel BLSTM RNN is trained to map acoustic features of a mixture signal, consisting of sounds from multiple classes, to binary activity indicators for each event class. Our method is tested on a large database of real-life recordings, with 61 classes (e.g. music, car, speech) from 10 different everyday contexts. The proposed method outperforms previous approaches by a large margin, and the results are further improved using data augmentation techniques. Overall, our system reports an average F1-score of 65.5% on 1-second blocks and 64.7% on single frames, a relative improvement over the previous state-of-the-art approach of 6.8% and 15.1%, respectively.
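The core idea of the abstract — a single bidirectional recurrent network whose per-frame sigmoid outputs act as binary activity indicators for every event class — can be sketched as follows. This is a minimal NumPy illustration only, not the authors' implementation: the layer sizes, feature dimensionality, random weights, and 0.5 decision threshold are arbitrary assumptions, and training (multilabel binary cross-entropy) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with randomly initialised (untrained) weights."""
    def __init__(self, n_in, n_hidden):
        self.n_hidden = n_hidden
        # Stacked weights for input, forget, candidate, and output gates.
        self.W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # update cell state
        h = o * np.tanh(c)               # emit hidden state
        return h, c

def run_lstm(cell, xs, reverse=False):
    """Run the cell over a (T, n_in) feature sequence, optionally backwards."""
    h = np.zeros(cell.n_hidden)
    c = np.zeros(cell.n_hidden)
    order = range(len(xs) - 1, -1, -1) if reverse else range(len(xs))
    out = [None] * len(xs)
    for t in order:
        h, c = cell.step(xs[t], h, c)
        out[t] = h
    return np.stack(out)

def blstm_event_detector(xs, fwd, bwd, W_out, b_out, threshold=0.5):
    """Map per-frame acoustic features to multilabel event activities."""
    # Concatenate forward and backward hidden states per frame.
    h = np.concatenate([run_lstm(fwd, xs),
                        run_lstm(bwd, xs, reverse=True)], axis=1)
    probs = sigmoid(h @ W_out.T + b_out)  # one sigmoid per event class
    return probs, (probs > threshold).astype(int)

# Toy dimensions: 20 frames, 40 features, 16 hidden units/direction, 61 classes.
T, n_in, n_hid, n_classes = 20, 40, 16, 61
xs = rng.standard_normal((T, n_in))
fwd, bwd = LSTMCell(n_in, n_hid), LSTMCell(n_in, n_hid)
W_out = rng.standard_normal((n_classes, 2 * n_hid)) * 0.1
b_out = np.zeros(n_classes)

probs, activities = blstm_event_detector(xs, fwd, bwd, W_out, b_out)
print(probs.shape, activities.shape)  # (20, 61) (20, 61)
```

Because each class has its own independent sigmoid (rather than a shared softmax), several events can be active in the same frame, which is what makes the detector polyphonic.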

Julkaisufoorumi (Publication Forum) level