Detection and Classification of Acoustic Scenes and Events: Outcome of the DCASE 2016 Challenge
Research output › peer-reviewed
|Publication||IEEE/ACM Transactions on Audio Speech and Language Processing|
|Early online date||28 November 2017|
|DOI - permalinks|
|Status||Published - February 2018|
Public evaluation campaigns and datasets promote active development in target research areas, allowing direct comparison of algorithms. The second edition of the challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2016) has offered such an opportunity for development of state-of-the-art methods, and succeeded in drawing together a large number of participants from academic and industrial backgrounds. In this paper, we report on the tasks and outcomes of the DCASE 2016 challenge. The challenge comprised four tasks: acoustic scene classification, sound event detection in synthetic audio, sound event detection in real-life audio, and domestic audio tagging. We present in detail each task and analyse the submitted systems in terms of design and performance. We observe the emergence of deep learning as the most popular classification method, replacing the traditional approaches based on Gaussian mixture models and support vector machines. By contrast, feature representations have not changed substantially throughout the years, as mel frequency-based representations predominate in all tasks. The datasets created for and used in DCASE 2016 are publicly available and are a valuable resource for further research.
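As an illustration of the mel frequency-based representations the paper identifies as predominant across all tasks, the following is a minimal log-mel spectrogram sketch in plain NumPy. The parameter values (40 mel bands, 1024-point FFT, 512-sample hop) are illustrative assumptions, not the settings of any DCASE 2016 baseline or submission:

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale warping of frequency in Hz.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of hz_to_mel.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=40, n_fft=1024, sr=44100):
    # Triangular filters with centers spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):            # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):           # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(x, sr=44100, n_fft=1024, hop=512, n_mels=40):
    # Frame the signal, apply a Hann window, take the power spectrum,
    # then project each frame onto the mel filterbank and take the log.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    fb = mel_filterbank(n_mels, n_fft, sr)
    return np.log(power @ fb.T + 1e-10)  # small floor avoids log(0)

# Example: one second of a 1 kHz tone as a stand-in for real audio.
sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 1000.0 * t)
S = log_mel_spectrogram(x, sr=sr)  # shape: (frames, mel bands)
```

Features of this general shape (log-mel energies, or MFCCs derived from them by a DCT) served as input to most submitted systems, whether the back-end was a Gaussian mixture model, a support vector machine, or a deep neural network.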