City Classification from Multiple Real-World Sound Scenes
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Details
| Original language | English |
|---|---|
| Title of host publication | 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) |
| Publisher | IEEE |
| Pages | 11-15 |
| Number of pages | 5 |
| ISBN (Electronic) | 978-1-7281-1123-0 |
| ISBN (Print) | 978-1-7281-1124-7 |
| DOIs | |
| Publication status | Published - Oct 2019 |
| Publication type | A4 Article in a conference publication |
| Event | IEEE Workshop on Applications of Signal Processing to Audio and Acoustics |
Publication series
| Name | IEEE Workshop on Applications of Signal Processing to Audio and Acoustics |
|---|---|
| ISSN (Print) | 1931-1168 |
| ISSN (Electronic) | 1947-1629 |
Conference
| Conference | IEEE Workshop on Applications of Signal Processing to Audio and Acoustics |
|---|---|
Abstract
The majority of sound scene analysis work focuses on one of two clearly defined tasks: acoustic scene classification or sound event detection. Whilst this separation of tasks is useful for problem definition, it inherently ignores some subtleties of the real world, in particular how humans vary in how they describe a scene. Some will describe the weather and features within it, others will use a holistic descriptor like 'park', and others still will use unique identifiers such as cities or names. In this paper, we undertake the task of automatic city classification to ask whether we can recognize a city from a set of sound scenes. In this problem each city has recordings from multiple scenes. We test a series of methods for this novel task and show that a simple convolutional neural network (CNN) can achieve an accuracy of 50%. This is less than the acoustic scene classification task baseline in the DCASE 2018 ASC challenge on the same data. With a simple adaptation of the class labels, pairing city labels with grouped scenes, accuracy increases to 52%, closer to the simpler scene classification task. Finally, we formulate the problem in a multi-task learning framework and achieve an accuracy of 56%, outperforming the aforementioned approaches.
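The multi-task formulation in the abstract, a shared trunk with separate city and scene outputs whose losses are summed, can be sketched as below. This is a minimal illustration, not the paper's architecture: the filter counts, layer sizes, and label counts (6 cities, 10 scenes, loosely echoing the DCASE 2018 ASC setup) are assumptions, and the convolution is a naive single-layer forward pass for clarity.

```python
import numpy as np

# Assumed label counts and input size (toy log-mel spectrogram),
# not taken from the paper:
N_CITIES, N_SCENES = 6, 10
MELS, FRAMES = 40, 50

rng = np.random.default_rng(0)

def conv2d_valid(x, filters):
    """Naive 'valid' 2-D convolution of one spectrogram with a filter bank."""
    n_f, kh, kw = filters.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((n_f, oh, ow))
    for f in range(n_f):
        for i in range(oh):
            for j in range(ow):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * filters[f])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Shared convolutional trunk (a single layer here for brevity)
filters = rng.normal(scale=0.1, size=(8, 5, 5))
# Two task-specific linear heads on the pooled shared features
W_city = rng.normal(scale=0.1, size=(N_CITIES, 8))
W_scene = rng.normal(scale=0.1, size=(N_SCENES, 8))

def forward(spectrogram):
    feat = np.maximum(conv2d_valid(spectrogram, filters), 0.0)  # ReLU
    pooled = feat.mean(axis=(1, 2))                 # global average pooling
    return softmax(W_city @ pooled), softmax(W_scene @ pooled)

x = rng.normal(size=(MELS, FRAMES))                 # stand-in log-mel input
p_city, p_scene = forward(x)
# Multi-task loss: sum of the two cross-entropies (true labels 0 here)
loss = -np.log(p_city[0]) - np.log(p_scene[0])
```

Because both heads read the same pooled features, gradients from the scene task would shape the shared trunk alongside the city task, which is the mechanism the abstract credits for the improvement from 50% to 56%.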
Keywords
- Acoustic scene classification
- Location identification
- City classification
- Computational sound scene analysis