TUTCRIS - Tampere University of Technology


COALA: Co-Aligned Autoencoders for Learning Semantically Enriched Audio Representations

Research output: peer-reviewed

Details

Original language: English
Title: International Conference on Machine Learning (ICML)
Subtitle: Workshop on Self-supervision in Audio and Speech
Status: Published - 2020
Publication type (OKM classification): A4 Article in conference proceedings
Event: International Conference on Machine Learning - Virtual
Duration: 13 July 2020 - 18 July 2020
Conference number: 37
https://icml.cc

Conference

Conference: International Conference on Machine Learning
Abbreviated title: ICML
Period: 13/07/20 - 18/07/20
Web address: https://icml.cc

Abstract

Audio representation learning based on deep neural networks (DNNs) has emerged as an alternative to hand-crafted features. To achieve high performance, DNNs often need a large amount of annotated data, which can be difficult and costly to obtain. In this paper, we propose a method for learning audio representations by aligning the learned latent representations of audio and associated tags. Alignment is done by maximizing the agreement between the latent representations of audio and tags, using a contrastive loss. The result is an audio embedding model which reflects both the acoustic and the semantic characteristics of sounds. We evaluate the quality of our embedding model by measuring its performance as a feature extractor on three different tasks (namely, sound event recognition, and music genre and musical instrument classification), and we investigate what type of characteristics the model captures. Our results are promising, sometimes on par with the state of the art in the considered tasks, and the embeddings produced with our method are well correlated with some acoustic descriptors.
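As a rough illustration of the alignment step described in the abstract, the sketch below (not the authors' implementation) computes a contrastive loss that maximizes agreement between paired audio and tag embeddings in a batch; the embedding dimension, batch size, and temperature value are assumptions.

```python
# Minimal sketch of contrastive alignment between audio and tag embeddings.
# This is an illustration, not the authors' code; sizes and temperature are assumed.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(z_audio: torch.Tensor,
                               z_tags: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """Maximize agreement between paired audio and tag latent representations.

    z_audio, z_tags: (batch, dim) embeddings of the same items from the two encoders.
    Row i of each tensor is a positive pair; all other rows act as negatives.
    """
    z_audio = F.normalize(z_audio, dim=1)
    z_tags = F.normalize(z_tags, dim=1)
    # Cosine-similarity logits between every audio/tag pair in the batch.
    logits = z_audio @ z_tags.t() / temperature
    targets = torch.arange(z_audio.size(0), device=z_audio.device)
    # Symmetric cross-entropy over the audio-to-tag and tag-to-audio directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Example usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    audio_emb = torch.randn(16, 128)
    tag_emb = torch.randn(16, 128)
    print(contrastive_alignment_loss(audio_emb, tag_emb).item())
```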