TUTCRIS - Tampere University of Technology


Musical Instrument Synthesis and Morphing in Multidimensional Latent Space Using Variational, Convolutional Recurrent Autoencoders

Research output

Details

Original language: English
Title: Proceedings of the Audio Engineering Society 145th Convention
Publisher: AES Audio Engineering Society
Status: Published - 2018
Publication type (OKM): D3 Article in a professional conference proceedings
Event: Audio Engineering Society Convention - New York, United States
Duration: 17 October 2018 - 20 October 2018

Conference

Conference: Audio Engineering Society Convention
Country: United States
City: New York
Period: 17/10/18 - 20/10/18

Abstract

In this work we propose a deep-learning-based method, namely variational convolutional recurrent autoencoders (VCRAE), for musical instrument synthesis. The method uses the higher-level time-frequency representations extracted by the convolutional and recurrent layers to learn a Gaussian distribution in the training stage, which is later used to infer novel samples through interpolation between multiple instruments in the usage stage. The reconstruction performance of VCRAE is evaluated by proxy through an instrument classifier and yields significantly better accuracy than two baseline autoencoder methods. Synthesized samples for combinations of 15 different instruments are available on the companion website.
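The morphing step in the abstract, interpolating between instruments in the learned latent space, can be sketched as follows. This is a minimal illustration only: the 64-dimensional latent size, the random stand-in vectors, and the instrument names are assumptions for the sketch, not the paper's actual architecture (whose encoder would map spectrograms to these latent means).

```python
import numpy as np

def interpolate_latents(z_a, z_b, alpha):
    """Linearly interpolate between two latent vectors.

    alpha = 0 returns z_a, alpha = 1 returns z_b; intermediate values
    yield a blend that the decoder would turn into a morphed timbre.
    """
    return (1.0 - alpha) * z_a + alpha * z_b

# Hypothetical latent means for two encoded instrument samples
# (stand-ins for the encoder's output; 64 dims is an assumed size).
rng = np.random.default_rng(0)
z_piano = rng.standard_normal(64)
z_violin = rng.standard_normal(64)

# Morph trajectory: five evenly spaced blends from one instrument
# to the other; each point would be passed through the decoder.
morphs = [interpolate_latents(z_piano, z_violin, a)
          for a in np.linspace(0.0, 1.0, 5)]
```

Because the VAE training objective pushes the latent codes toward a shared Gaussian distribution, points along this trajectory tend to decode to plausible intermediate sounds rather than noise, which is what makes interpolation-based morphing viable.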