TUTCRIS - Tampere University of Technology

A Recurrent Encoder-Decoder Approach With Skip-Filtering Connections for Monaural Singing Voice Separation

Research output: peer-reviewed

Details

Original language: English
Title: 27th IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
Publisher: IEEE
ISBN (electronic): 978-1-5090-6341-3
DOI - permanent links
Status: Published - 2017
Ministry of Education publication type: A4 Article in conference proceedings
Event: IEEE International Workshop on Machine Learning for Signal Processing -
Duration: 1 January 1900 → …

Conference

Conference: IEEE International Workshop on Machine Learning for Signal Processing
Period: 1/01/00 → …

Abstract

The objective of deep learning methods based on encoder-decoder architectures for music source separation is to approximate either ideal time-frequency masks or spectral representations of the target music source(s). The spectral representations are then used to derive time-frequency masks. In this work, we introduce a method to learn time-frequency masks directly from an observed mixture magnitude spectrum. We employ recurrent neural networks and train them using prior knowledge only of the magnitude spectrum of the target source. To assess the performance of the proposed method, we focus on the task of singing voice separation. The results of an objective evaluation show that our proposed method is comparable to deep learning based methods that operate on more complicated signal representations. Compared to previous methods that approximate time-frequency masks, our method improves the signal-to-distortion ratio by an average of 3.8 dB.
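
The sketch below illustrates the core idea described in the abstract, not the authors' released implementation: a recurrent encoder-decoder predicts a time-frequency mask, and a skip-filtering connection multiplies that mask with the input mixture magnitude spectrum, so the network output is directly an estimate of the target source magnitude. All layer choices, sizes, and names (SkipFilteringRNN, n_freq_bins, hidden_size) are assumptions for illustration.

    # Minimal sketch of a skip-filtering recurrent encoder-decoder (assumed layout).
    import torch
    import torch.nn as nn

    class SkipFilteringRNN(nn.Module):
        def __init__(self, n_freq_bins=1025, hidden_size=512):
            super().__init__()
            # Recurrent encoder over the sequence of mixture spectrum frames.
            self.encoder = nn.GRU(n_freq_bins, hidden_size,
                                  batch_first=True, bidirectional=True)
            # Recurrent decoder producing one hidden state per frame.
            self.decoder = nn.GRU(2 * hidden_size, hidden_size, batch_first=True)
            # Feed-forward layer mapping decoder states to mask values in [0, 1].
            self.mask_layer = nn.Sequential(
                nn.Linear(hidden_size, n_freq_bins), nn.Sigmoid())

        def forward(self, mixture_mag):
            # mixture_mag: (batch, time_frames, n_freq_bins) magnitude spectrogram.
            enc_out, _ = self.encoder(mixture_mag)
            dec_out, _ = self.decoder(enc_out)
            mask = self.mask_layer(dec_out)
            # Skip-filtering connection: the predicted mask filters the input
            # mixture directly, yielding an estimate of the target source spectrum.
            return mask * mixture_mag

    # Training uses only the target source magnitude as supervision, e.g.:
    # model = SkipFilteringRNN()
    # estimate = model(mixture_mag)
    # loss = nn.functional.mse_loss(estimate, target_voice_mag)

In this reading of the abstract, no explicit mask target is needed: the loss compares the masked mixture to the target source magnitude spectrum, so the mask is learned implicitly.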

Publication forum level