Tampere University of Technology

TUTCRIS Research Portal

Automated Audio Captioning with Recurrent Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
Publisher: IEEE
Number of pages: 5
ISBN (Print): 978-1-5386-1632-1
DOIs
Publication status: Published - 2017
Publication type: A4 Article in a conference publication
Event: IEEE Workshop on Applications of Signal Processing to Audio and Acoustics

Publication series

ISSN (Electronic): 1947-1629

Abstract

We present the first approach to automated audio captioning. We employ an encoder-decoder scheme with an alignment model in between. The input to the encoder is a sequence of log mel-band energies calculated from an audio file, and the output is a sequence of words, i.e. a caption. The encoder is a multi-layered, bi-directional gated recurrent unit (GRU), and the decoder is a multi-layered GRU with a classification layer connected to the last GRU of the decoder. The classification layer and the alignment model are fully connected layers with weights shared across timesteps. The proposed method is evaluated on data drawn from a commercial sound effects library, ProSound Effects. The resulting captions were rated with metrics used in the machine translation and image captioning fields. The results show that the proposed method can predict words appearing in the original caption, but not always in the correct order.
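To make the described scheme concrete, the following PyTorch sketch wires together a bi-directional GRU encoder, a fully connected alignment model with weights shared across timesteps, and a GRU decoder topped by a classification layer. This is an illustration only, not the authors' implementation: the layer sizes, number of layers, mel-band count, vocabulary size, and the exact attention formulation are assumptions not given in the abstract.

```python
import torch
import torch.nn as nn

class CaptionSketch(nn.Module):
    """Encoder-decoder with an alignment (attention) model, as described
    in the abstract. All dimensions below are illustrative assumptions."""

    def __init__(self, n_mels=64, enc_hidden=256, dec_hidden=256,
                 vocab_size=1000, num_layers=3):
        super().__init__()
        # Encoder: multi-layered, bi-directional GRU over log mel-band energies.
        self.encoder = nn.GRU(n_mels, enc_hidden, num_layers=num_layers,
                              bidirectional=True, batch_first=True)
        # Alignment model: a fully connected layer applied identically at every
        # timestep; it scores each encoder frame against the decoder state.
        self.align = nn.Linear(2 * enc_hidden + dec_hidden, 1)
        # Decoder: multi-layered GRU fed with the attended encoder context.
        self.decoder = nn.GRU(2 * enc_hidden, dec_hidden, num_layers=num_layers,
                              batch_first=True)
        # Classification layer on the last decoder GRU, shared across timesteps.
        self.classify = nn.Linear(dec_hidden, vocab_size)

    def forward(self, mel, max_words=20):
        enc_out, _ = self.encoder(mel)                   # (B, T, 2*enc_hidden)
        B, T, _ = enc_out.shape
        state = enc_out.new_zeros(B, self.decoder.hidden_size)
        hidden = None
        step_logits = []
        for _ in range(max_words):
            # Attention weights over the T encoder frames.
            query = state.unsqueeze(1).expand(-1, T, -1)
            scores = self.align(torch.cat([enc_out, query], dim=-1))
            weights = torch.softmax(scores, dim=1)       # (B, T, 1)
            context = (weights * enc_out).sum(dim=1, keepdim=True)
            out, hidden = self.decoder(context, hidden)  # (B, 1, dec_hidden)
            state = out[:, -1, :]
            step_logits.append(self.classify(state))     # (B, vocab_size)
        return torch.stack(step_logits, dim=1)           # (B, max_words, vocab_size)
```

As a quick sanity check, `CaptionSketch()(torch.randn(2, 431, 64)).shape` gives `torch.Size([2, 20, 1000])`, i.e. per-step word logits for a batch of two clips, from which a caption would be read off token by token.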
