Tampere University of Technology

TUTCRIS Research Portal

Multi-modal dense video captioning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: Proceedings - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2020
Publisher: IEEE
Pages: 4117-4126
Number of pages: 10
ISBN (Electronic): 9781728193601
ISBN (Print): 978-1-7281-9361-8
DOIs
Publication status: Published - 2020
Publication type: A4 Article in a conference publication
Event: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops - Virtual, Online, United States
Duration: 14 Jun 2020 - 19 Jun 2020

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Conference

Conference: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Country: United States
City: Virtual, Online
Period: 14/06/20 - 19/06/20

Abstract

Dense video captioning is the task of localizing interesting events in an untrimmed video and producing a textual description (caption) for each localized event. Most previous works in dense video captioning are based solely on visual information and completely ignore the audio track. However, audio, and speech in particular, are vital cues for a human observer in understanding an environment. In this paper, we present a new dense video captioning approach that is able to utilize any number of modalities for event description. Specifically, we show how audio and speech modalities may improve a dense video captioning model. We apply an automatic speech recognition (ASR) system to obtain a temporally aligned textual description of the speech (similar to subtitles) and treat it as a separate input alongside the video frames and the corresponding audio track. We formulate the captioning task as a machine translation problem and utilize the recently proposed Transformer architecture to convert multi-modal input data into textual descriptions. We demonstrate the performance of our model on the ActivityNet Captions dataset. The ablation studies indicate a considerable contribution from the audio and speech components, suggesting that these modalities contain substantial information complementary to the video frames. Furthermore, we provide an in-depth analysis of the ActivityNet Captions results by leveraging the category tags obtained from the original YouTube videos. Code is publicly available: github.com/v-iashin/MDVC.
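To make the abstract's formulation concrete (captioning as sequence-to-sequence generation over multi-modal inputs with a Transformer), the sketch below projects visual, audio, and ASR-text features into a shared space, concatenates them into one encoder sequence, and decodes caption tokens. This is a minimal illustrative sketch only, not the authors' MDVC architecture; see github.com/v-iashin/MDVC for the actual implementation. All feature dimensions, module names, and the concatenation-based fusion are assumptions made for the example.

```python
# Illustrative sketch of multi-modal caption generation with a Transformer.
# Not the MDVC model; dimensions and fusion strategy are assumptions.
import torch
import torch.nn as nn


class MultiModalCaptioner(nn.Module):
    """Toy captioning model over visual, audio, and ASR-text features of one event proposal."""

    def __init__(self, d_model=256, vocab_size=10000,
                 d_visual=1024, d_audio=128, d_speech=300):
        super().__init__()
        # Project each modality into a shared model dimension.
        self.proj_visual = nn.Linear(d_visual, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        self.proj_speech = nn.Linear(d_speech, d_model)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # One encoder-decoder Transformer: the encoder sees the concatenated
        # multi-modal sequence, the decoder generates the caption tokens.
        # (Positional encodings are omitted for brevity.)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, visual, audio, speech, caption_tokens):
        # visual: (B, Tv, d_visual), audio: (B, Ta, d_audio),
        # speech: (B, Ts, d_speech), caption_tokens: (B, Tc) int64
        memory_seq = torch.cat([
            self.proj_visual(visual),
            self.proj_audio(audio),
            self.proj_speech(speech)], dim=1)           # (B, Tv+Ta+Ts, d_model)
        tgt = self.token_emb(caption_tokens)            # (B, Tc, d_model)
        # Causal mask so each caption position only attends to earlier tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            caption_tokens.size(1))
        dec = self.transformer(memory_seq, tgt, tgt_mask=tgt_mask)
        return self.out(dec)                            # (B, Tc, vocab_size)


# Tiny smoke test with random features for two event proposals.
model = MultiModalCaptioner()
logits = model(torch.randn(2, 16, 1024),           # frame features
               torch.randn(2, 20, 128),            # audio features
               torch.randn(2, 12, 300),            # embedded ASR tokens
               torch.randint(0, 10000, (2, 7)))    # previous caption tokens
print(logits.shape)  # torch.Size([2, 7, 10000])
```

In practice, training would minimize cross-entropy between these logits and the ground-truth captions of each localized event; the event localization itself is a separate proposal module not shown here.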

Publication forum classification

Field of science, Statistics Finland