TUTCRIS - Tampere University of Technology


Deep audio-visual saliency: Baseline model and data

Research output: peer-reviewed

Details

Original language: English
Title: Proceedings ETRA 2020 Short Papers - ACM Symposium on Eye Tracking Research and Applications, ETRA 2020
Editors: Stephen N. Spencer
Publisher: ACM
ISBN (electronic): 9781450371346
DOI - permanent link
Status: Published - 6 February 2020
OKM publication type: A4 Article in conference proceedings
Event: ACM Symposium on Eye Tracking Research and Applications - Stuttgart, Germany
Duration: 2 June 2020 - 5 June 2020

Conference

Conference: ACM Symposium on Eye Tracking Research and Applications
Country: Germany
City: Stuttgart
Period: 2/06/20 - 5/06/20

Abstract

This paper introduces a conceptually simple and effective Deep Audio-Visual Embedding for dynamic saliency prediction, dubbed "DAVE", in conjunction with our efforts towards building an Audio-Visual Eye-tracking corpus named "AVE". Despite the strong relation between auditory and visual cues in guiding gaze during perception, existing video saliency models consider only visual cues and neglect the auditory information that is ubiquitous in dynamic scenes. Here, we propose a baseline deep audio-visual saliency model for multi-modal saliency prediction in the wild; the proposed model is therefore intentionally designed to be simple. A video-only baseline is also built on the same architecture to assess the effectiveness of the audio-visual model on a fair basis. We demonstrate that the audio-visual saliency model outperforms the video-only saliency models. The data and code are available at https://hrtavakoli.github.io/AVE/ and https://github.com/hrtavakoli/DAVE.
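The abstract describes combining an audio embedding with a visual embedding to predict a saliency map. As a minimal sketch of that idea, the late-fusion scheme below projects each modality onto shared spatial logits and normalizes them into a saliency distribution. This is not the authors' DAVE architecture: all function names, weight matrices, and embedding dimensions here are illustrative assumptions.

```python
import numpy as np

def normalize_map(logits):
    """Turn a 2-D array of logits into a saliency map that sums to 1 (softmax)."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def audio_visual_saliency(video_feat, audio_feat, w_v, w_a):
    """Hypothetical late fusion: each modality's embedding is projected onto
    H*W spatial logits, the projections are summed, and the result is
    normalized into a saliency map."""
    fused = w_v @ video_feat + w_a @ audio_feat  # shape: (H*W,)
    side = int(np.sqrt(fused.size))              # assume a square map
    return normalize_map(fused.reshape(side, side))

# Toy usage with random embeddings and weights (dimensions are assumptions).
rng = np.random.default_rng(0)
video_feat = rng.standard_normal(128)            # assumed visual embedding
audio_feat = rng.standard_normal(64)             # assumed audio embedding
w_v = 0.01 * rng.standard_normal((256, 128))     # projects video -> 16x16 map
w_a = 0.01 * rng.standard_normal((256, 64))      # projects audio -> 16x16 map
sal = audio_visual_saliency(video_feat, audio_feat, w_v, w_a)
```

A video-only baseline on the same architecture, as the abstract describes, would simply drop the `w_a @ audio_feat` term while keeping everything else fixed, which is what makes the comparison fair.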