Tampere University of Technology

TUTCRIS Research Portal

Digging deeper into egocentric gaze prediction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
Publisher: IEEE
Pages: 273-282
Number of pages: 10
ISBN (Electronic): 9781728119755
DOIs
Publication status: Published - 4 Mar 2019
Publication type: A4 Article in a conference publication
Event: IEEE Winter Conference on Applications of Computer Vision - Waikoloa Village, United States
Duration: 7 Jan 2019 - 11 Jan 2019

Publication series

Name: IEEE Winter Conference on Applications of Computer Vision
ISSN (Print): 1550-5790

Conference

Conference: IEEE Winter Conference on Applications of Computer Vision
Country: United States
City: Waikoloa Village
Period: 7/01/19 - 11/01/19

Abstract

This paper digs deeper into factors that influence egocentric gaze. Instead of training deep models for this purpose in a blind manner, we propose to inspect factors that contribute to gaze guidance during daily tasks. Bottom-up saliency and optical flow are assessed against strong spatial prior baselines. Task-specific cues such as the vanishing point, the manipulation point, and hand regions are analyzed as representatives of top-down information. We also look into the contribution of these factors by investigating a simple recurrent neural model for egocentric gaze prediction. First, deep features are extracted for all input video frames. Then, a gated recurrent unit is employed to integrate information over time and to predict the next fixation. We also propose an integrated model that combines the recurrent model with several top-down and bottom-up cues. Extensive experiments over multiple datasets reveal that (1) spatial biases are strong in egocentric videos, (2) bottom-up saliency models perform poorly in predicting gaze and underperform spatial biases, (3) deep features perform better than traditional features, (4) unlike hand regions, the manipulation point is a strongly influential cue for gaze prediction, (5) combining the proposed recurrent model with bottom-up cues, vanishing points, and, in particular, the manipulation point yields the best gaze prediction accuracy over egocentric videos, (6) knowledge transfer works best for cases where the tasks or sequences are similar, and (7) task and activity recognition can benefit from gaze prediction. Our findings suggest that (1) there should be more emphasis on hand-object interaction and (2) the egocentric vision community should consider larger datasets including diverse stimuli and more subjects.
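The recurrent model described above (per-frame deep features integrated by a GRU, then a readout of the next fixation) can be sketched minimally in NumPy. All names, dimensions, and the sigmoid readout to normalized (x, y) coordinates are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUGazePredictor:
    """Toy GRU over per-frame feature vectors, predicting the next
    fixation as normalized (x, y) image coordinates in [0, 1]."""

    def __init__(self, feat_dim, hidden_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        scale = 1.0 / np.sqrt(feat_dim + hidden_dim)
        # Stacked weights for the update (z), reset (r), and candidate gates.
        self.W = rng.normal(0, scale, (3, hidden_dim, feat_dim))
        self.U = rng.normal(0, scale, (3, hidden_dim, hidden_dim))
        self.b = np.zeros((3, hidden_dim))
        # Linear readout from the hidden state to (x, y).
        self.Wo = rng.normal(0, scale, (2, hidden_dim))

    def step(self, x, h):
        """One GRU update given frame features x and hidden state h."""
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])   # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])   # reset gate
        h_cand = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        return (1 - z) * h + z * h_cand

    def predict(self, frame_feats):
        """frame_feats: array of shape (T, feat_dim), one row per frame.
        Integrates the sequence and returns the predicted next fixation."""
        h = np.zeros(self.W.shape[1])
        for x in frame_feats:
            h = self.step(x, h)
        return sigmoid(self.Wo @ h)  # (x, y), each squashed into [0, 1]

model = GRUGazePredictor(feat_dim=8, hidden_dim=16)
feats = np.random.default_rng(1).normal(size=(5, 8))  # 5 frames of features
fixation = model.predict(feats)
```

In the paper's integrated model, the recurrent prediction is further combined with top-down cues (manipulation point, vanishing point) and bottom-up saliency; the sketch covers only the recurrent core.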
