Tampere University of Technology

TUTCRIS Research Portal

The visual object tracking VOT2013 challenge results

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: Proceedings - 2013 IEEE International Conference on Computer Vision Workshops, ICCVW 2013
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 98-111
Number of pages: 14
ISBN (Print): 9781479930227
DOIs
Publication status: Published - 2013
Publication type: A4 Article in a conference publication
Event: 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013 - Sydney, NSW, Australia
Duration: 1 Dec 2013 - 8 Dec 2013

Conference

Conference: 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013
Country: Australia
City: Sydney, NSW
Period: 1/12/13 - 8/12/13

Abstract

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame by visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
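The per-frame attribute labelling described in the abstract is what enables attribute-specific comparison of trackers. As a rough illustration of that idea, the sketch below aggregates a tracker's per-frame overlap scores by attribute; it is not the official VOT toolkit, and the CSV layout, attribute column names and the overlap list are assumptions made purely for demonstration.

```python
# Illustrative sketch (not the official VOT evaluation toolkit): compute a
# tracker's mean region overlap restricted to frames tagged with each of the
# per-frame visual attributes mentioned in the abstract. The file format and
# attribute column names are hypothetical.
import csv
from collections import defaultdict

ATTRIBUTES = ["occlusion", "illumination_change", "motion_change",
              "size_change", "camera_motion"]

def mean_overlap_per_attribute(annotation_csv, per_frame_overlap):
    """annotation_csv: one row per frame with 0/1 flags for each attribute.
    per_frame_overlap: list of region-overlap scores, one entry per frame."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(annotation_csv, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            for attr in ATTRIBUTES:
                if int(row[attr]):  # frame is tagged with this attribute
                    totals[attr] += per_frame_overlap[i]
                    counts[attr] += 1
    return {attr: totals[attr] / counts[attr]
            for attr in ATTRIBUTES if counts[attr]}

# Example usage (hypothetical file name and overlap values):
# overlaps = [0.72, 0.65, 0.58, ...]
# print(mean_overlap_per_attribute("sequence_attributes.csv", overlaps))
```

Breaking results down this way is what the abstract means by a "more systematic comparison": a tracker's weaknesses under, say, camera motion or occlusion become visible instead of being averaged away over whole sequences.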

Keywords

  • Visual object tracking challenge, VOT2013