Video alignment dataset

Anestis Papazoglou, Luca Del Pero, Vittorio Ferrari
University of Edinburgh (CALVIN)

Overview


This dataset contains 22 video sequences of cars in racing footage collected from YouTube, each 5-30 seconds long. These videos are challenging: they show different cars in different races, with fast object motion, fast camera motion and cluttered backgrounds. We also provide viewpoint annotations for each frame, produced as follows. We first define a set of 16 canonical viewpoints spaced 22.5 degrees apart (starting from full frontal). We manually annotate every frame showing one of these canonical viewpoints. We then automatically annotate the remaining frames by linearly interpolating between the manual annotations.
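The interpolation step above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the choice to interpolate along the shortest angular path (with wrap-around at 360 degrees) are our assumptions.

```python
# Hypothetical sketch of interpolating per-frame viewpoint angles between
# manually annotated keyframes. Not the dataset's actual annotation code.

def interp_viewpoints(keyframes, num_frames):
    """Linearly interpolate viewpoint angles (in degrees) between
    manually annotated keyframes, taking the shortest angular path.

    keyframes: dict {frame_index: angle_in_degrees} of manual annotations.
    num_frames: total number of frames in the video.
    Returns a list with one angle per frame (None outside the keyframe range).
    """
    idxs = sorted(keyframes)
    angles = [None] * num_frames
    for a, b in zip(idxs, idxs[1:]):
        start, end = keyframes[a], keyframes[b]
        # Wrap the difference into [-180, 180) so we rotate the short way,
        # e.g. from 350 to 10 degrees we pass through 0, not through 180.
        delta = (end - start + 180.0) % 360.0 - 180.0
        for f in range(a, b + 1):
            t = (f - a) / (b - a)
            angles[f] = (start + t * delta) % 360.0
    return angles
```

For example, with keyframes at frame 0 (0 degrees) and frame 4 (90 degrees), the intermediate frames get 22.5, 45 and 67.5 degrees.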

We also provide foreground segmentation masks computed using the segmentation algorithm described in our paper. For more details, see the README file.

Filename              Description                   Release Date       Size
README.txt            Description of contents       11 September 2016  4 KB
car-racing.tar.gz     Car videos and annotations    11 September 2016  881 MB
segmentations.tar.gz  Segmentations for the videos  11 September 2016  4.2 MB

Citations

@INPROCEEDINGS{papazoglou16accv,
author = {Papazoglou, A. and Del Pero, L. and Ferrari, V.},
title = {Video temporal alignment for object viewpoint},
booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
year = {2016}
}

Important Notice

These videos were downloaded from the internet and may be subject to copyright. We do not own the copyright of the videos and provide them for non-commercial research purposes only.