|
--- |
|
license: cc-by-nc-4.0 |
|
annotations_creators: |
|
- crowdsourced |
|
task_categories: |
|
- object-detection |
|
- other |
|
language: |
|
- en |
|
tags: |
|
- video |
|
- multi-object tracking |
|
pretty_name: SportsMOT |
|
source_datasets: |
|
- MultiSports |
|
extra_gated_heading: "Acknowledge license to accept the repository" |
|
extra_gated_prompt: "This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License" |
|
extra_gated_fields:
  Institute: text
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this dataset for non-commercial use ONLY: checkbox
|
--- |
|
# Dataset Card for SportsMOT |
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
Multi-object tracking (MOT) is a fundamental task in computer vision that aims to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences. We propose a large-scale multi-object tracking dataset named SportsMOT, consisting of 240 video clips from 3 categories (basketball, football, and volleyball). The objective is to track only the players on the playground (i.e., excluding spectators, referees, and coaches) in various sports scenes.
|
|
|
|
|
### Dataset Sources
|
|
|
<!-- Provide the basic links for the dataset. --> |
|
|
|
- **Repository:** https://github.com/MCG-NJU/SportsMOT |
|
- **Paper:** https://arxiv.org/abs/2304.05170 |
|
- **Competition:** https://codalab.lisn.upsaclay.fr/competitions/12424
|
- **Point of Contact:** yichunyang@smail.nju.edu.cn
|
|
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
Data in SportsMOT is organized in the MOT Challenge 17 format.
|
|
|
```
splits_txt (video-split mapping)
  - basketball.txt
  - volleyball.txt
  - football.txt
  - train.txt
  - val.txt
  - test.txt
scripts
  - mot_to_coco.py
  - sportsmot_to_trackeval.py
dataset (in MOT Challenge format)
  - train
    - VIDEO_NAME1
      - gt
      - img1
        - 000001.jpg
        - 000002.jpg
      - seqinfo.ini
  - val (same hierarchy as train)
  - test
    - VIDEO_NAME1
      - img1
        - 000001.jpg
        - 000002.jpg
      - seqinfo.ini
```
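Since the data follows the MOT Challenge convention, per-sequence metadata lives in `seqinfo.ini` and annotations in the `gt` directory (conventionally a `gt.txt` file with one row per box: frame, track ID, x, y, width, height, confidence, class, visibility). The sketch below shows one way to read both; the sequence name and box values are invented for illustration, and the column layout is the standard MOT Challenge one rather than anything SportsMOT-specific.

```python
import configparser
import csv
import io

# Hypothetical contents of a seqinfo.ini file (MOT Challenge convention).
SEQINFO = """\
[Sequence]
name=v_basketball_clip_000001
imDir=img1
frameRate=25
seqLength=600
imWidth=1280
imHeight=720
imExt=.jpg
"""

# Two hypothetical rows of gt/gt.txt:
# frame, track_id, x, y, width, height, conf, class, visibility
GT = """\
1,1,912,484,97,109,1,1,1
1,2,338,470,87,100,1,1,1
"""


def parse_seqinfo(text):
    """Read sequence metadata from a seqinfo.ini string."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    seq = cfg["Sequence"]
    return {
        "name": seq["name"],
        "frame_rate": int(seq["frameRate"]),
        "seq_length": int(seq["seqLength"]),
        "im_size": (int(seq["imWidth"]), int(seq["imHeight"])),
    }


def parse_gt(text):
    """Parse gt.txt rows into (frame, track_id, bbox) tuples."""
    rows = []
    for row in csv.reader(io.StringIO(text)):
        frame, tid = int(row[0]), int(row[1])
        bbox = tuple(float(v) for v in row[2:6])  # x, y, w, h
        rows.append((frame, tid, bbox))
    return rows
```

For real data, replace the inline strings with the contents of `dataset/train/VIDEO_NAME1/seqinfo.ini` and the corresponding ground-truth file.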
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
Multi-object tracking (MOT) is a fundamental task in computer vision that aims to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences.
|
|
|
Prevailing human-tracking MOT datasets mainly focus on pedestrians in crowded street scenes (e.g., MOT17/20) or dancers in static scenes (DanceTrack). Despite the increasing demand for sports analysis, there is a lack of multi-object tracking datasets covering diverse sports scenes, where the background is complex, players move rapidly, and the camera moves fast.
|
|
|
|
|
### Source Data |
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
> We select three worldwide famous sports, football, basketball, and volleyball, and collect videos of high-quality professional games including NCAA, Premier League, and Olympics from MultiSports, which is a large dataset in sports area focusing on spatio-temporal action localization. |
|
|
|
#### Annotation process |
|
|
|
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> |
|
|
|
We annotate the collected videos according to the following guidelines. |
|
|
|
1. The athlete's entire limbs and torso, excluding any other objects (such as balls) touching the athlete's body, must be annotated.
|
|
|
2. Annotators are asked to estimate the bounding box of an occluded athlete as long as any part of the body is visible. However, if half of an athlete's torso is outside the view, annotators should skip that athlete.
|
|
|
3. We ask the annotators to confirm that each player keeps a unique ID throughout the whole clip.
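Guideline 3 implies a simple invariant in the ground-truth files: within any single frame, no track ID should appear twice. A minimal sanity check, assuming the standard MOT Challenge `gt.txt` column layout (frame, id, x, y, w, h, ...), could look like:

```python
from collections import defaultdict


def check_unique_ids(gt_rows):
    """Find (frame, track_id) pairs where an ID appears twice in one frame.

    gt_rows: iterable of gt.txt lines, e.g. "1,1,912,484,97,109,1,1,1".
    Returns an empty list if every ID is unique within each frame.
    """
    seen = defaultdict(set)  # frame -> set of track IDs seen so far
    violations = []
    for line in gt_rows:
        fields = line.strip().split(",")
        frame, tid = int(fields[0]), int(fields[1])
        if tid in seen[frame]:
            violations.append((frame, tid))
        seen[frame].add(tid)
    return violations
```

Note this only catches duplicate IDs within a frame; verifying that an ID refers to the same player across frames still requires visual inspection.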
|
|
|
### Dataset Curators |
|
|
|
Authors of [SportsMOT: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes](https://arxiv.org/pdf/2304.05170) |
|
|
|
- Yutao Cui |
|
|
|
- Chenkai Zeng |
|
|
|
- Xiaoyu Zhao |
|
|
|
- Yichun Yang |
|
|
|
- Gangshan Wu |
|
|
|
- Limin Wang |
|
|
|
## Citation Information |
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
If you find this dataset useful, please cite it as follows:
|
|
|
``` |
|
@inproceedings{cui2023sportsmot, |
|
title={{SportsMOT}: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes},
|
author={Cui, Yutao and Zeng, Chenkai and Zhao, Xiaoyu and Yang, Yichun and Wu, Gangshan and Wang, Limin}, |
|
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, |
|
pages={9921--9931}, |
|
year={2023} |
|
} |
|
``` |
|
|