---
annotations_creators: []
language: en
license: cc-by-4.0
task_categories: []
task_ids: []
pretty_name: DanceTrack
tags:
  - fiftyone
  - video
chunk_size: 1
dataset_summary: >



  ![image/png](dataset_preview.gif)



  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 33
  samples.


  ## Installation


  If you haven't already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  import fiftyone.utils.huggingface as fouh


  # Load the dataset

  # Note: other available arguments include 'split', 'max_samples', etc

  dataset = fouh.load_from_hub("voxel51/DanceTrack")


  # Launch the App

  session = fo.launch_app(dataset)

  ```

---

# Dataset Card for DanceTrack

DanceTrack is a multi-human tracking dataset with two emphasized properties: (1) uniform appearance: humans have highly similar, nearly indistinguishable appearances; (2) diverse motion: humans follow complicated motion patterns and their relative positions exchange frequently. We expect the combination of uniform appearance and complicated motion to make DanceTrack a platform that encourages more comprehensive and intelligent multi-object tracking algorithms.

![image/png](dataset_preview.gif)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 33 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'split', 'max_samples', etc.
dataset = fouh.load_from_hub("dgural/DanceTrack")

# Launch the App
session = fo.launch_app(dataset)
```
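
Once the dataset is loaded, the videos and their frame-level labels can be inspected programmatically. The snippet below is a minimal sketch: it assumes the per-frame labels live under `sample.frames`, and the exact label field names should be checked with `get_frame_field_schema()` rather than taken from this example.

```python
import fiftyone.utils.huggingface as fouh

# Load a small subset for quick inspection
dataset = fouh.load_from_hub("dgural/DanceTrack", max_samples=3)

# List the frame-level label fields (field names are dataset-specific)
print(dataset.get_frame_field_schema())

# Inspect the first video sample and its frames
sample = dataset.first()
print(sample.filepath)
print(len(sample.frames), "frames in this video")
```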

## Dataset Details

### Dataset Description

From *DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion*:

> A typical pipeline for multi-object tracking (MOT) is to use a detector for object localization, and following re-identification (re-ID) for object association. This pipeline is partially motivated by recent progress in both object detection and re-ID, and partially motivated by biases in existing tracking datasets, where most objects tend to have distinguishing appearance and re-ID models are sufficient for establishing associations. In response to such bias, we would like to re-emphasize that methods for multi-object tracking should also work when object appearance is not sufficiently discriminative. To this end, we propose a large-scale dataset for multi-human tracking, where humans have similar appearance, diverse motion and extreme articulation. As the dataset contains mostly group dancing videos, we name it “DanceTrack”. We expect DanceTrack to provide a better platform to develop more MOT algorithms that rely less on visual discrimination and depend more on motion analysis. We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack when compared against existing benchmarks.

- **Language(s) (NLP):** en
- **License:** cc-by-4.0
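
To make the "motion analysis" point in the abstract concrete, the sketch below shows a generic, motion-only association step: matching detections across consecutive frames purely by bounding-box IoU, with no appearance cues. This is an illustrative toy example with placeholder function names, not the trackers benchmarked in the DanceTrack paper.

```python
# Toy, motion-only association: greedily match detections in consecutive
# frames by bounding-box IoU, ignoring appearance entirely. Illustrative
# only; not the method used in the DanceTrack paper.

def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_associate(prev_boxes, curr_boxes, iou_thresh=0.3):
    """Greedily assign each previous track to its best-overlapping detection."""
    matches, used = [], set()
    for i, pb in enumerate(prev_boxes):
        best_j, best_iou = -1, iou_thresh
        for j, cb in enumerate(curr_boxes):
            if j in used:
                continue
            overlap = iou(pb, cb)
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j))  # (track index, detection index)
    return matches
```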

### Dataset Sources

## Uses

This dataset is well suited for tracking use cases in computer vision and is a common benchmark dataset for multi-object tracking.
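
For example, tracking annotations in FiftyOne are typically stored as frame-level `Detections` whose `index` attribute carries the track ID. The following is a minimal sketch that assumes the frame-level field is named `detections`; check `dataset.get_frame_field_schema()` and adjust the field name if it differs.

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
from fiftyone import ViewField as F

dataset = fouh.load_from_hub("dgural/DanceTrack", max_samples=3)

# Count distinct track IDs across all per-frame detections
# (the path assumes a `Detections` frame field named "detections")
track_ids = dataset.distinct("frames.detections.detections.index")
print(len(track_ids), "distinct track IDs")

# Keep only frames that contain at least one detection, then browse them
view = dataset.match_frames(F("detections.detections").length() > 0)
session = fo.launch_app(view)
```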

## Citation

```bibtex
@inproceedings{sun2022dance,
  title={DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion},
  author={Sun, Peize and Cao, Jinkun and Jiang, Yi and Yuan, Zehuan and Bai, Song and Kitani, Kris and Luo, Ping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```