|
# TAO-Amodal Dataset |
|
|
|
|
Official source for downloading the TAO-Amodal dataset.
|
|
|
[**Project Page**](https://tao-amodal.github.io/) | [**Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**Paper**](https://arxiv.org/abs/2312.12433) | [**Citation**](#citation)
|
|
|
<div align="center"> |
|
<a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a> |
|
</div> |
|
|
|
<br>
|
|
|
Contact: [Cheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)
|
|
|
## Dataset Description |
|
Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects. TAO-Amodal also includes modal segmentation masks (visualized as the color overlays in the preview above). Spanning 880 categories, the dataset is designed to assess the occlusion-reasoning capabilities of current trackers through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).
|
|
|
### Dataset Download |
|
1. Download all the annotations (a `huggingface_hub` alternative is sketched after these steps).
|
```bash |
|
git lfs install |
|
git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal |
|
``` |
|
|
|
2. Download all the video frames: |
|
|
|
You can either download the frames following the instructions [here](https://motchallenge.net/tao_download.php) (recommended), or modify our provided [script](./download_TAO.sh) and run:
|
```bash |
|
bash download_TAO.sh |
|
``` |
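
If the `git lfs` clone in step 1 is slow or unavailable, the annotation repository can also be fetched with `huggingface_hub`. This is only a minimal sketch, not an official instruction; the `local_dir` below is an assumption.

```python
# Sketch: download the TAO-Amodal annotation repository via huggingface_hub
# instead of git lfs. Adjust local_dir to wherever you keep datasets.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="chengyenhsieh/TAO-Amodal",
    repo_type="dataset",
    local_dir="./TAO-Amodal",  # assumption: destination folder
)
```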
|
|
|
|
|
|
|
|
|
## Dataset Structure
|
|
|
The dataset should be structured like this: |
|
```bash
TAO-Amodal
├── frames
│   └── train
│       ├── ArgoVerse
│       ├── BDD
│       ├── Charades
│       ├── HACS
│       ├── LaSOT
│       └── YFCC100M
├── amodal_annotations
│   ├── train/validation/test.json
│   ├── train_lvis_v1.json
│   └── validation_lvis_v1.json
├── example_output
│   └── prediction.json
├── BURST_annotations
│   ├── train
│   │   └── train_visibility.json
│   ...
```
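
After downloading, a quick check such as the following can confirm that the expected top-level folders are in place. This is just a sketch; the dataset root `./TAO-Amodal` is an assumption.

```python
from pathlib import Path

# Verify the top-level layout documented above (root path is an assumption).
root = Path("./TAO-Amodal")
for name in ["frames", "amodal_annotations", "example_output", "BURST_annotations"]:
    print(f"{name}: {'ok' if (root / name).is_dir() else 'missing'}")
```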
|
|
|
## File Descriptions
|
|
|
| File Name | Description |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| train/validation/test.json | Main annotation files. We use these annotations for visualization. Categories include those in [LVIS](https://www.lvisdataset.org/) v0.5 plus free-form categories. |
| train_lvis_v1.json | Used to train our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander), treating each image frame as an independent sequence. Categories are aligned with those in LVIS v1.0. |
| validation_lvis_v1.json | Used to evaluate our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Categories are aligned with those in LVIS v1.0. |
| prediction.json | Example output JSON from the amodal-expander. Tracker predictions should follow this structure to be evaluated with our [evaluation toolkit](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#bar_chart-evaluation). |
| BURST_annotations/XXX.json | Modal mask annotations from the [BURST dataset](https://github.com/Ali2500/BURST-benchmark) with our heuristic visibility attributes. Provided for convenient visualization. |
|
|
|
### Annotation and Prediction Format |
|
|
|
Our annotations follow the structure of [TAO](https://github.com/TAO-Dataset/annotations), with some modifications.
|
Annotations: |
|
```bash |
|
|
|
Annotation file format: |
|
{ |
|
"info" : info, |
|
"images" : [image], |
|
"videos": [video], |
|
"tracks": [track], |
|
"annotations" : [annotation], |
|
"categories": [category], |
|
"licenses" : [license], |
|
} |
|
annotation: { |
|
"id": int, |
|
"image_id": int, |
|
"track_id": int, |
|
"bbox": [x,y,width,height], |
|
"area": float, |
|
|
|
# Redundant fields for compatibility with COCO scripts
|
"category_id": int, |
|
"video_id": int, |
|
|
|
# Other important attributes for evaluation on TAO-Amodal |
|
"amodal_bbox": [x,y,width,height], |
|
"amodal_is_uncertain": bool, |
|
"visibility": float, (0.~1.0) |
|
} |
|
image, info, video, track, category, licenses: same as in TAO
|
``` |
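
For orientation, the sketch below reads these fields with plain `json`. It is not part of the official toolkit; the annotation path and the 0.25 visibility threshold are illustrative assumptions.

```python
import json
from collections import defaultdict

# Load one of the annotation files (path is an assumption).
with open("amodal_annotations/validation.json") as f:
    data = json.load(f)

# Group annotations by track to follow an object's amodal boxes over time.
anns_by_track = defaultdict(list)
for ann in data["annotations"]:
    anns_by_track[ann["track_id"]].append(ann)

# Example query: heavily occluded objects whose amodal box is not marked uncertain.
occluded = [
    ann["amodal_bbox"]
    for ann in data["annotations"]
    if ann["visibility"] < 0.25 and not ann["amodal_is_uncertain"]
]
print(f"{len(occluded)} heavily occluded amodal boxes out of {len(data['annotations'])} annotations")
```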
|
|
|
Predictions should be structured as: |
|
|
|
```bash |
|
[{ |
|
"image_id" : int, |
|
"category_id" : int, |
|
"bbox" : [x,y,width,height], |
|
"score" : float, |
|
"track_id": int, |
|
"video_id": int |
|
}] |
|
``` |
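
As a concrete illustration, the sketch below writes a prediction file in this format; `tracker_results` and its values are hypothetical.

```python
import json

# Hypothetical per-detection tracker output:
# (image_id, category_id, amodal box [x, y, w, h], score, track_id, video_id)
tracker_results = [
    (12345, 3, [10.0, 20.0, 150.0, 80.0], 0.91, 7, 42),
]

predictions = [
    {
        "image_id": image_id,
        "category_id": category_id,
        "bbox": bbox,  # [x, y, width, height]
        "score": score,
        "track_id": track_id,
        "video_id": video_id,
    }
    for image_id, category_id, bbox, score, track_id, video_id in tracker_results
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)
```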
|
Refer to the [TAO dataset](https://github.com/TAO-Dataset/tao/blob/master/docs/evaluation.md) evaluation instructions for further details.
|
|
|
|
|
## Example Sequences
|
Check [here](https://tao-amodal.github.io/#TAO-Amodal) for more examples and [here](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#artist-visualization) for visualization code. |
|
[<img src="https://tao-amodal.github.io/static/images/car_and_bus.png" width="50%">](https://tao-amodal.github.io/dataset.html "tao-amodal") |
|
|
|
|
|
|
|
## Citation |
|
|
|
|
``` |
|
@misc{hsieh2023tracking, |
|
title={Tracking Any Object Amodally}, |
|
author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan}, |
|
year={2023}, |
|
eprint={2312.12433}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV} |
|
} |
|
``` |
|
|
|
--- |
|
task_categories: |
|
- object-detection |
|
- multi-object-tracking |
|
|
|
license: mit |
|
--- |