---
task_categories:
- object-detection
- multi-object-tracking
license: mit
---
# TAO-Amodal Dataset
Official source for downloading the TAO-Amodal dataset.
[**📙 Project Page**](https://tao-amodal.github.io/) | [**💻 Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**📎 Paper Link**](https://arxiv.org/abs/2312.12433) | [**✏️ Citations**](#citations)
<div align="center">
<a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a>
</div>
<br/>
Contact: [🙋🏻‍♂️ Cheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)
## Dataset Description
Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects.
TAO-Amodal also includes the modal segmentation masks inherited from the BURST dataset (visualized in the color overlays above).
The dataset spans 880 categories and is designed to assess the occlusion-reasoning capabilities of current trackers
through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).
### Dataset Download
1. Download all the annotations.
```bash
git lfs install
git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal
```
2. Download all the video frames:
You can either download the frames by following the instructions [here](https://motchallenge.net/tao_download.php) (recommended), or modify our provided [script](./download_TAO.sh) and run:
```bash
bash download_TAO.sh
```
## 📚 Dataset Structure
The dataset should be structured like this:
```bash
TAO-Amodal
├── frames
│   └── train
│       ├── ArgoVerse
│       ├── BDD
│       ├── Charades
│       ├── HACS
│       ├── LaSOT
│       └── YFCC100M
├── amodal_annotations
│   ├── train/validation/test.json
│   ├── train_lvis_v1.json
│   └── validation_lvis_v1.json
├── example_output
│   └── prediction.json
├── BURST_annotations
│   ├── train
│   └── train_visibility.json
│       ...
```
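To sanity-check a download, a script along these lines can verify the expected layout (a minimal sketch, assuming the root folder above and JPEG-encoded frames; adjust paths to your setup):

```python
from pathlib import Path

# Root of the downloaded dataset; adjust to your local path.
root = Path("TAO-Amodal")

# Top-level entries expected from the layout above.
for rel in ["frames/train", "amodal_annotations", "BURST_annotations"]:
    status = "ok" if (root / rel).is_dir() else "MISSING"
    print(f"{status:8} {rel}")

# Count frames per source dataset (assumes frames are stored as JPEGs).
train_dir = root / "frames" / "train"
if train_dir.is_dir():
    for source in sorted(train_dir.iterdir()):
        n_frames = sum(1 for _ in source.rglob("*.jpg"))
        print(f"{source.name}: {n_frames} frames")
```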
## 📚 File Descriptions
| File Name | Description |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| train/validation/test.json | The official annotation files; we also use these for visualization. Categories include those in [lvis](https://www.lvisdataset.org/) v0.5 plus additional free-form categories. |
| train_lvis_v1.json | We use this file to train our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander), treating each image frame as an independent sequence. Categories are aligned with those in lvis v1.0. |
| validation_lvis_v1.json | We use this file to evaluate our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Categories are aligned with those in lvis v1.0. |
| prediction.json | Example output json from amodal-expander. Tracker predictions should be structured like this file to be evaluated with our [evaluation toolkit](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#bar_chart-evaluation). |
| BURST_annotations/XXX.json | Modal mask annotations from the [BURST dataset](https://github.com/Ali2500/BURST-benchmark), augmented with our heuristic visibility attributes. We provide these files for convenient visualization. |
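Since the annotation files are plain JSON following the format described below, they can be inspected with the standard library (a minimal sketch; the path assumes the layout above):

```python
import json

# Load the training annotations (path follows the dataset layout above).
with open("TAO-Amodal/amodal_annotations/train.json") as f:
    data = json.load(f)

# Top-level keys follow the annotation format described below.
for key in ("videos", "images", "tracks", "annotations", "categories"):
    print(f"{key:12} {len(data[key])}")
```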
### Annotation and Prediction Format
Our annotations are structured similarly to [TAO](https://github.com/TAO-Dataset/annotations), with some modifications.
Annotations:
```
Annotation file format:
{
    "info": info,
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category],
    "licenses": [license],
}

annotation: {
    "id": int,
    "image_id": int,
    "track_id": int,
    "bbox": [x, y, width, height],
    "area": float,
    # Redundant field for compatibility with COCO scripts
    "category_id": int,
    "video_id": int,
    # Other important attributes for evaluation on TAO-Amodal
    "amodal_bbox": [x, y, width, height],
    "amodal_is_uncertain": bool,
    "visibility": float,  # in the range [0.0, 1.0]
}

image, info, video, track, category, license: same as in TAO
```
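As an illustration of these fields, a sketch like the following flags annotations that are heavily occluded or extend beyond the image border (field names follow the format above; the assumption that `image` entries carry `width`/`height` as in COCO, and the 0.5 visibility cutoff, are ours):

```python
import json

with open("TAO-Amodal/amodal_annotations/validation.json") as f:
    data = json.load(f)

# Index images by id; width/height are assumed present, as in TAO/COCO.
images = {img["id"]: img for img in data["images"]}

out_of_frame, heavily_occluded = [], []
for ann in data["annotations"]:
    x, y, w, h = ann["amodal_bbox"]
    img = images[ann["image_id"]]
    # An amodal box crossing the image border means the object is
    # partly (or fully) out of frame.
    if x < 0 or y < 0 or x + w > img["width"] or y + h > img["height"]:
        out_of_frame.append(ann["id"])
    # Arbitrary cutoff: treat <50% visibility as heavily occluded.
    if ann["visibility"] < 0.5:
        heavily_occluded.append(ann["id"])

print(f"{len(out_of_frame)} out-of-frame, {len(heavily_occluded)} heavily occluded")
```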
Predictions should be structured as:
```
[{
    "image_id": int,
    "category_id": int,
    "bbox": [x, y, width, height],
    "score": float,
    "track_id": int,
    "video_id": int
}]
```
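Concretely, serializing tracker outputs in this format only requires `json.dump` (a minimal sketch; the ids and box values below are illustrative placeholders):

```python
import json

# One dict per predicted box per frame, matching the schema above.
# All ids and values here are made up for illustration.
predictions = [
    {
        "image_id": 1,
        "category_id": 3,
        "bbox": [10.0, 20.0, 50.0, 80.0],  # [x, y, width, height]
        "score": 0.92,
        "track_id": 1,
        "video_id": 1,
    },
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)
```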
Refer to the [TAO dataset evaluation instructions](https://github.com/TAO-Dataset/tao/blob/master/docs/evaluation.md) for further details.
## 📺 Example Sequences
Check [here](https://tao-amodal.github.io/#TAO-Amodal) for more examples and [here](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#artist-visualization) for visualization code.
[<img src="https://tao-amodal.github.io/static/images/car_and_bus.png" width="50%">](https://tao-amodal.github.io/dataset.html "tao-amodal")
## Citations
```bibtex
@misc{hsieh2023tracking,
title={Tracking Any Object Amodally},
author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
year={2023},
eprint={2312.12433},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```