pengzhiliang committed on
Commit 25a7819
Parent: 67c8e38

update readme

Files changed (1)
  1. README.md +23 -3
README.md CHANGED
@@ -27,15 +27,14 @@ task_ids:
  ### Dataset Description
  - **Repository:** [Microsoft unilm](https://github.com/microsoft/unilm/tree/master/kosmos-2)
  - **Paper:** [Kosmos-2](https://arxiv.org/abs/2306.14824)
- - **Point of Contact:** [Unilm team](fuwei@microsoft.com)
 
  ### Dataset Summary
  We introduce GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created based on image-text pairs from [COYO-700M](https://github.com/kakaobrain/coyo-dataset) and LAION-2B. We construct a pipeline to extract and link text spans (i.e., noun phrases, and referring expressions) in the caption to their corresponding image regions. More details can be found in the [paper](https://arxiv.org/abs/2306.14824).
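The full construction pipeline is described in the paper; purely as an illustration of its first stage (pulling candidate text spans out of a caption), a noun-chunk extraction sketch might look like the following. The spaCy model name and the example caption are placeholders, not part of the dataset tooling.

```python
# Illustrative sketch only: extract noun chunks (candidate text spans) from a caption.
# Assumes spaCy plus its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
caption = "a dog sitting on a wooden bench in the park"  # placeholder caption
doc = nlp(caption)

# Each chunk comes with character offsets into the caption; linking these spans
# to image regions is the grounding stage described in the Kosmos-2 paper.
for chunk in doc.noun_chunks:
    print(chunk.text, chunk.start_char, chunk.end_char)
```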
 
  ### Supported Tasks
- During the construction, we exclude the image-caption pair if no bounding boxes are retained. This procedure results in a high-quality image-caption subset of COYO-700M. We will validate it in the future.
+ During construction, we excluded image-caption pairs for which no bounding boxes were retained. This procedure yields a high-quality image-caption subset of COYO-700M, which we will validate in the future.
 
- Furthermore, this dataset contains text-span-bounding-box pairs. So it can be employed in many location-aware mono/multimodal tasks, such as phrase grounding, referring expression comprehension, referring expression generation and open-world object detection.
+ Furthermore, this dataset contains text-span-bounding-box pairs. Thus, it can be used in many location-aware mono/multimodal tasks, such as phrase grounding, referring expression comprehension, referring expression generation, and open-world object detection.
 
  ### Data Instance
  One instance is
@@ -66,6 +65,27 @@ One instance is
  - `ref_exps`: The corresponding referring expressions. If a noun chunk has no expansion, we just copy it.
 
  ### Download image
+ We recommend using the [img2dataset](https://github.com/rom1504/img2dataset) tool to download the images.
+ 1. Download the metadata by cloning this repository:
+ ```bash
+ git lfs install
+ git clone https://huggingface.co/datasets/zzliang/GRIT
+ ```
+ 2. Install [img2dataset](https://github.com/rom1504/img2dataset):
+ ```bash
+ pip install img2dataset
+ ```
+ 3. Download the images.
+ Replace `/path/to/GRIT_dataset/grit-20m` with the local path to this repository:
+ ```bash
+ img2dataset --url_list /path/to/GRIT_dataset/grit-20m --input_format "parquet" \
+ --url_col "url" --caption_col "caption" --output_format webdataset \
+ --output_folder /tmp/grit --processes_count 4 --thread_count 64 --image_size 256 \
+ --resize_only_if_bigger=True --resize_mode="keep_ratio" --skip_reencode=True \
+ --save_additional_columns '["id","noun_chunks","ref_exps","clip_similarity_vitb32","clip_similarity_vitl14"]' \
+ --enable_wandb False
+ ```
+ More img2dataset hyper-parameters can be found [here](https://github.com/rom1504/img2dataset#api).
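Once the repository is cloned, the metadata can be inspected before (or instead of) downloading any images. A minimal sketch with pandas and pyarrow; the glob pattern is an assumption, so point it at wherever the parquet shards sit in your clone:

```python
# Illustrative sketch only: peek at the GRIT metadata without downloading images.
# Assumes `pip install pandas pyarrow` and a local clone at /path/to/GRIT_dataset/grit-20m.
import glob

import pandas as pd

parquet_files = sorted(glob.glob("/path/to/GRIT_dataset/grit-20m/*.parquet"))
df = pd.read_parquet(parquet_files[0])

# Columns referenced by the img2dataset command above:
# url, caption, id, noun_chunks, ref_exps, clip_similarity_vitb32, clip_similarity_vitl14
print(df.columns.tolist())
print(df[["caption", "noun_chunks", "ref_exps"]].head())
```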
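After the download finishes, each shard in `/tmp/grit` is a webdataset tar holding the image, the caption, and a JSON record with the columns listed in `--save_additional_columns`. A minimal read-back sketch using the optional `webdataset` package; the shard name and per-sample keys follow img2dataset's usual webdataset layout and are assumptions, so adjust them to what actually lands on disk:

```python
# Illustrative sketch only: iterate one downloaded shard and read the saved metadata.
# Assumes `pip install webdataset` and that shards such as /tmp/grit/00000.tar exist.
import webdataset as wds

dataset = wds.WebDataset("/tmp/grit/00000.tar").decode("pil")

for sample in dataset:
    image = sample["jpg"]   # decoded PIL image
    meta = sample["json"]   # per-sample metadata written by img2dataset
    print(meta.get("caption"), meta.get("noun_chunks"))
    break
```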
 
  ### Citation Information
  If you apply this dataset to any project and research, please cite our paper and coyo-700m:
 