pengzhiliang committed 67c8e38 · Parent: d324bb2 · update readme

README.md
task_categories:
- text-to-image
- image-to-text
- object-detection
- zero-shot-classification
task_ids:
- image-captioning
- visual-question-answering
---

# GRIT: Large-Scale Training Corpus of Grounded Image-Text Pairs

### Dataset Description
- **Repository:** [Microsoft unilm](https://github.com/microsoft/unilm/tree/master/kosmos-2)
- **Paper:** [Kosmos-2](https://arxiv.org/abs/2306.14824)
- **Point of Contact:** [Unilm team](mailto:fuwei@microsoft.com)

### Dataset Summary
We introduce GRIT, a large-scale dataset of Grounded Image-Text pairs, created from image-text pairs in [COYO-700M](https://github.com/kakaobrain/coyo-dataset) and LAION-2B. We construct a pipeline to extract text spans (i.e., noun phrases and referring expressions) from each caption and link them to their corresponding image regions. More details can be found in the [paper](https://arxiv.org/abs/2306.14824).
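
As a quick way to inspect records, here is a minimal, hypothetical loading sketch with the Hugging Face `datasets` library; the repository id below is a placeholder, so substitute the actual id of this dataset repo:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hugging Face id of this dataset.
grit = load_dataset("user/GRIT", split="train", streaming=True)

# Stream one record and look at the grounded fields described below.
sample = next(iter(grit))
print(sample["caption"])
print(sample["noun_chunks"], sample["ref_exps"])
```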

### Supported Tasks
During construction, we exclude an image-caption pair if no bounding boxes are retained. This procedure results in a high-quality image-caption subset of COYO-700M, which we plan to validate in future work.

Furthermore, the dataset contains text-span-bounding-box pairs, so it can be employed in many location-aware mono- and multimodal tasks, such as phrase grounding, referring expression comprehension, referring expression generation, and open-world object detection.

### Data Instance
One instance looks like this:
```python
{
  'key': '000373938',
  'clip_similarity_vitb32': 0.353271484375,
  'clip_similarity_vitl14': 0.2958984375,
  'id': 1795296605919,
  'url': "https://www.thestrapsaver.com/wp-content/uploads/customerservice-1.jpg",
  'caption': 'a wire hanger with a paper cover that reads we heart our customers',
  'width': 1024,
  'height': 693,
  'noun_chunks': [[19, 32, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 13, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]],
  'ref_exps': [[19, 66, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 66, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]]
}
```
- `key`: The file name generated by img2dataset when downloading COYO-700M (can be ignored).
- `clip_similarity_vitb32`: The cosine similarity between the text and image embeddings computed with [OpenAI CLIP](https://github.com/openai/CLIP) ViT-B/32, provided by COYO-700M.
- `clip_similarity_vitl14`: The cosine similarity between the text and image embeddings computed with [OpenAI CLIP](https://github.com/openai/CLIP) ViT-L/14, provided by COYO-700M.
- `id`: Unique 64-bit integer ID in COYO-700M.
- `url`: The image URL.
- `caption`: The corresponding caption.
- `width`: The width of the image.
- `height`: The height of the image.
- `noun_chunks`: The noun chunks (extracted by [spaCy](https://spacy.io/)) that have associated bounding boxes (predicted by [GLIP](https://github.com/microsoft/GLIP)). Each inner list contains, in order: the start index of the noun chunk in the caption, the end index, normalized x_min, normalized y_min, normalized x_max, normalized y_max, and the confidence score.
- `ref_exps`: The corresponding referring expressions, in the same format as `noun_chunks`. If a noun chunk has no expansion, it is simply copied (see the decoding sketch below).

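
To make the layout of `noun_chunks` and `ref_exps` concrete, here is a minimal decoding sketch. It assumes a record shaped like the example instance above (the `record` dict and `decode_spans` helper are illustrative, not part of the dataset tooling); it turns the character offsets back into caption substrings and the normalized boxes back into pixel coordinates:

```python
# Illustrative helper (not part of the dataset tooling): decode the grounded
# spans of one GRIT record as described in the field list above.
record = {
    'caption': 'a wire hanger with a paper cover that reads we heart our customers',
    'width': 1024,
    'height': 693,
    'ref_exps': [
        [19, 66, 0.0196, 0.3105, 0.9622, 0.9603, 0.7930],
        [0, 66, 0.0194, 0.0276, 0.9593, 0.9695, 0.6752],
    ],
}

def decode_spans(record, field='ref_exps'):
    """Yield (text span, pixel box, confidence) for each entry of `field`."""
    w, h = record['width'], record['height']
    for start, end, x_min, y_min, x_max, y_max, score in record[field]:
        phrase = record['caption'][int(start):int(end)]      # character offsets into the caption
        box = (x_min * w, y_min * h, x_max * w, y_max * h)   # un-normalize to pixel coordinates
        yield phrase, box, score

for phrase, box, score in decode_spans(record):
    print(f"{phrase!r} -> box={tuple(round(v, 1) for v in box)}, score={score:.2f}")
```

The same loop works for `noun_chunks`, since both fields share the seven-element layout.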

### Download images

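A minimal sketch of one way to fetch the images with [img2dataset](https://github.com/rom1504/img2dataset) (the tool referenced for `key` above). It assumes the metadata has been exported to a hypothetical `grit_urls.parquet` file with `url` and `caption` columns; adjust paths, columns, and sizes to your setup:

```python
# Hypothetical download sketch using img2dataset; the file name and settings
# are placeholders, not part of the official release.
from img2dataset import download

download(
    url_list="grit_urls.parquet",   # hypothetical export of the GRIT metadata
    input_format="parquet",
    url_col="url",
    caption_col="caption",
    output_folder="grit_images",
    output_format="webdataset",     # tar shards; "files" writes plain images instead
    image_size=512,                 # target size; see img2dataset docs for resize modes
    processes_count=8,
    thread_count=32,
)
```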

### Citation Information
If you use this dataset in any project or research, please cite our paper and COYO-700M:
```
@article{Kosmos2,
  title={Kosmos-2: Grounding Multimodal Large Language Models to the World},
  author={Zhiliang Peng and Wenhui Wang and Li Dong and Yaru Hao and Shaohan Huang and Shuming Ma and Furu Wei},
  journal={ArXiv},