---
license: "mit"
pretty_name: "M-BEIR"
task_categories:
- text-retrieval
- text-to-image
- image-to-text
- visual-question-answering
language:
- "en"
configs:
- config_name: query
data_files:
- split: train
path: "query/train/*.jsonl"
- split: union_train
path: "query/union_train/*.jsonl"
- split: val
path: "query/val/*.jsonl"
- split: test
path: "query/test/*.jsonl"
- config_name: cand_pool
data_files:
- split: mbeir_local
path: "cand_pool/local/*.jsonl"
- split: mbeir_global
path: "cand_pool/global/*.jsonl"
---
### **UniIR: Training and Benchmarking Universal Multimodal Information Retrievers** (ECCV 2024)
[**🌐 Homepage**](https://tiger-ai-lab.github.io/UniIR/) | [**🤗 Model (UniIR Checkpoints)**](https://huggingface.co/TIGER-Lab/UniIR) | [**🤗 Paper**](https://huggingface.co/papers/2311.17136) | [**📖 arXiv**](https://arxiv.org/pdf/2311.17136.pdf) | [**GitHub**](https://github.com/TIGER-AI-Lab/UniIR)
<a href="#install-git-lfs" style="color: red;">How to download the M-BEIR Dataset</a>
## 🔔News
- **🔥[2023-12-21]: Our M-BEIR Benchmark is now available for use.**
## **Dataset Summary**
**M-BEIR**, the **M**ultimodal **BE**nchmark for **I**nstructed **R**etrieval, is a comprehensive large-scale retrieval benchmark designed to train and evaluate unified multimodal retrieval models (**UniIR models**).
The M-BEIR benchmark comprises eight multimodal retrieval tasks and ten datasets from a variety of domains and sources.
Each task is accompanied by human-authored instructions, encompassing 1.5 million queries and a pool of 5.6 million retrieval candidates in total.
## **Dataset Structure Overview**
The M-BEIR dataset is structured into five primary components: Query Data, Candidate Pool, Instructions, Qrels, and Images.
### Query Data
Below is the directory structure for the query data:
```
query/
│
├── train/
│   ├── mbeir_cirr_train.jsonl
│   ├── mbeir_edis_train.jsonl
│   ...
├── union_train/
│   └── mbeir_union_up_train.jsonl
├── val/
│   ├── mbeir_visualnews_task0_val.jsonl
│   ├── mbeir_visualnews_task3_val.jsonl
│   ...
└── test/
    ├── mbeir_visualnews_task0_test.jsonl
    ├── mbeir_visualnews_task3_test.jsonl
    ...
```
`train`: Contains all the training data from 8 different datasets formatted in the M-BEIR style.
`mbeir_union_up_train.jsonl`: This file is the default training data for in-batch contrastive training of UniIR models.
It aggregates all the data from the `train` directory, with relatively smaller datasets upsampled to balance the training process (a toy upsampling sketch follows the file descriptions below).
`val`: Contains separate files for validation queries, organized by task.
`test`: Contains separate files for test queries, organized by task.
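As referenced above, here is a toy, illustrative sketch of upsampling by repetition. It is not the authors' exact balancing recipe; the `upsample` helper and its parameters are hypothetical.
```python
import random

def upsample(queries, target_size):
    """Illustrative only: repeat a smaller dataset's query list (plus a
    random top-up) so it contributes ~target_size queries to the union.
    NOT the authors' exact balancing recipe."""
    repeats, remainder = divmod(target_size, len(queries))
    return queries * repeats + random.sample(queries, remainder)
```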
Every M-BEIR query instance has at least one positive candidate and may have no negative candidates.
Each line in a Query Data file represents a unique query. The structure of each query JSON object is as follows:
```json
{
"qid": "A unique identifier formatted as {dataset_id}:{query_id}",
"query_txt": "The text component of the query",
"query_img_path": "The file path to the associated query image",
"query_modality": "The modality type of the query (text, image or text,image)",
"query_src_content": "Additional content from the original dataset, presented as a string by json.dumps()",
"pos_cand_list": [
{
"did": "A unique identifier formatted as {dataset_id}:{doc_id}"
}
// ... more positive candidates
],
"neg_cand_list": [
{
"did": "A unique identifier formatted as {dataset_id}:{doc_id}"
}
// ... more negative candidates
]
}
```
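For illustration, here is a minimal Python sketch (not the official UniIR dataloader) that reads a query file and unpacks the composite identifiers; the file name is taken from the `train` listing above.
```python
import json

# Read one M-BEIR query file (JSON Lines: one query object per line).
queries = []
with open("query/train/mbeir_cirr_train.jsonl") as f:
    for line in f:
        queries.append(json.loads(line))

q = queries[0]
dataset_id, query_id = q["qid"].split(":")  # qid = "{dataset_id}:{query_id}"
pos_dids = [c["did"] for c in q["pos_cand_list"]]
print(dataset_id, q["query_modality"], len(pos_dids))
```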
### Candidate Pool
The Candidate Pool contains potential matching documents for the queries.
#### M-BEIR_5.6M
Within the global directory, the default retrieval setting requires models to retrieve positive candidates from a heterogeneous pool spanning multiple modalities and domains.
M-BEIR's global candidate pool, comprising 5.6 million candidates, merges the retrieval corpora of all tasks and datasets.
#### M-BEIR_local
Within the local directory, we provide dataset-task-specific pools as M-BEIR_local. Each such pool contains homogeneous candidates that originate from the corresponding original dataset.
Below is the directory structure for the candidate pool:
```
cand_pool/
│
├── global/
│   ├── mbeir_union_val_cand_pool.jsonl
│   └── mbeir_union_test_cand_pool.jsonl
│
└── local/
    ├── mbeir_visualnews_task0_cand_pool.jsonl
    ├── mbeir_visualnews_task3_cand_pool.jsonl
    ...
```
The structure of each candidate JSON object in a cand_pool file is as follows:
```json
{
"did": "A unique identifier for the document, formatted as {dataset_id}:{doc_id}",
"txt": "The text content of the candidate document",
"img_path": "The file path to the candidate document's image",
"modality": "The modality type of the candidate (e.g., text, image or text,image)",
"src_content": "Additional content from the original dataset, presented as a string by json.dumps()"
}
```
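Following the same schema, a minimal sketch that indexes a local candidate pool by `did`, so a query's positive candidates can be resolved with dictionary lookups; the file name is taken from the `local` listing above.
```python
import json

# Index a local candidate pool by document id ("{dataset_id}:{doc_id}").
cand_by_did = {}
with open("cand_pool/local/mbeir_visualnews_task0_cand_pool.jsonl") as f:
    for line in f:
        cand = json.loads(line)
        cand_by_did[cand["did"]] = cand

# Resolving a query's positives then becomes:
# pos_cands = [cand_by_did[c["did"]] for c in query["pos_cand_list"]]
```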
### Instructions
`query_instructions.tsv` contains the human-authored instructions used within the UniIR framework. Each task is accompanied by four human-authored instructions. For detailed usage, please refer to [**GitHub Repo**](https://github.com/TIGER-AI-Lab/UniIR).
### Qrels
Within the `qrels` directory, you will find qrels for both the validation and test sets. These files serve the purpose of evaluating UniIR models. For detailed information, please refer to [**GitHub Repo**](https://github.com/TIGER-AI-Lab/UniIR).
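For orientation only, a hedged sketch of a qrels reader: it assumes a TREC-style four-column layout (qid, iteration, did, relevance) and a hypothetical file name; consult the [**GitHub Repo**](https://github.com/TIGER-AI-Lab/UniIR) for the authoritative format.
```python
from collections import defaultdict

# ASSUMPTION: TREC-style whitespace-separated columns
# "qid iteration did relevance"; the file name below is hypothetical.
qrels = defaultdict(dict)
with open("qrels/mbeir_visualnews_task0_val_qrels.txt") as f:
    for line in f:
        qid, _, did, rel = line.split()
        qrels[qid][did] = int(rel)
```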
## **How to Use**
### Downloading the M-BEIR Dataset
<a name="install-git-lfs"></a>
#### Step 1: Install Git Large File Storage (LFS)
Before you begin, ensure that **Git LFS** is installed on your system. Git LFS is essential for handling large data files. If you do not have Git LFS installed, follow these steps:
Download and install Git LFS from the official website.
After installation, run the following command in your terminal to initialize Git LFS:
```
git lfs install
```
#### Step 2: Clone the M-BEIR Dataset Repository
Once Git LFS is set up, you can clone the M-BEIR repository from this page. Open your terminal and execute the following command:
```
git clone https://huggingface.co/datasets/TIGER-Lab/M-BEIR
```
Please note that the M-BEIR dataset is quite large, and downloading it can take several hours depending on your internet connection.
During this time, your terminal may show little activity and can appear stuck; as long as no error message appears, the download is still in progress.
### Decompressing M-BEIR Images
After downloading, you will need to decompress the image files. Follow these steps in your terminal:
```bash
# Navigate to the M-BEIR directory
cd path/to/M-BEIR
# Combine the split tar.gz parts into a single archive
cat mbeir_images.tar.gz.part-00 mbeir_images.tar.gz.part-01 mbeir_images.tar.gz.part-02 mbeir_images.tar.gz.part-03 > mbeir_images.tar.gz
# Extract the images from the tar.gz file
tar -xzf mbeir_images.tar.gz
```
Now, you are ready to use the M-BEIR benchmark.
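Alternatively, the configs declared in this card's YAML header let you load the JSONL annotations directly through the Hugging Face `datasets` library; note that images still require the extraction step above, since records only store `img_path` strings.
```python
from datasets import load_dataset

# Config and split names come from this card's YAML header.
query_train = load_dataset("TIGER-Lab/M-BEIR", "query", split="train")
cand_pool = load_dataset("TIGER-Lab/M-BEIR", "cand_pool", split="mbeir_local")
print(query_train[0]["qid"], cand_pool[0]["did"])
```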
### Dataloader and Evaluation Pipeline
We offer a dedicated dataloader and evaluation pipeline for the M-BEIR benchmark. Please refer to [**GitHub Repo**](https://github.com/TIGER-AI-Lab/UniIR) for detailed information.
## **Citation**
Please cite our paper if you use our data, model or code.
```
@article{wei2023uniir,
title={UniIR: Training and Benchmarking Universal Multimodal Information Retrievers},
author={Wei, Cong and Chen, Yang and Chen, Haonan and Hu, Hexiang and Zhang, Ge and Fu, Jie and Ritter, Alan and Chen, Wenhu},
journal={arXiv preprint arXiv:2311.17136},
year={2023}
}
```