### **UniIR: Training and Benchmarking Universal Multimodal Information Retrievers**

[**🌐 Homepage**](https://tiger-ai-lab.github.io/UniIR/) | [**🤗 Paper**](https://huggingface.co/papers/2311.17136) | [**📖 arXiv**](https://arxiv.org/pdf/2311.17136.pdf) | [**GitHub**](https://github.com/TIGER-AI-Lab/UniIR)

## 🔔News

- **🔥[2023-12-21]: Our M-BEIR benchmark is now available for use.**
## **Dataset Summary**

**M-BEIR**, the **M**ultimodal **BE**nchmark for **I**nstructed **R**etrieval, is a comprehensive, large-scale retrieval benchmark designed to train and evaluate unified multimodal retrieval models (**UniIR models**).
The benchmark comprises eight multimodal retrieval tasks and ten datasets drawn from a variety of domains and sources.
Each task is accompanied by human-authored instructions; in total, M-BEIR contains 1.5 million queries and a pool of 5.6 million retrieval candidates.
### Instructions

`query_instructions.tsv` contains the human-authored instructions used within the UniIR framework; each task comes with four of them. For detailed usage, please refer to the [**GitHub repo**](https://github.com/TIGER-AI-Lab/UniIR).
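As a concrete illustration, a tab-separated instruction file can be parsed with Python's `csv` module. The sample row and column names below are hypothetical, not the dataset's actual schema; consult the GitHub repo for the real layout.

```python
import csv
import io

# Hypothetical sample mimicking query_instructions.tsv: one row per task,
# four human-authored instruction columns. Column names are illustrative.
sample = (
    "task\tprompt_1\tprompt_2\tprompt_3\tprompt_4\n"
    "image2text\tFind a caption for the image.\tRetrieve text matching the image.\t"
    "Identify the matching description.\tLocate the relevant caption.\n"
)

reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
# Map each task to its list of four instructions.
instructions = {
    row["task"]: [row[f"prompt_{i}"] for i in range(1, 5)]
    for row in reader
}
print(len(instructions["image2text"]))  # 4
```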
### Qrels

The `qrels` directory contains qrels (relevance judgments) for both the validation and test sets; these files are used to evaluate UniIR models. For detailed information, please refer to the [**GitHub repo**](https://github.com/TIGER-AI-Lab/UniIR).
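To illustrate how qrels typically drive evaluation, here is a minimal Recall@k sketch. The toy qrels and run dictionaries are invented for the example; the actual file format is documented in the GitHub repo.

```python
def recall_at_k(qrels, run, k):
    """Mean Recall@k over queries.

    qrels: {query_id: set of relevant candidate ids}
    run:   {query_id: ranked list of retrieved candidate ids}
    """
    scores = []
    for qid, relevant in qrels.items():
        # Take the top-k retrieved candidates for this query.
        retrieved = set(run.get(qid, [])[:k])
        scores.append(len(retrieved & relevant) / len(relevant))
    return sum(scores) / len(scores)

# Toy example: two queries with hand-made judgments and rankings.
qrels = {"q1": {"c1", "c3"}, "q2": {"c2"}}
run = {"q1": ["c3", "c9", "c1"], "q2": ["c7", "c2"]}
print(recall_at_k(qrels, run, 2))  # 0.75
```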
## **How to Use**

### Downloading the M-BEIR Dataset
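One common way to fetch a Hugging Face dataset repository is a Git LFS clone. The repository URL below is an assumption based on this dataset card; adjust it if the actual repo id differs.

```shell
# Assumed repo id; verify against this dataset card's URL before running.
git lfs install
git clone https://huggingface.co/datasets/TIGER-Lab/M-BEIR
```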