---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- NevIR
task_ids:
- document-retrieval
config_names:
- corpus
- queries
- qrels
- top_ranked
tags:
- text-retrieval
- negation
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 5112
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 5112
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 2766 # 1383 * 2
- config_name: top_ranked
  features:
  - name: query-id
    dtype: string
  - name: corpus-ids
    list: string
  splits:
  - name: test
    num_examples: 2766
configs:
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: top_ranked
  data_files:
  - split: test
    path: top_ranked/test.jsonl
---

# NevIR-mteb Dataset

This is the MTEB-compatible version of the NevIR dataset, structured for information retrieval tasks focused on negation understanding.
## Dataset Structure

The dataset is organized into four configurations:

1. `corpus`: Contains all documents (doc1 and doc2 from each NevIR sample)
2. `queries`: Contains all queries (q1 and q2 from each sample)
3. `qrels` (loaded as the `default` config): Contains relevance judgments (q1 matches doc1, q2 matches doc2)
4. `top_ranked`: Contains the candidate documents for each query (both doc1 and doc2 for every query); see the sketch below
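
To make the pairing concrete, here is a small sketch of the rows that a single NevIR pair would contribute to the qrels (`default`) and `top_ranked` configurations. The ids (`q1`, `doc1`, ...) and the relevance score of 1.0 are illustrative placeholders derived from the schema above, not values taken from the dataset.

```python
# Illustrative placeholders only: ids and scores are assumptions based on the
# declared schema, not actual rows from the dataset.
qrels_rows = [
    {"query-id": "q1", "corpus-id": "doc1", "score": 1.0},  # q1 is relevant to doc1
    {"query-id": "q2", "corpus-id": "doc2", "score": 1.0},  # q2 is relevant to doc2
]
top_ranked_rows = [
    {"query-id": "q1", "corpus-ids": ["doc1", "doc2"]},  # both paired docs are candidates for q1
    {"query-id": "q2", "corpus-ids": ["doc1", "doc2"]},  # and for q2
]
```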
## Usage

```python
from datasets import load_dataset

# Loading without a config name returns the default configuration (the qrels)
dataset = load_dataset("orionweller/NevIR-mteb")

# Load specific configurations
corpus = load_dataset("orionweller/NevIR-mteb", "corpus")
queries = load_dataset("orionweller/NevIR-mteb", "queries")
qrels = load_dataset("orionweller/NevIR-mteb", "default")  # relevance judgments live in the "default" config
top_ranked = load_dataset("orionweller/NevIR-mteb", "top_ranked")
```
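
For evaluation, the relevance judgments and candidate lists are typically reshaped into dictionaries keyed by query id. The snippet below is a minimal sketch of that reshaping, assuming only the field names declared in `dataset_info` above; it is not an official MTEB loader.

```python
from collections import defaultdict

from datasets import load_dataset

qrels_split = load_dataset("orionweller/NevIR-mteb", "default", split="test")
top_ranked_split = load_dataset("orionweller/NevIR-mteb", "top_ranked", split="test")

# {query-id: {corpus-id: relevance score}}, the shape most IR evaluation tools expect
qrels = defaultdict(dict)
for row in qrels_split:
    qrels[row["query-id"]][row["corpus-id"]] = int(row["score"])

# {query-id: [candidate corpus-ids to re-rank]}
top_ranked = {row["query-id"]: row["corpus-ids"] for row in top_ranked_split}
```

From here, a retrieval model only needs to score each query against its candidate documents and compare the resulting ranking against `qrels`.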