---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- NevIR
task_ids:
- document-retrieval
config_names:
- corpus
- queries
- default
- top_ranked
tags:
- text-retrieval
- negation
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_examples: 5112
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_examples: 5112
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_examples: 2766 # 1383 * 2
- config_name: top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
list: string
splits:
- name: test
num_examples: 2766
configs:
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
- config_name: default
data_files:
- split: test
path: qrels/test.jsonl
- config_name: top_ranked
data_files:
- split: test
path: top_ranked/test.jsonl
---
# NevIR-mteb Dataset
This is the MTEB-compatible version of the NevIR dataset, structured for information retrieval tasks focused on negation understanding.
## Dataset Structure
The dataset is organized into multiple configurations:
1. `corpus`: Contains all documents (doc1 and doc2 from each sample)
2. `queries`: Contains all queries (q1 and q2 from each sample)
3. `default` (qrels): Contains relevance judgments (q1 pairs with doc1, q2 pairs with doc2)
4. `top_ranked`: Contains candidate documents for each query (both doc1 and doc2 for every query)
## Usage
```python
from datasets import load_dataset

# Loading without a config name returns the default config (the qrels)
dataset = load_dataset("orionweller/NevIR-mteb")

# Load specific configurations
corpus = load_dataset("orionweller/NevIR-mteb", "corpus")
queries = load_dataset("orionweller/NevIR-mteb", "queries")
qrels = load_dataset("orionweller/NevIR-mteb", "default")  # qrels live in the "default" config
top_ranked = load_dataset("orionweller/NevIR-mteb", "top_ranked")
```
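Once loaded, the qrels rows (`query-id`, `corpus-id`, `score`) are typically reshaped into the nested `{query_id: {corpus_id: relevance}}` mapping that pytrec_eval-style scorers consume. The sketch below uses illustrative stand-in rows (the IDs are hypothetical, but the field names match the dataset schema above):

```python
from collections import defaultdict

# Illustrative stand-ins for rows from the "default" (qrels) config;
# in practice these come from iterating qrels["test"].
rows = [
    {"query-id": "q1", "corpus-id": "doc1", "score": 1.0},
    {"query-id": "q2", "corpus-id": "doc2", "score": 1.0},
]

# Group judgments by query, casting scores to int relevance labels
qrels_dict = defaultdict(dict)
for row in rows:
    qrels_dict[row["query-id"]][row["corpus-id"]] = int(row["score"])

print(dict(qrels_dict))  # {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```

The `top_ranked` config can be flattened the same way, mapping each `query-id` to its candidate `corpus-ids` list for reranking.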