Commit 7c21e35 · 1 Parent(s): 50aa0d8 · author: adwaitagashe

Updated dataset loading to include annotations

Files changed (2):
  1. README.md +159 -122
  2. bordirlines.py +69 -1
README.md CHANGED
````diff
@@ -1,65 +1,66 @@
 ---
 language:
-- en
-- ar
-- es
-- fr
-- ru
-- hi
-- ms
-- sw
-- az
-- ko
-- pt
-- hy
-- th
-- uk
-- ur
-- sr
-- iw
-- ja
-- hr
-- tl
-- ky
-- vi
-- fa
-- tg
-- mg
-- nl
-- ne
-- uz
-- my
-- da
-- dz
-- id
-- is
-- tr
-- lo
-- sl
-- so
-- mn
-- bn
-- bs
-- ht
-- el
-- it
-- to
-- ka
-- sn
-- sq
-- zh
+- en
+- ar
+- es
+- fr
+- ru
+- hi
+- ms
+- sw
+- az
+- ko
+- pt
+- hy
+- th
+- uk
+- ur
+- sr
+- iw
+- ja
+- hr
+- tl
+- ky
+- vi
+- fa
+- tg
+- mg
+- nl
+- ne
+- uz
+- my
+- da
+- dz
+- id
+- is
+- tr
+- lo
+- sl
+- so
+- mn
+- bn
+- bs
+- ht
+- el
+- it
+- to
+- ka
+- sn
+- sq
+- zh
 pretty_name: BordIRlines
 multilinguality:
-- multilingual
+- multilingual
 annotations_creators:
-- machine-generated
+- human
+- machine-generated
 language_creators:
-- found
+- found
 source_datasets:
-- manestay/borderlines
+- manestay/borderlines
 license: mit
 task_categories:
-- question-answering
+- question-answering
 arxiv: 2410.01171
 ---
 
@@ -74,85 +75,110 @@ Each `doc` is a passage from a Wikipedia article.
 
 ### Languages
 
-The dataset includes docs and queries in the following __languages__:
-
-* `en`: English
-* `zht`: Traditional Chinese
-* `ar`: Arabic
-* `zhs`: Simplified Chinese
-* `es`: Spanish
-* `fr`: French
-* `ru`: Russian
-* `hi`: Hindi
-* `ms`: Malay
-* `sw`: Swahili
-* `az`: Azerbaijani
-* `ko`: Korean
-* `pt`: Portuguese
-* `hy`: Armenian
-* `th`: Thai
-* `uk`: Ukrainian
-* `ur`: Urdu
-* `sr`: Serbian
-* `iw`: Hebrew
-* `ja`: Japanese
-* `hr`: Croatian
-* `tl`: Tagalog
-* `ky`: Kyrgyz
-* `vi`: Vietnamese
-* `fa`: Persian
-* `tg`: Tajik
-* `mg`: Malagasy
-* `nl`: Dutch
-* `ne`: Nepali
-* `uz`: Uzbek
-* `my`: Burmese
-* `da`: Danish
-* `dz`: Dzongkha
-* `id`: Indonesian
-* `is`: Icelandic
-* `tr`: Turkish
-* `lo`: Lao
-* `sl`: Slovenian
-* `so`: Somali
-* `mn`: Mongolian
-* `bn`: Bengali
-* `bs`: Bosnian
-* `ht`: Haitian Creole
-* `el`: Greek
-* `it`: Italian
-* `to`: Tonga
-* `ka`: Georgian
-* `sn`: Shona
-* `sq`: Albanian
-* `control`: see below
+The dataset includes docs and queries in the following **languages**:
+
+- `en`: English
+- `zht`: Traditional Chinese
+- `ar`: Arabic
+- `zhs`: Simplified Chinese
+- `es`: Spanish
+- `fr`: French
+- `ru`: Russian
+- `hi`: Hindi
+- `ms`: Malay
+- `sw`: Swahili
+- `az`: Azerbaijani
+- `ko`: Korean
+- `pt`: Portuguese
+- `hy`: Armenian
+- `th`: Thai
+- `uk`: Ukrainian
+- `ur`: Urdu
+- `sr`: Serbian
+- `iw`: Hebrew
+- `ja`: Japanese
+- `hr`: Croatian
+- `tl`: Tagalog
+- `ky`: Kyrgyz
+- `vi`: Vietnamese
+- `fa`: Persian
+- `tg`: Tajik
+- `mg`: Malagasy
+- `nl`: Dutch
+- `ne`: Nepali
+- `uz`: Uzbek
+- `my`: Burmese
+- `da`: Danish
+- `dz`: Dzongkha
+- `id`: Indonesian
+- `is`: Icelandic
+- `tr`: Turkish
+- `lo`: Lao
+- `sl`: Slovenian
+- `so`: Somali
+- `mn`: Mongolian
+- `bn`: Bengali
+- `bs`: Bosnian
+- `ht`: Haitian Creole
+- `el`: Greek
+- `it`: Italian
+- `to`: Tonga
+- `ka`: Georgian
+- `sn`: Shona
+- `sq`: Albanian
+- `control`: see below
 
 The **control** language is English, and contains the queries for all 251 territories. In contrast, **en** is only the 38 territories which have an English-speaking claimant.
 
+### Annotations
+
+The dataset contains two types of relevance annotations:
+
+1. **Human Annotations**:
+
+   - Provided by three annotators for a subset of query-document pairs.
+   - Relevance is determined by majority vote across annotators.
+   - Territories are listed per annotator, capturing individual perspectives.
+
+2. **LLM Annotations**:
+   - Includes two modes:
+     - **Zero-shot**: Predictions without any task-specific examples.
+     - **Few-shot**: Predictions with a small number of task-specific examples.
+   - Default mode is **few-shot**.
+
 ## Systems
+
 We have processed retrieval results for these IR systems:
-* `openai`: OpenAI's model `text-embedding-3-large`, cosine similarity
-* `m3`: M3-embedding ([link](https://huggingface.co/BAAI/bge-m3)) (Chen et al., 2024)
+
+- `openai`: OpenAI's model `text-embedding-3-large`, cosine similarity
+- `m3`: M3-embedding ([link](https://huggingface.co/BAAI/bge-m3)) (Chen et al., 2024)
 
 ## Modes
+
 Considering a user query in language `l` on a territory `t`, there are 4 modes for the IR.
-* `qlang`: consider passages in `{l}`. This is monolingual IR (the default).
-* `qlang_en`: consider passages in either `{l, en}`.
-* `en`: consider passages in `{en}`.
-* `rel_langs`: consider passages in all relevant languages to `t` + `en`, so `{l1, l2, ..., en}`. This is a set, so `en` will not be duplicated if it already is relevant.
+
+- `qlang`: consider passages in `{l}`. This is monolingual IR (the default).
+- `qlang_en`: consider passages in either `{l, en}`.
+- `en`: consider passages in `{en}`.
+- `rel_langs`: consider passages in all relevant languages to `t` + `en`, so `{l1, l2, ..., en}`. This is a set, so `en` will not be duplicated if it already is relevant.
 
 ## Dataset Structure
 
 ### Data Fields
 
 The dataset consists of the following fields:
-* `query_id (string)`: The id of the query.
-* `query (string)`: The query text as provided by the `queries.tsv` file.
-* `territory (string)`: The territory of the query hit.
-* `rank (int32)`: The rank of the document for the corresponding query.
-* `score (float32)`: The relevance score of the document as provided by the search engine or model.
-* `doc_id (string)`: The unique identifier of the article.
-* `doc_text (string)`: The full text of the corresponding article or document.
+
+- `query_id (string)`: The id of the query.
+- `query (string)`: The query text as provided by the `queries.tsv` file.
+- `territory (string)`: The territory of the query hit.
+- `rank (int32)`: The rank of the document for the corresponding query.
+- `score (float32)`: The relevance score of the document as provided by the search engine or model.
+- `doc_id (string)`: The unique identifier of the article.
+- `doc_text (string)`: The full text of the corresponding article or document.
+- `relevant_human (bool)`: Majority relevance determined by human annotators.
+- `territory_human (list[string])`: Territories as judged by human annotators.
+- `relevant_llm_zeroshot (bool)`: LLM zero-shot relevance prediction.
+- `relevant_llm_fewshot (bool)`: LLM few-shot relevance prediction.
 
 ### Download Structure
 
@@ -167,13 +193,17 @@ data/
     ...
     all_docs.json
     queries.tsv
+    human_annotations.tsv
+    llm_annotations.tsv
 ```
 
-* `queries.tsv`: Contains the list of query IDs and their associated query texts.
-* `all_docs.json`: JSON dict containing all docs. It is organized as a nested dict, with keys `lang`, and values another dict with keys `doc_id`, and values `doc_text`.
-* `{lang}\_query_hits.tsv`: A TSV file with relevance scores and hit ranks for queries.
+- `queries.tsv`: Contains the list of query IDs and their associated query texts.
+- `all_docs.json`: JSON dict containing all docs. It is organized as a nested dict, with keys `lang`, and values another dict with keys `doc_id`, and values `doc_text`.
+- `{lang}\_query_hits.tsv`: A TSV file with relevance scores and hit ranks for queries.
+- `human_annotations.tsv`: A TSV file with human relevance annotations.
+- `llm_annotations.tsv`: A TSV file with LLM relevance predictions.
 
-Currently, there are 50 langs * 1 system * 4 modes = 200 query hit TSV files.
+Currently, there are 50 langs _ 1 system _ 4 modes = 200 query hit TSV files.
 
 ## Example Usage
 
@@ -195,9 +225,16 @@ ds_oa_zhs2 = load_dataset("borderlines/bordirlines", "zhs", split="openai.qlang"
 ds_m3_zhs1 = load_dataset("borderlines/bordirlines", "zhs", split="m3.en")
 # load Dataset for Simplified Chinese, qlang mode, m3 embedding
 ds_m3_zhs2 = load_dataset("borderlines/bordirlines", "zhs", split="m3.qlang")
+
+# Load Dataset for English, relevant-only with human annotations
+ds_human_en = load_dataset("borderlines/bordirlines", "en", relevant_only=True, annotation_type="human")
+
+# Load Dataset for Simplified Chinese, few-shot LLM mode
+ds_llm_fewshot_zhs = load_dataset("borderlines/bordirlines", "zhs", relevant_only=True, annotation_type="llm", llm_mode="fewshot")
 ```
 
 ## Citation
+
 ```
 @misc{li2024bordirlines,
    title={BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation},
````
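The four IR modes described in the README's Modes section reduce to picking a set of passage languages per query. A minimal sketch of that mapping, under stated assumptions: `candidate_langs` and `territory_langs` are hypothetical names introduced here for illustration, not functions from the dataset's code.

```python
# Hypothetical helper illustrating the README's four IR modes.
# `territory_langs` stands in for the languages of a territory's claimants.
def candidate_langs(mode, qlang, territory_langs):
    """Return the set of passage languages searched under each mode."""
    if mode == "qlang":  # monolingual IR (the default)
        return {qlang}
    if mode == "qlang_en":  # query language plus English
        return {qlang, "en"}
    if mode == "en":  # English-only retrieval
        return {"en"}
    if mode == "rel_langs":  # all claimant languages plus English; a set, so no duplicates
        return set(territory_langs) | {"en"}
    raise ValueError(f"unknown mode: {mode}")
```

Because `rel_langs` builds a set, `en` appears once even when English is already a relevant claimant language, matching the README's note.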
bordirlines.py CHANGED
```diff
@@ -95,6 +95,12 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
         for lang in SUPPORTED_LANGUAGES
     ]
 
+    def __init__(self, *args, relevant_only=False, annotation_type=None, llm_mode="fewshot", **kwargs):
+        super().__init__(*args, **kwargs)
+        self.relevant_only = relevant_only
+        self.annotation_type = annotation_type
+        self.llm_mode = llm_mode  # Choose between "zeroshot" and "fewshot". Default: "fewshot".
+
     def _info(self):
         return datasets.DatasetInfo(
             description="IR Dataset for BordIRLines paper.",
@@ -109,6 +115,10 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
                     "doc_id": datasets.Value("string"),
                     "doc_text": datasets.Value("string"),
                     "doc_lang": datasets.Value("string"),
+                    "relevant_human": datasets.Value("bool"),
+                    "territory_human": datasets.Sequence(datasets.Value("string")),
+                    "relevant_llm_zeroshot": datasets.Value("bool"),
+                    "relevant_llm_fewshot": datasets.Value("bool"),
                 }
             ),
         )
@@ -117,6 +127,8 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
         base_url = self.config.data_root_dir
         queries_path = f"{base_url}/queries.tsv"
         docs_path = dl_manager.download_and_extract(f"{base_url}/all_docs.json")
+        human_annotations_path = dl_manager.download_and_extract(f"{base_url}/human_annotations.tsv")
+        llm_annotations_path = dl_manager.download_and_extract(f"{base_url}/llm_annotations.tsv")
 
         lang = self.config.language
 
@@ -131,6 +143,8 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
                     "hits": f"{base_url}/{lang}/{system}/{mode}/{lang}_query_hits.tsv",
                     "docs": docs_path,
                     "queries": queries_path,
+                    "human_annotations": human_annotations_path,
+                    "llm_annotations": llm_annotations_path,
                 }
             )
 
@@ -140,13 +154,15 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
                     "hits_path": downloaded_data[source]["hits"],
                     "docs_path": downloaded_data[source]["docs"],
                     "queries_path": downloaded_data[source]["queries"],
+                    "human_annotations_path": downloaded_data[source]["human_annotations"],
+                    "llm_annotations_path": downloaded_data[source]["llm_annotations"],
                 },
             )
             splits.append(split)
 
         return splits
 
-    def _generate_examples(self, hits_path, docs_path, queries_path):
+    def _generate_examples(self, hits_path, docs_path, queries_path, human_annotations_path, llm_annotations_path):
         n_hits = self.config.n_hits
         queries_df = pd.read_csv(queries_path, sep="\t")
         query_map = dict(zip(queries_df["query_id"], queries_df["query_text"]))
@@ -156,6 +172,9 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
         docs = load_json(docs_path)
 
         hits = pd.read_csv(hits_path, sep="\t")
+        human_annotations = pd.read_csv(human_annotations_path, sep="\t")
+        llm_annotations = pd.read_csv(llm_annotations_path, sep="\t")
+
         if n_hits:
             hits = hits.groupby("query_id").head(n_hits)
 
@@ -164,12 +183,57 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
         hits = hits.sort_values(by=["query_id_int", "rank"])
         hits = hits.drop(columns=["query_id_int"])
 
+        human_map = human_annotations.set_index(["query_id", "doc_id"]).to_dict(orient="index")
+        llm_map = llm_annotations.set_index(["query_id", "doc_id"]).to_dict(orient="index")
+
         for _, row in hits.iterrows():
             doc_id = row["doc_id"]
             doc_lang = row["doc_lang"]
             query_id = row["query_id"]
             query_text = query_map[query_id]
             query_lang = query_to_lang_map[query_id]
+
+            # Get Human Data
+            human_data = human_map.get((query_id, doc_id), {})
+            # Parse relevant_human_votes manually
+            raw_votes = human_data.get("relevant_human", "[]")
+            relevant_human_votes = [
+                True if v.strip() == "True" else False if v.strip() == "False" else False
+                for v in raw_votes.strip("[]").split(",")
+                if v.strip()
+            ]
+
+            # Parse territory_human manually
+            raw_territories = human_data.get("territory_human", "[]")
+            territory_human = [
+                v.strip().strip("'").strip('"')  # Remove extra quotes and whitespace
+                for v in raw_territories.strip("[]").split(",")
+                if v.strip()
+            ]
+
+            # Calculate majority relevance
+            majority_relevant_human = (
+                sum(relevant_human_votes) > len(relevant_human_votes) / 2 if relevant_human_votes else False
+            )
+
+            # Get LLM Data
+            llm_data = llm_map.get((query_id, doc_id), {})
+            relevant_llm = (
+                llm_data.get("relevant_fewshot", None)
+                if self.llm_mode == "fewshot"
+                else llm_data.get("relevant_zeroshot", None)
+            )
+            # Filtering logic
+            if self.relevant_only:
+                if self.annotation_type == "human" and not majority_relevant_human:
+                    continue
+                elif self.annotation_type == "llm" and not (relevant_llm is True):
+                    continue
+                elif not majority_relevant_human and not (relevant_llm is True):
+                    continue
+
             yield (
                 counter,
                 {
@@ -182,6 +246,10 @@ class BordIRLinesDataset(datasets.GeneratorBasedBuilder):
                     "doc_id": doc_id,
                     "doc_text": docs[doc_lang][doc_id],
                     "doc_lang": doc_lang,
+                    "relevant_human": majority_relevant_human,
+                    "territory_human": territory_human,
+                    "relevant_llm_zeroshot": llm_data.get("relevant_zeroshot", None),
+                    "relevant_llm_fewshot": llm_data.get("relevant_fewshot", None),
                 },
            )
```
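The core of this commit is the majority-vote aggregation over stringified per-annotator votes (e.g. `"[True, False, True]"` stored in a TSV cell). A standalone sketch of that logic, mirroring the committed parsing but not importing the module itself:

```python
# Illustrative reimplementation of the commit's vote parsing and
# majority-relevance rule; `majority_relevant` is a name chosen here,
# not a function in bordirlines.py.
def majority_relevant(raw_votes: str) -> bool:
    """Parse a stringified list of votes and return True on a strict majority."""
    votes = [
        v.strip() == "True"  # as in the commit, anything other than "True" counts as False
        for v in raw_votes.strip("[]").split(",")
        if v.strip()
    ]
    # Strict majority: more than half of the annotators must have voted relevant
    return sum(votes) > len(votes) / 2 if votes else False
```

With three annotators, as the README's Annotations section describes, `"[True, False, True]"` passes and `"[True, False, False]"` does not; an empty vote list defaults to not relevant, matching the filtering behavior above.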