Modalities: Text
Formats: csv
Languages: English
Size: < 1K
Libraries: Datasets, Dask
javiccano committed c20e361 (verified) · parent: 786dd41

Update README.md

Files changed (1): README.md (+122 −14)
README.md CHANGED
@@ -24,10 +24,11 @@ p{color:Black !important;}
24
  <!-- <img src="./figs/Example.png" width=70%/> -->
25
  </p>
26
 
 
 
 
27
 
28
 
29
- The Wikipedia Contradict Benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess LLM performance when models are augmented with retrieved passages containing real-world knowledge conflicts. The dataset was created specifically for this evaluation task.
30
-
31
  <!-- Note that, in the dataset viewer, there are 130 valid-tag instances, but each instance can contain more than one question, each with its respective two answers; hence, the total number of questions is 253. -->
32
 
33
  <!-- This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). -->
@@ -38,9 +39,22 @@ Wikipedia contradict benchmark is a dataset consisting of 253 high-quality, huma
38
 
39
  <!-- Provide a longer summary of what this dataset is. -->
40
 
 
 
41
  Wikipedia contradict benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts.
 
 
 
 
42
 
 
 
43
  Each instance consists of a question, a pair of contradictory passages extracted from Wikipedia, and two distinct answers, each derived from one of the passages. Each passage pair is annotated by a human annotator who identifies where the conflicting information is and what type of conflict is observed. The annotator then produces a set of questions related to the passages, with different answers reflecting the conflicting sources of knowledge.
 
 
 
 
 
44
 
45
  - **Curated by:** Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. All authors are employed by IBM Research.
46
  <!-- - **Funded by [optional]:** There was no associated grant. -->
@@ -64,41 +78,66 @@ Each instance consists of a question, a pair of contradictory passages extracted
64
 
65
  <!-- This section describes suitable use cases for the dataset. -->
66
 
 
 
 
67
  The dataset has been used in the paper to assess LLM performance when models are augmented with retrieved passages containing real-world knowledge conflicts.
 
 
 
 
68
 
69
- The following figure illustrates the evaluation process:
70
 
71
  <p align="center">
72
  <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Evaluation.png?raw=true" width=70%/>
73
  <!-- <img src="./figs/Evaluation.png" width=70%/> -->
74
  </p>
75
 
 
 
 
76
  And the following table shows the performance of five LLMs (Mistral-7b-inst, Mixtral-8x7b-inst, Llama-2-70b-chat, Llama-3-70b-inst, and GPT-4) on the Wikipedia Contradict Benchmark based on rigorous human evaluations on a subset of answers for 55 instances, which corresponds to 1,375 LLM responses in total.
 
 
 
77
 
78
  <p align="center">
79
  <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/table2.png?raw=true" width=70%/>
80
  <!-- <img src="./figs/table2.png" width=70%/> -->
81
  </p>
82
 
83
- Notes: “C”, “PC”, and “IC” stand for “Correct”, “Partially correct”, and “Incorrect”, respectively. “all”, “exp”, and “imp” denote instance types: all instances, instances with explicit conflicts, and instances with implicit conflicts. The numbers are the ratio of responses from each LLM that were assessed as “Correct”, “Partially correct”, or “Incorrect” for each instance type under a given prompt template. Bold numbers highlight the best-performing model for each instance type and prompt template.
 
 
 
88
 
89
  ### Out-of-Scope Use
90
 
91
  <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
92
 
 
 
93
  N/A.
 
 
94
 
95
  ## Dataset Structure
96
 
97
  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
98
 
 
 
 
99
  The Wikipedia Contradict Benchmark is distributed as a CSV file so that researchers can easily use the data. There are 253 instances in total.
 
 
 
 
100
 
101
- The description of each field (when the instance contains two questions) is as follows:
102
 
103
 
104
  - **question_ID:** ID of question.
@@ -118,8 +157,12 @@ The description of each field (when the instance contains two questions) is as f
118
 
119
  ## Usage of the Dataset
120
 
121
- We provide the following starter code. Please refer to the [GitHub repository](https://github.com/) for more information about the functions ```load_testingdata``` and ```generateAnswers_bam_models```.
122
 
 
 
 
 
 
123
 
124
  ```python
125
  from genai import Client, Credentials
@@ -178,7 +221,12 @@ generateAnswers_bam_models(testingUnits)
178
 
179
  <!-- Motivation for the creation of this dataset. -->
180
 
 
 
181
  Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this regard, the motivation of Wikipedia Contradict Benchmark is to comprehensively evaluate LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a dataset widely regarded as a high-quality pre-training resource for most LLMs.
 
 
 
182
 
183
  ### Source Data
184
 
@@ -188,13 +236,24 @@ Retrieval-augmented generation (RAG) has emerged as a promising solution to miti
188
 
189
  <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
190
 
 
 
 
191
  The data was mostly observable as raw text. The raw data was retrieved from Wikipedia articles containing inconsistent, self-contradictory, and contradict-other tags. The first two tags denote contradictory statements within the same article, whereas the third tag highlights instances where the content of one article contradicts that of another article. In total, we collected around 1,200 articles that contain these tags through the Wikipedia maintenance category “Wikipedia articles with content issues”. Given a content inconsistency tag provided by Wikipedia editors, the annotators verified whether the tag is valid by checking the relevant article content, the editor’s comment, as well as the information in the edit history and the article’s talk page if necessary.
 
 
192
 
193
  #### Who are the source data producers?
194
 
195
  <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
196
 
 
 
 
197
  Wikipedia contributors.
 
 
 
198
 
199
  ### Annotations
200
 
@@ -204,44 +263,79 @@ Wikipedia contributors.
204
 
205
  <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
206
 
 
 
207
  The annotation interface was developed using [Label Studio](https://labelstud.io/).
208
-
209
  The annotators were required to slightly modify the original passages to make them stand-alone (decontextualization). Normally, this requires resolving the coreference anaphors or the bridging anaphors in the first sentence (see annotation guidelines). In Wikipedia, oftentimes the antecedents for these anaphors are the article titles themselves.
210
-
211
  For further information, see the annotation guidelines of the paper.
 
 
 
 
212
 
213
  #### Who are the annotators?
214
 
215
  <!-- This section describes the people or systems who created the annotations. -->
216
 
 
 
 
217
  Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi
 
 
 
218
 
219
  #### Personal and Sensitive Information
220
 
221
  <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
222
 
 
 
 
223
  N/A.
 
 
 
224
 
225
  ## Bias, Risks, and Limitations
226
 
227
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
228
 
 
 
229
  Each annotation instance contains at least one question and two possible answers, but some instances may contain more than one question (and the corresponding two possible answers for each question). Some instances may not contain a value for **paragraphA_clean**, **tagDate**, and **tagReason**.
 
 
 
230
 
231
  ### Recommendations
232
 
233
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
234
 
 
 
235
  Our data is downloaded from Wikipedia; as such, it is biased towards the original content and sources. Because human data annotation involves some degree of subjectivity, we created a comprehensive 17-page annotation guidelines document to clarify important cases during the annotation process. The annotators were explicitly instructed not to let their personal feelings about a particular topic influence their annotations. Nevertheless, some degree of intrinsic subjectivity might still have affected the annotators’ decisions.
236
-
237
  Since our dataset requires manual annotation, annotation noise is inevitably introduced.
 
 
 
 
238
 
239
 
240
  ## Citation
241
 
242
  <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
243
 
 
 
 
244
  If this dataset is utilized in your research, kindly cite the following paper:
 
 
 
245
 
246
  **BibTeX:**
247
 
@@ -256,7 +350,12 @@ If this dataset is utilized in your research, kindly cite the following paper:
256
 
257
  **APA:**
258
 
 
 
259
  Hou, Y., Pascale, A., Carnerero-Cano, J., Tchrakian, T., Marinescu, R., Daly, E., Padhi, I., & Sattigeri, P. (2024). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. *arXiv preprint arXiv:2406.13805*.
 
 
 
260
 
261
  <!-- ## Glossary [optional] -->
262
 
@@ -270,8 +369,17 @@ Hou, Y., Pascale, A., Carnerero-Cano, J., Tchrakian, T., Marinescu, R., Daly, E.
270
 
271
  ## Dataset Card Authors
272
 
 
 
273
  Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri.
 
 
 
274
 
275
  ## Dataset Card Contact
276
 
277
- Yufang Hou (yhou@ie.ibm.com), Alessandra Pascale (apascale@ie.ibm.com), Javier Carnerero-Cano (javier.cano@ibm.com), Tigran Tchrakian (tigran@ie.ibm.com), Radu Marinescu (radu.marinescu@ie.ibm.com), Elizabeth Daly (elizabeth.daly@ie.ibm.com), Inkit Padhi (inkpad@ibm.com), and Prasanna Sattigeri (psattig@us.ibm.com).
 
 
 
 
 
24
  <!-- <img src="./figs/Example.png" width=70%/> -->
25
  </p>
26
 
27
+ <div align="left">
28
+ <span style="font-size:16px;">The Wikipedia Contradict Benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess LLM performance when models are augmented with retrieved passages containing real-world knowledge conflicts. The dataset was created specifically for this evaluation task.</span>
29
+ </div>
30
 
31
 
 
 
32
  <!-- Note that, in the dataset viewer, there are 130 valid-tag instances, but each instance can contain more than one question, each with its respective two answers; hence, the total number of questions is 253. -->
33
 
34
  <!-- This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). -->
 
39
 
40
  <!-- Provide a longer summary of what this dataset is. -->
41
 
42
+ <div align="left">
43
+ <span style="font-size:16px;">
44
  Wikipedia contradict benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts.
45
+ <br><br>
46
+ </span>
47
+ </div>
48
+
49
 
50
+ <div align="left">
51
+ <span style="font-size:16px;">
52
  Each instance consists of a question, a pair of contradictory passages extracted from Wikipedia, and two distinct answers, each derived from one of the passages. Each passage pair is annotated by a human annotator who identifies where the conflicting information is and what type of conflict is observed. The annotator then produces a set of questions related to the passages, with different answers reflecting the conflicting sources of knowledge.
53
+ </span>
54
+ </div>
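
For concreteness, the sketch below models one annotated instance as a small Python dataclass. This is purely illustrative: the attribute names are assumptions chosen for readability, not the actual CSV schema, which is documented under Dataset Structure.

```python
# Illustrative sketch of one benchmark instance as described above.
# Attribute names are assumptions; the real CSV fields are listed in
# the "Dataset Structure" section below.
from dataclasses import dataclass

@dataclass
class ContradictInstance:
    question: str
    passage_a: str          # first contradictory Wikipedia passage
    passage_b: str          # second contradictory Wikipedia passage
    answer_from_a: str      # answer supported by passage_a
    answer_from_b: str      # answer supported by passage_b
    conflict_type: str      # e.g. "explicit" or "implicit" conflict

# Placeholder values, only to show the shape of an instance.
example = ContradictInstance(
    question="When was the bridge completed?",
    passage_a="The bridge was completed in 1901.",
    passage_b="Construction of the bridge finished in 1903.",
    answer_from_a="1901",
    answer_from_b="1903",
    conflict_type="explicit",
)
print(example.question, example.answer_from_a, example.answer_from_b)
```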
55
+
56
+
57
+
58
 
59
  - **Curated by:** Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. All authors are employed by IBM Research.
60
  <!-- - **Funded by [optional]:** There was no associated grant. -->
 
78
 
79
  <!-- This section describes suitable use cases for the dataset. -->
80
 
81
+
82
+ <div align="left">
83
+ <span style="font-size:16px;">
84
  The dataset has been used in the paper to assess LLM performance when models are augmented with retrieved passages containing real-world knowledge conflicts.
85
+ <br><br>
86
+ The following figure illustrates the evaluation process:
87
+ </span>
88
+ </div>
89
 
 
90
 
91
  <p align="center">
92
  <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Evaluation.png?raw=true" width=70%/>
93
  <!-- <img src="./figs/Evaluation.png" width=70%/> -->
94
  </p>
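
As a rough, hypothetical illustration of this setup (this is not the prompt template used in the paper), the snippet below assembles a single RAG-style prompt from a question and its two contradictory passages:

```python
# Hypothetical sketch only: combine a question with the two contradictory
# passages of one instance into a single RAG-style prompt for an LLM.

def build_eval_prompt(question: str, passage_a: str, passage_b: str) -> str:
    """Assemble one evaluation prompt from a benchmark instance."""
    return (
        "Answer the question based only on the two passages below.\n\n"
        f"Passage 1: {passage_a}\n\n"
        f"Passage 2: {passage_b}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Usage hint (column names here are assumptions; see "Dataset Structure" below):
# prompt = build_eval_prompt(row["question"], row["paragraphA_clean"], row["paragraphB"])
```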
95
 
96
+
97
+ <div align="left">
98
+ <span style="font-size:16px;">
99
  The following table shows the performance of five LLMs (Mistral-7b-inst, Mixtral-8x7b-inst, Llama-2-70b-chat, Llama-3-70b-inst, and GPT-4) on the Wikipedia Contradict Benchmark, based on rigorous human evaluation of a subset of answers covering 55 instances (1,375 LLM responses in total).
100
+ </span>
101
+ </div>
102
+
103
 
104
  <p align="center">
105
  <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/table2.png?raw=true" width=70%/>
106
  <!-- <img src="./figs/table2.png" width=70%/> -->
107
  </p>
108
 
109
+
110
+ <div align="left">
111
+ <span style="font-size:16px;">
112
+ Notes: “C”, “PC”, and “IC” stand for “Correct”, “Partially correct”, and “Incorrect”, respectively. “all”, “exp”, and “imp” denote instance types: all instances, instances with explicit conflicts, and instances with implicit conflicts. The numbers are the ratio of responses from each LLM that were assessed as “Correct”, “Partially correct”, or “Incorrect” for each instance type under a given prompt template. Bold numbers highlight the best-performing model for each instance type and prompt template.
113
+ </span>
114
+ </div>
115
+
116
+
117
 
118
  ### Out-of-Scope Use
119
 
120
  <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
121
 
122
+ <div align="left">
123
+ <span style="font-size:16px;">
124
  N/A.
125
+ </span>
126
+ </div>
127
 
128
  ## Dataset Structure
129
 
130
  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
131
 
132
+
133
+ <div align="left">
134
+ <span style="font-size:16px;">
135
  The Wikipedia Contradict Benchmark is distributed as a CSV file so that researchers can easily use the data. There are 253 instances in total.
136
+ <br><br>
137
+ The description of each field (when the instance contains two questions) is as follows:
138
+ </span>
139
+ </div>
140
 
 
141
 
142
 
143
  - **question_ID:** ID of question.
 
157
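
As a minimal loading sketch, the CSV can be inspected with pandas; the file name below is a placeholder for wherever the benchmark file is stored locally:

```python
# Minimal loading sketch. "wikipedia_contradict_benchmark.csv" is a placeholder
# for the locally downloaded CSV from this repository.
import pandas as pd

df = pd.read_csv("wikipedia_contradict_benchmark.csv")

print(len(df))           # expected: 253 instances in total
print(list(df.columns))  # the field names described in the list above

# "question_ID" is a documented field; inspect the first instance.
print(df.iloc[0]["question_ID"])
```

Alternatively, the Hugging Face `datasets` library can read the same file through its `csv` loader (`load_dataset("csv", data_files=...)`).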
 
158
  ## Usage of the Dataset
159
 
 
160
 
161
+ <div align="left">
162
+ <span style="font-size:16px;">
163
+ We provide the following starter code. Please refer to the [GitHub repository](https://github.com/) for more information about the functions `load_testingdata` and `generateAnswers_bam_models`.
164
+ </span>
165
+ </div>
166
 
167
  ```python
168
  from genai import Client, Credentials
 
221
 
222
  <!-- Motivation for the creation of this dataset. -->
223
 
224
+ <div align="left">
225
+ <span style="font-size:16px;">
226
  Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this regard, the motivation of Wikipedia Contradict Benchmark is to comprehensively evaluate LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a dataset widely regarded as a high-quality pre-training resource for most LLMs.
227
+ </span>
228
+ </div>
229
+
230
 
231
  ### Source Data
232
 
 
236
 
237
  <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
238
 
239
+
240
+ <div align="left">
241
+ <span style="font-size:16px;">
242
  The raw data consists of text retrieved from Wikipedia articles containing inconsistent, self-contradictory, and contradict-other tags. The first two tags denote contradictory statements within the same article, whereas the third tag highlights instances where the content of one article contradicts that of another article. In total, we collected around 1,200 articles that contain these tags through the Wikipedia maintenance category “Wikipedia articles with content issues”. Given a content inconsistency tag provided by Wikipedia editors, the annotators verified whether the tag was valid by checking the relevant article content, the editor’s comment, and, where necessary, the edit history and the article’s talk page.
243
+ </span>
244
+ </div>
245
 
246
  #### Who are the source data producers?
247
 
248
  <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
249
 
250
+
251
+ <div align="left">
252
+ <span style="font-size:16px;">
253
  Wikipedia contributors.
254
+ </span>
255
+ </div>
256
+
257
 
258
  ### Annotations
259
 
 
263
 
264
  <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
265
 
266
+ <div align="left">
267
+ <span style="font-size:16px;">
268
  The annotation interface was developed using [Label Studio](https://labelstud.io/).
269
+ <br><br>
270
  The annotators were required to slightly modify the original passages to make them stand-alone (decontextualization). Normally, this requires resolving coreference or bridging anaphors in the first sentence (see the annotation guidelines). In Wikipedia, the antecedents of these anaphors are often the article titles themselves.
271
+ <br><br>
272
  For further information, see the annotation guidelines of the paper.
273
+ </span>
274
+ </div>
275
+
276
+
277
 
278
  #### Who are the annotators?
279
 
280
  <!-- This section describes the people or systems who created the annotations. -->
281
 
282
+
283
+ <div align="left">
284
+ <span style="font-size:16px;">
285
  Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi
286
+ </span>
287
+ </div>
288
+
289
 
290
  #### Personal and Sensitive Information
291
 
292
  <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
293
 
294
+
295
+ <div align="left">
296
+ <span style="font-size:16px;">
297
  N/A.
298
+ </span>
299
+ </div>
300
+
301
 
302
  ## Bias, Risks, and Limitations
303
 
304
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
305
 
306
+ <div align="left">
307
+ <span style="font-size:16px;">
308
  Each annotation instance contains at least one question and two possible answers, but some instances may contain more than one question (each with its corresponding two possible answers). Some instances may not contain a value for **paragraphA_clean**, **tagDate**, or **tagReason**.
309
+ </span>
310
+ </div>
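
A small, illustrative way to account for these optional fields when processing the CSV (same placeholder file name as in the loading sketch above):

```python
# Illustrative handling of the optional fields mentioned above; some instances
# have no value for paragraphA_clean, tagDate, or tagReason.
import pandas as pd

df = pd.read_csv("wikipedia_contradict_benchmark.csv")  # placeholder file name

for col in ["paragraphA_clean", "tagDate", "tagReason"]:
    print(col, "missing in", df[col].isna().sum(), "instances")

# Drop instances without a cleaned paragraph A, if an analysis requires it.
subset = df.dropna(subset=["paragraphA_clean"])
```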
311
+
312
 
313
  ### Recommendations
314
 
315
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
316
 
317
+ <div align="left">
318
+ <span style="font-size:16px;">
319
  Our data is downloaded from Wikipedia; as such, it is biased towards the original content and sources. Because human data annotation involves some degree of subjectivity, we created a comprehensive 17-page annotation guidelines document to clarify important cases during the annotation process. The annotators were explicitly instructed not to let their personal feelings about a particular topic influence their annotations. Nevertheless, some degree of intrinsic subjectivity might still have affected the annotators’ decisions.
320
+ <br><br>
321
  Since our dataset requires manual annotation, annotation noise is inevitably introduced.
322
+ </span>
323
+ </div>
324
+
325
+
326
 
327
 
328
  ## Citation
329
 
330
  <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
331
 
332
+
333
+ <div align="left">
334
+ <span style="font-size:16px;">
335
  If you use this dataset in your research, please cite the following paper:
336
+ </span>
337
+ </div>
338
+
339
 
340
  **BibTeX:**
341
 
 
350
 
351
  **APA:**
352
 
353
+ <div align="left">
354
+ <span style="font-size:16px;">
355
  Hou, Y., Pascale, A., Carnerero-Cano, J., Tchrakian, T., Marinescu, R., Daly, E., Padhi, I., & Sattigeri, P. (2024). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. *arXiv preprint arXiv:2406.13805*.
356
+ </span>
357
+ </div>
358
+
359
 
360
  <!-- ## Glossary [optional] -->
361
 
 
369
 
370
  ## Dataset Card Authors
371
 
372
+ <div align="left">
373
+ <span style="font-size:16px;">
374
  Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri.
375
+ </span>
376
+ </div>
377
+
378
 
379
  ## Dataset Card Contact
380
 
381
+ <div align="left">
382
+ <span style="font-size:16px;">
383
+ Yufang Hou (yhou@ie.ibm.com), Alessandra Pascale (apascale@ie.ibm.com), Javier Carnerero-Cano (javier.cano@ibm.com), Tigran Tchrakian (tigran@ie.ibm.com), Radu Marinescu (radu.marinescu@ie.ibm.com), Elizabeth Daly (elizabeth.daly@ie.ibm.com), Inkit Padhi (inkpad@ibm.com), and Prasanna Sattigeri (psattig@us.ibm.com). </span>
384
+ </div>
385
+