Modalities: Text · Formats: parquet · Libraries: Datasets, pandas
Committed by dfurman · Commit c50ada3 · 1 parent: e077e99

Update README.md

Files changed (1):
  1. README.md (+18 -32)
README.md CHANGED
@@ -7,7 +7,7 @@ language_creators:
 - machine-generated
 multilinguality:
 - multilingual
-pretty_name: Fact Completion Benchmark for Text Models
+pretty_name: Polyglot or Not? Fact-Completion Benchmark
 size_categories:
 - 100K<n<1M
 task_categories:
@@ -128,11 +128,21 @@ language:
 
 ### Dataset Summary
 
-This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
+This is the dataset for **Polyglot or Not?: Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models**.
 
-### Supported Tasks and Leaderboards
+### Test Description
 
-[More Information Needed]
+Given a factual association such as *The capital of France is **Paris***, we determine whether a model adequately "knows" this information with the following test:
+
+* Step **1**: prompt the model to predict the likelihood of the token **Paris** following the stem *The capital of France is*.
+
+* Step **2**: prompt the model to predict the average likelihood of a set of false, counterfactual tokens following the same stem.
+
+If the value from **1** is greater than the value from **2**, we conclude that the model adequately recalls that fact. Formally, this is an application of the Contrastive Knowledge Assessment proposed in [[1][bib]].
+
+For every foundation model of interest (like [LLaMA](https://arxiv.org/abs/2302.13971)), we perform this assessment on a set of facts translated into 20 languages. All told, we score foundation models on 303k fact-completions ([results](https://github.com/daniel-furman/capstone#multilingual-fact-completion-results)).
+
+We also score monolingual models (like [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)) on English-only fact-completion ([results](https://github.com/daniel-furman/capstone#english-fact-completion-results)).
 
 ### Languages
 
@@ -209,12 +219,11 @@ This dataset card aims to be a base template for new datasets. It has been gener
 ### Citation Information
 
 ```
-@misc{calibragpt,
-  author = {Shreshta Bhat and Daniel Furman and Tim Schott},
-  title = {CalibraGPT: The Search for (Mis)Information in Large Language Models},
-  year = {2023},
+@misc{polyglot_or_not,
+  author = {Daniel Furman and Tim Schott and Shreshta Bhat},
+  title = {Polyglot or Not?: Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models},
+  year = {2023},
   publisher = {GitHub},
-  journal = {GitHub repository},
   howpublished = {\url{https://github.com/daniel-furman/Capstone}},
 }
 ```
@@ -243,26 +252,3 @@ This dataset card aims to be a base template for new datasets. It has been gener
 }
 ```
 
-```
-@inproceedings{elsahar-etal-2018-rex,
-    title = "{T}-{RE}x: A Large Scale Alignment of Natural Language with Knowledge Base Triples",
-    author = "Elsahar, Hady and
-      Vougiouklis, Pavlos and
-      Remaci, Arslen and
-      Gravier, Christophe and
-      Hare, Jonathon and
-      Laforest, Frederique and
-      Simperl, Elena",
-    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
-    month = may,
-    year = "2018",
-    address = "Miyazaki, Japan",
-    publisher = "European Language Resources Association (ELRA)",
-    url = "https://aclanthology.org/L18-1544",
-}
-
-```
-
-### Contributions
-
-[More Information Needed]
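
The contrastive test described in the new "Test Description" section can be sketched in a few lines of Python. This is a minimal illustration only, not the repository's evaluation code: the model name (`gpt2`), the counterfactual objects, and the helper `completion_log_likelihood` are placeholder assumptions, and likelihoods are compared as summed log-probabilities.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal language model on the Hub works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def completion_log_likelihood(stem: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to `completion` after `stem`."""
    stem_ids = tokenizer(stem, return_tensors="pt").input_ids
    full_ids = tokenizer(stem + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    score = 0.0
    # Each completion token at position `pos` is predicted from position `pos - 1`.
    for pos in range(stem_ids.shape[1], full_ids.shape[1]):
        score += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return score


stem = "The capital of France is"
true_object = " Paris"
counterfactuals = [" Rome", " Madrid", " Berlin"]  # placeholder false objects

true_score = completion_log_likelihood(stem, true_object)
mean_false = sum(completion_log_likelihood(stem, c) for c in counterfactuals) / len(counterfactuals)

# The fact counts as "recalled" when the true completion outscores the
# average counterfactual completion (Step 1 vs. Step 2 above).
print("fact recalled:", true_score > mean_false)
```

When the true and counterfactual objects tokenize to different numbers of tokens, length-normalizing the scores may be preferable; the sketch keeps the raw sums for simplicity.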