---
license: mit
tags:
- natural-language-understanding
language_creators:
- expert-generated
- machine-generated
language:
- en
multilinguality:
- multilingual
pretty_name: Fact Completion Benchmark for Text Models
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
- text2text-generation
task_ids:
- fact-checking
---
# Dataset Card for Fact_Completion

## Dataset Description
- Homepage: https://bit.ly/ischool-berkeley-capstone
- Repository: https://github.com/daniel-furman/Capstone
- Paper:
- Leaderboard:
- Point of Contact: Daniel Furman (daniel_furman@berkeley.edu)
### Dataset Summary

Fact_Completion is a fact-completion benchmark for probing the factual knowledge of text models. Its probes are expert- and machine-generated, the benchmark is multilingual, and it contains between 100K and 1M examples intended for text-generation, fill-mask, and text2text-generation style fact-checking evaluations. The benchmark was assembled as part of CalibraGPT, "The Search for (Mis)Information in Large Language Models" (see the citation information below).
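For convenience, the snippet below sketches one way to load the benchmark with the Hugging Face `datasets` library. The repository ID used here is an assumption inferred from the repository and contact information above; the actual Hub path may differ.

```python
from datasets import load_dataset

# NOTE: the Hub repository ID below is an assumption, not a confirmed path.
dataset = load_dataset("daniel-furman/Fact_Completion")

# Inspect the available splits and their features.
print(dataset)
```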
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages

Per the metadata above, the benchmark is tagged as multilingual, with English (`en`) as the listed language.
## Dataset Structure

### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation

### Curation Rationale
[More Information Needed]
### Source Data

#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations

#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data

### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information

### Dataset Curators

Shreshta Bhat, Daniel Furman, and Tim Schott (see the citation information below). Point of contact: Daniel Furman (daniel_furman@berkeley.edu).
### Licensing Information

This dataset is released under the MIT License, per the `license: mit` tag in the metadata above.
### Citation Information

```
@misc{calibragpt,
  author = {Shreshta Bhat and Daniel Furman and Tim Schott},
  title = {CalibraGPT: The Search for (Mis)Information in Large Language Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/daniel-furman/Capstone}},
}

@misc{dong2022calibrating,
  doi = {10.48550/arXiv.2210.03329},
  title = {Calibrating Factual Knowledge in Pretrained Language Models},
  author = {Qingxiu Dong and Damai Dai and Yifan Song and Jingjing Xu and Zhifang Sui and Lei Li},
  year = {2022},
  eprint = {2210.03329},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}

@misc{meng2022massediting,
  doi = {10.48550/arXiv.2210.07229},
  title = {Mass-Editing Memory in a Transformer},
  author = {Kevin Meng and Arnab Sen Sharma and Alex Andonian and Yonatan Belinkov and David Bau},
  year = {2022},
  eprint = {2210.07229},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```
### Contributions
[More Information Needed]