---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: 'Unnamed: 0'
    dtype: int64
  - name: Head
    dtype: string
  - name: Fact
    dtype: string
  - name: Alternate Facts
    dtype: string
  - name: Title
    dtype: string
  - name: Text
    dtype: string
  splits:
  - name: train
    num_bytes: 13271648
    num_examples: 5000
  - name: test
    num_bytes: 53275376
    num_examples: 20000
  download_size: 30891771
  dataset_size: 66547024
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Dataset Card for T-Rex-MC
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Relation Map](#relation-map)
  - [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Creation of Multiple Choices for T-REx-MC](#creation-of-multiple-choices-for-t-rex-mc)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
<!-- - [Contributions](#contributions) -->
## Dataset Description
<!-- - **Homepage:** -->
- **Repository:**
- **Paper:** https://arxiv.org/abs/2404.12957
- **Point of Contact:**
### Dataset Summary
The T-Rex-MC dataset is designed to assess models’ factual knowledge. It is derived from the T-REx dataset, keeping only relations with at least 500 samples that connect to 100 or more unique objects. For each fact, a multiple-choice list is generated; for facts with multiple valid objects, the choice list is filtered so that it never contains an alternative correct answer among the distractors. This process yields 50 distinct relations across various categories, such as birth dates, directorial roles, parental relationships, and educational backgrounds. The final T-Rex-MC dataset includes 5,000 training facts and 20,000 test facts.
## Dataset Structure
### Relation Map
The relation map in T-Rex-MC links each relation number to a specific property identifier and its descriptive name. This mapping is stored in the JSON file located at /dataset/trex_MC/all_relationID_names_nomask.json. Each entry in this file maps a relation ID to a tuple containing:
1. Property ID (e.g., P17, P54): an identifier following Wikidata’s property codes. Each property ID points to a standardized type of information.
2. Descriptive Name (e.g., “country,” “member of sports team”): a human-readable description of the property, explaining the type of relationship it represents.
Examples from the mapping file:
- `"1": ["P17", "country"]`: relation ID 1 corresponds to property P17, which represents the “country” associated with the subject.
- `"2": ["P54", "member of sports team"]`: relation ID 2 links to P54, indicating the “member of sports team” relationship.
This mapping is essential for interpreting the types of relationships tested in T-Rex-MC and for understanding the specific factual relation behind each fact.
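Reading the mapping file can be sketched as follows. The JSON contents shown here are a small illustrative sample that mirrors the structure described above; the real file at the path mentioned contains all 50 relations:

```python
import json

# Illustrative sample mirroring the structure of
# all_relationID_names_nomask.json: each relation ID maps to
# [property ID, descriptive name].
sample = '{"1": ["P17", "country"], "2": ["P54", "member of sports team"]}'

relation_map = json.loads(sample)

# Look up the property ID and human-readable name for relation 2.
prop_id, name = relation_map["2"]
print(prop_id, name)  # P54 member of sports team
```

For the real file, replace `json.loads(sample)` with `json.load(open(path))` pointed at the mapping file.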
### Data Instances
Each instance in the T-Rex-MC dataset represents a single factual statement and is structured as follows:
1. Subject: the main entity the factual relation is about. For example, if the fact concerns a specific person’s birth date, the subject is that person’s name.
2. Object: the correct answer for the factual relation. This is the specific value tied to the subject in the context of the relation, such as the exact birth date or the correct country of origin.
3. Multiple Choices: a list of 100 options, comprising the correct answer and 99 distractors that serve as plausible but incorrect choices, designed to challenge the model’s knowledge.
4. Title: the title of the subject’s Wikipedia page, which provides a direct reference to the fact’s source.
5. Text: the Wikipedia abstract or introductory paragraph for the subject, giving additional context and helping to clarify the nature of the relation.
Each data instance thus provides a structured test of a model’s ability to select the correct fact from a curated list of plausible alternatives, covering various factual relations between entities in Wikipedia.
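As an illustration, a single instance can be rendered as a multiple-choice query. This sketch is hypothetical: the instance values are invented, the field names follow the schema above, and it assumes the distractors are serialized in the string-typed Alternate Facts column as a newline-separated list (the actual delimiter in the released files may differ):

```python
# Hypothetical instance following the schema described above.
instance = {
    "Head": "Lionel Messi",
    "Fact": "Argentina",
    "Alternate Facts": "Brazil\nSpain\nItaly",  # serialized distractors (delimiter assumed)
    "Title": "Lionel Messi",
    "Text": "Lionel Messi is an Argentine footballer ...",
}

# Split the serialized distractors and combine them with the correct object
# to obtain the full multiple-choice list.
distractors = instance["Alternate Facts"].split("\n")
choices = [instance["Fact"]] + distractors

# Render a simple multiple-choice prompt for a model.
prompt = f"Subject: {instance['Head']}\nChoices: {', '.join(choices)}\nAnswer:"
print(prompt)
```

In the released dataset each choice list holds 100 options rather than the four shown here.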
## Dataset Creation
### Creation of Multiple Choices for T-REx-MC
The T-REx-MC dataset is built from T-REx, a large-scale alignment dataset linking Wikipedia abstracts with factual triples. For our experiments, we used the processed T-REx version on Hugging Face. Relations with over 500 facts and at least 100 unique objects were selected to ensure a pool of feasible multiple-choice options per fact. We manually filtered out ambiguous cases where multiple correct answers exist (e.g., “America,” “USA”) and standardized partial matches (e.g., “French” vs. “French language”) to maintain consistency.
Our curated T-REx-MC dataset includes 50 relations, each represented as a tuple of `<subject, relation, multiple choices>`. Each multiple-choice list includes the correct answer and 99 distractors. A detailed list of these 50 relations is available in Table 4.
Attributes in T-REx-MC for each relation:
- Subject: the entity at the center of each fact.
- Object: the correct answer for each fact.
- Multiple Choices: a list of 100 options, including the correct answer and 99 alternatives.
- Title: the Wikipedia title associated with each fact.
### Personal and Sensitive Information
The T-Rex-MC dataset does not contain any sensitive personal information. The facts within the dataset are derived from Wikipedia, which is publicly available, and they pertain to general knowledge rather than private or sensitive data. The content of the dataset includes:
- Data about general factual relationships (e.g., countries, occupations, affiliations)
- Data about places, cultural items, and general knowledge topics
## Considerations for Using the Data
### Licensing Information
This dataset is released under the Apache 2.0 license. Please consider citing the accompanying paper if you use this dataset or any derivative of it.
### Citation Information
**BibTeX:**
```
@article{wu2024towards,
  title={Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction},
  author={Wu, Qinyuan and Khan, Mohammad Aflah and Das, Soumi and Nanda, Vedant and Ghosh, Bishwamittra and Kolling, Camila and Speicher, Till and Bindschaedler, Laurent and Gummadi, Krishna P and Terzi, Evimaria},
  journal={arXiv preprint arXiv:2404.12957},
  year={2024}
}
```
<!--- ### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
--->