---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: Relation_ID
    dtype: int64
  - name: Relation Name
    dtype: string
  - name: Subject
    dtype: string
  - name: Object
    dtype: string
  - name: Multiple Choices
    dtype: string
  - name: Title
    dtype: string
  - name: Text
    dtype: string
  splits:
  - name: train
    num_bytes: 13358248
    num_examples: 5000
  - name: test
    num_bytes: 53621776
    num_examples: 20000
  download_size: 30826462
  dataset_size: 66980024
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---


# Dataset Card for T-Rex-MC

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Structure](#dataset-structure)
    - [Relation Map](#relation-map)
    - [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Creation of Multiple Choices for T-REx-MC](#creation-of-multiple-choices-for-t-rex-mc)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

<!--  - [Contributions](#contributions) -->

## Dataset Description

<!-- - **Homepage:** -->
- **Repository:**
- **Paper:**  https://arxiv.org/abs/2404.12957 
- **Point of Contact:**

### Dataset Summary

The T-Rex-MC dataset is designed to assess models’ factual knowledge. It is derived from the T-REx dataset, focusing on relations with at least 500 samples connected to 100 or more unique objects. For each sample, a multiple-choice list is generated; facts with multiple correct objects are handled so that no additional correct answer slips into the distractor choices. This filtering process yields 50 distinct relations across various categories, such as birth dates, directorial roles, parental relationships, and educational backgrounds. The final T-Rex-MC dataset includes 5,000 training facts and 20,000 test facts.
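As a quick sanity check, the two splits can be loaded with the `datasets` library. The sketch below is illustrative only; the repository id is a placeholder to be replaced with this dataset's actual Hugging Face id.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hugging Face id of this dataset.
DATASET_REPO = "<org>/<t-rex-mc>"

dataset = load_dataset(DATASET_REPO)              # reads the parquet data files
train, test = dataset["train"], dataset["test"]
print(len(train), len(test))                      # expected: 5000, 20000

# Columns follow the features declared in this card's metadata.
example = train[0]
print(example["Relation Name"], "|", example["Subject"], "->", example["Object"])
```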

## Dataset Structure

### Relation Map

The relation map in T-Rex-MC links each relation number to a specific Wikipedia property identifier and its descriptive name. This mapping is stored in the JSON file located at /dataset/trex_MC/relationID_names.json. Each entry in this file maps a relation ID to a relation name (e.g., “country,” “member of sports team”), a human-readable description of the Wikipedia property that explains the type of relationship it represents.

This mapping is essential for interpreting the types of relationships tested in T-Rex-MC and facilitates understanding of each fact’s specific factual relationship.
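A minimal sketch for inspecting the relation map, assuming the JSON file maps relation-ID strings to relation names as described above (the exact key/value layout should be confirmed against the file itself):

```python
import json

# Path as referenced in this card; adjust to where the file lives in your checkout.
with open("dataset/trex_MC/relationID_names.json", encoding="utf-8") as f:
    relation_map = json.load(f)   # assumed layout: {"<relation id>": "<relation name>", ...}

for relation_id, relation_name in sorted(relation_map.items())[:5]:
    print(relation_id, "->", relation_name)
```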

### Data Instances

Each instance in the T-Rex-MC dataset represents a single factual statement and is structured as follows:

1. Relation ID: The numeric identifier of the relation, which can be resolved to a Wikipedia property via the relation map above.
2. Relation Name: The human-readable name of the relation (e.g., “country”).
3. Subject: The main entity the fact is about. For example, if the fact concerns a specific person’s birth date, the subject is that person’s name.
4. Object: The correct answer for the factual relation. This is the specific value tied to the subject in the context of the relation, such as the exact birth date or the correct country of origin.
5. Multiple Choices: A list of 99 distractors that serve as plausible but incorrect choices, designed to challenge the model’s knowledge.
6. Title: The title of the subject’s Wikipedia page, which provides a direct reference to the fact’s source.
7. Text: The Wikipedia abstract or introductory paragraph for the subject, giving additional context and helping to clarify the nature of the relation.

Each data instance thus provides a structured test of a model’s ability to select the correct fact from a curated list of plausible alternatives, covering various factual relations between entities in Wikipedia.
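The sketch below shows one way to turn an instance into a multiple-choice query. It assumes the `Multiple Choices` column is a delimiter-separated string (its declared dtype is `string`); the separator used here is a guess and should be checked against the raw field, as should whether the correct object already appears among the stored choices.

```python
def build_prompt(example: dict, separator: str = "; ") -> str:
    """Format one T-Rex-MC instance as a multiple-choice query (illustrative only)."""
    choices = example["Multiple Choices"].split(separator)
    # Note: if the field stores only distractors, add example["Object"] to the options.
    lines = [
        f"Fact: {example['Subject']} -- {example['Relation Name']} -- ?",
        f"Context: {example['Text'][:300]}",
        "Options:",
    ]
    lines += [f"- {choice.strip()}" for choice in choices]
    return "\n".join(lines)

# Usage, with `train` from the loading sketch above:
# print(build_prompt(train[0]))
```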


## Dataset Creation

### Creation of Multiple Choices for T-REx-MC
The T-REx-MC dataset is built from T-REx, a large-scale alignment dataset linking Wikipedia abstracts with factual triples. For our experiments, we used the processed T-REx version on Hugging Face. Relations with over 500 facts and at least 100 unique objects were selected to ensure a pool of feasible multiple-choice options per fact. We manually filtered out ambiguous cases where multiple correct answers exist (e.g., “America,” “USA”) and standardized partial matches (e.g., “French” vs. “French language”) to maintain consistency.

Our curated T-REx-MC dataset includes 50 relations, each represented as a tuple of `<subject, relation, multiple choices>`. Each multiple-choice list includes the correct answer and 99 distractors. A detailed list of these 50 relations is available in Table 4 of the accompanying paper.
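The following sketch is written from the description above rather than taken from the authors' code; it shows the general shape of the relation selection and distractor sampling (the thresholds of 500 facts and 100 unique objects and the 99-distractor count are as stated, everything else is an assumption):

```python
import random
from collections import defaultdict

def build_multiple_choices(facts, min_facts=500, min_objects=100, n_distractors=99, seed=0):
    """Illustrative construction; `facts` is an iterable of (subject, relation, object) triples."""
    rng = random.Random(seed)
    by_relation = defaultdict(list)
    for subject, relation, obj in facts:
        by_relation[relation].append((subject, obj))

    dataset = []
    for relation, pairs in by_relation.items():
        objects = {obj for _, obj in pairs}
        # Keep only relations with enough facts and enough unique objects
        # to support a 100-way choice list.
        if len(pairs) < min_facts or len(objects) < min_objects:
            continue
        for subject, obj in pairs:
            distractors = rng.sample(sorted(objects - {obj}), n_distractors)
            choices = [obj] + distractors          # correct answer + 99 distractors
            rng.shuffle(choices)
            dataset.append((subject, relation, choices))
    return dataset
```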

### Personal and Sensitive Information
The T-Rex-MC dataset does not contain any sensitive personal information. The facts within the dataset are derived from Wikipedia, which is publicly available, and they pertain to general knowledge rather than private or sensitive data. The content of the dataset includes:

- Data about general factual relationships (e.g., countries, occupations, affiliations)
- Data about places, cultural items, and general knowledge topics

## Considerations for Using the Data


### Licensing Information

While we release this dataset under the Apache 2.0 license, please consider citing the accompanying paper if you use this dataset or any derivative of it.

### Citation Information

**BibTeX:**
```
@article{wu2024towards,
  title={Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction},
  author={Wu, Qinyuan and Khan, Mohammad Aflah and Das, Soumi and Nanda, Vedant and Ghosh, Bishwamittra and Kolling, Camila and Speicher, Till and Bindschaedler, Laurent and Gummadi, Krishna P and Terzi, Evimaria},
  journal={arXiv preprint arXiv:2404.12957},
  year={2024}
}
```

<!--- ### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
--->