|
--- |
|
license: mit |
|
dataset_info: |
|
features: |
|
- name: clean |
|
dtype: string |
|
- name: corrupted |
|
dtype: string |
|
- name: year |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 1234280 |
|
num_examples: 10000 |
|
download_size: 204638 |
|
dataset_size: 1234280 |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: data/train-* |
|
language: |
|
- en |
|
--- |
|
|
|
# Dataset Card for the Greater-Than Task Dataset
|
|
|
|
|
|
This dataset contains examples of the greater-than task, used to study the greater-than circuit in GPT-2 small.
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
|
|
|
- **Curated by:** Michael Hanna |
|
- **Language(s) (NLP):** English |
|
- **License:** MIT |
|
|
|
### Dataset Sources |
|
|
|
|
|
|
- **Repository:** [https://github.com/hannamw/gpt2-greater-than](https://github.com/hannamw/gpt2-greater-than) |
|
- **Paper:** [How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model](https://openreview.net/forum?id=p4PckNQR8k) |
|
|
|
## Uses |
|
|
|
This dataset is intended to be a model-agnostic version of the greater-than task. |
|
The original task consisted of examples like `The war lasted from the year 1742 to the year 17`, relying on the fact that GPT-2 small tokenizes 4-digit years into two two-digit tokens.

One would then compute model performance as the probability assigned to years greater than 42, minus the probability assigned to years less than or equal to 42.
|
|
|
New models now tokenize years differently; Llama tokenizes 1742 as `[174][2]`, and Gemma 2 tokenizes it as `[1][7][4][2]`. |
|
You can still compute the probability assigned to good and bad decades; for example: |
|
- For Llama 3, if `y1` is the token at the position of `[174]` and `y2` is the token at the position of `[2]`, you want to compute p(y1 > 174) + p(y1 = 174) * p(y2 > 2) - (p(y1 < 174) + p(y1 = 174) * p(y2 <= 2))

- For Gemma 2, if `y1` is the token at the position of `[4]` and `y2` is the token at the position of `[2]`, you want to compute p(y1 > 4) + p(y1 = 4) * p(y2 > 2) - (p(y1 < 4) + p(y1 = 4) * p(y2 <= 2))
|
|
|
For these purposes, it's easier to have the full string, i.e. `The war lasted from the year 1742 to the year 1743`, rather than the shortened version `The war lasted from the year 1742 to the year 17`. |
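As a concrete illustration, here is a minimal sketch (not the paper's reference code) of this probability-difference metric for a Llama-3-style tokenization, where the end year splits into a `[174]`-style token followed by a final-digit token. The model name, the token-id lookups, and the handling of leading spaces are assumptions; adapt them to the tokenizer at hand.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # illustrative choice of model
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def prob_diff(clean_sentence: str, year: int) -> float:
    """Probability mass on end years greater than `year`, minus the mass on
    end years less than or equal to it, assuming the end year is tokenized as
    [17X][Y] (century-plus-decade token, then a final-digit token)."""
    century = year // 100          # e.g. 17 for 1742
    decade = (year // 10) % 10     # e.g. 4
    digit_true = year % 10         # e.g. 2

    ids = tok(clean_sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        probs = torch.softmax(model(ids).logits[0], dim=-1)

    # The clean string ends with the full end year, so its last two tokens are
    # y1 (e.g. "174") and y2 (e.g. "3"). logits[i] predicts token i + 1, so the
    # distribution over y1 sits at index -3 and the one over y2 at index -2;
    # the latter is already conditioned on the true y1, which is why having the
    # full string is convenient.
    p_y1, p_y2 = probs[-3], probs[-2]

    # Candidate token ids for "170".."179" and "0".."9". Whether these tokens
    # carry a leading space is tokenizer-dependent; this is an assumption.
    first_ids = [tok.encode(f"{century}{d}", add_special_tokens=False)[0] for d in range(10)]
    digit_ids = [tok.encode(str(d), add_special_tokens=False)[0] for d in range(10)]

    p_first_gt = sum(p_y1[first_ids[d]] for d in range(decade + 1, 10))
    p_first_lt = sum(p_y1[first_ids[d]] for d in range(decade))
    p_first_eq = p_y1[first_ids[decade]]
    p_digit_gt = sum(p_y2[digit_ids[d]] for d in range(digit_true + 1, 10))
    p_digit_le = sum(p_y2[digit_ids[d]] for d in range(digit_true + 1))

    good = p_first_gt + p_first_eq * p_digit_gt
    bad = p_first_lt + p_first_eq * p_digit_le
    return float(good - bad)

# Hypothetical usage:
# prob_diff("The war lasted from the year 1742 to the year 1743", 1742)
```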
|
|
|
## Dataset Structure |
|
|
|
`clean`: The original greater-than example sentences.
|
|
|
`corrupted`: The corrupted version of the corresponding sentence in `clean`, with the start-year decade set to `01`. |
|
|
|
`year`: The start year from the corresponding sentence in `clean`. |
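For reference, a minimal example of loading the data and inspecting the three fields with the `datasets` library; the repository id below is a placeholder for wherever this dataset is hosted.

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub path.
ds = load_dataset("<namespace>/<dataset-name>", split="train")

example = ds[0]
print(example["clean"])      # e.g. "The war lasted from the year 1742 to the year 1743"
print(example["corrupted"])  # the same sentence with the start-year decade set to 01
print(example["year"])       # the start year as a string, e.g. "1742"
```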
|
|
|
## Dataset Creation |
|
|
|
### Source Data |
|
|
|
As described in the paper, this dataset was automatically generated using the template `The [event] lasted from the year [XX][YY] to the year [XX]`.
|
Michael Hanna and Ollie Liu developed the list of nouns used as `[event]`. |
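For illustration only, a toy sketch of this kind of template-based generation; it is not the authors' script, the noun list is a small made-up subset, and the year ranges and end-year choice are assumptions rather than the dataset's actual parameters.

```python
import random

NOUNS = ["war", "expedition", "dynasty"]  # illustrative subset; the real noun list is longer
TEMPLATE = "The {event} lasted from the year {start} to the year {end}"

def make_example(rng: random.Random) -> dict:
    event = rng.choice(NOUNS)
    century = rng.randint(11, 17)     # assumed century range, not the dataset's actual one
    decade_year = rng.randint(2, 98)  # assumed two-digit part; avoids 00/01 and 99 edge cases
    start = century * 100 + decade_year
    end = start + 1                   # assumption: end year is start + 1, as in the example above
    return {
        "clean": TEMPLATE.format(event=event, start=start, end=end),
        "corrupted": TEMPLATE.format(event=event, start=century * 100 + 1, end=end),
        "year": str(start),
    }

print(make_example(random.Random(0)))
```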
|
|
|
## Citation
|
|
|
|
[How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model](https://openreview.net/forum?id=p4PckNQR8k) |
|
|
|
**BibTeX:** |
|
``` |
|
@inproceedings{ |
|
hanna2023how, |
|
title={How does {GPT}-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model}, |
|
author={Michael Hanna and Ollie Liu and Alexandre Variengien}, |
|
booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, |
|
year={2023}, |
|
url={https://openreview.net/forum?id=p4PckNQR8k} |
|
} |
|
``` |
|
|
|
## Dataset Card Authors |
|
|
|
Michael Hanna |
|
|
|
## Dataset Card Contact |
|
|
|
m.w.hanna@uva.nl |