Dataset Card for BookSORT
Dataset Description
- Repository:
- Paper: https://arxiv.org/abs/2410.08133
- Point of Contact:
Dataset Summary
BookSORT is a dataset created from books for evaluation on the Sequence Order Recall Task (SORT), which assesses a model's ability to use temporal context in memory. SORT evaluation samples can be constructed from any sequential data. For BookSORT, the sequences are derived from text from 9 English language books that were released to the public domain between 2022 and 2024 via Project Gutenberg.
SORT presents models with two segments of data from a continuous sequence, like text, and asks the model to judge the order in which they appeared. In one SORT condition, the relevant text excerpt is provided as additional context to the model to help it perform the task. This BookSORT dataset varies text excerpt lengths, segment pair lengths, and distances between segment pairs.
Dataset Structure
Data Instances
A typical sample in BookSORT consists of a text excerpt and two text segments. The excerpt is a continuous sequence of text from a book, and the segments are two non-overlapping parts of the excerpt.
{
"book_idx": 69087,
"excerpt_idx": 0,
"segment_idx": 0,
"excerpt_text": "*Never anything out of the ordinary.” “Good, w...",
"excerpt_length": 250,
"segment_1": "*I asked apprehensively. “I want you to introd...",
"segment_2": "*You understand the kind of thing I mean. And ...",
"segment_length": 20,
"seg1_pos": 72,
"seg2_pos": 104,
"present_seg1_first": 0,
"distance_bin": 62,
"excerpt_pos": 0.396943,
"book_title": "The Murder Of Roger Ackroyd",
"num_words": 69720
}
Data Fields
Each sample contains the fields below. The fields provided to the model are `excerpt_text`, `segment_1`, `segment_2`, and `book_title`. The field `present_seg1_first` should be used to decide which segment appears first in the prompt to the model (a sketch of this follows the field list). All remaining fields are useful for analyzing the results.
- `book_idx`: Book ID (see dataset sources)
- `excerpt_idx`: Unique ID representing an excerpt from the book above
- `segment_idx`: Unique ID representing the pair of segments
- `excerpt_text`: Excerpt (text) from the book
- `excerpt_length`: Excerpt length in words
- `segment_1`: First segment (text) from within the excerpt
- `segment_2`: Second segment (text) from within the excerpt
- `segment_length`: Segment length in words
- `seg1_pos`: Position (in words) of segment 1 within the excerpt
- `seg2_pos`: Position (in words) of segment 2 within the excerpt
- `present_seg1_first`: 1 if segment 1 should be presented first in the prompt, 0 if it should be presented second (boolean)
- `distance_bin`: Distance bin for this sample
- `excerpt_pos`: Relative position of the excerpt in the book
- `book_title`: Title of the book
- `num_words`: Number of words in the book
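A minimal sketch of how these fields can be assembled into a SORT prompt is shown below. The prompt wording and the `build_sort_prompt` helper are our own illustrative assumptions, not the exact prompt used in the accompanying paper; only the role of `present_seg1_first` in setting the presentation order follows the field definitions above.

```python
# Minimal sketch (assumed prompt wording, not the paper's exact prompt):
# turn one BookSORT sample into an A/B ordering question.

def build_sort_prompt(sample: dict) -> tuple[str, str]:
    """Return (prompt, correct_answer) for a single BookSORT sample."""
    # present_seg1_first controls which segment is shown as option A.
    if sample["present_seg1_first"] == 1:
        seg_a, seg_b = sample["segment_1"], sample["segment_2"]
        correct = "A"  # segment_1 occurs earlier in the excerpt
    else:
        seg_a, seg_b = sample["segment_2"], sample["segment_1"]
        correct = "B"  # the earlier segment (segment_1) is shown as option B
    prompt = (
        f"Here is an excerpt from the book '{sample['book_title']}':\n\n"
        f"{sample['excerpt_text']}\n\n"
        "Which of the following two segments appears first in the excerpt?\n"
        f"A) {seg_a}\n"
        f"B) {seg_b}\n"
        "Answer with A or B."
    )
    return prompt, correct
```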
Data Splits
We created data samples for 5 different excerpt lengths (LE = {250, 1000, 2500, 10000, 20000} words) and 2 segment lengths (LS = {20, 50} words). For each unique combination of LE and LS, we sampled 110 excerpts from each included book. Most of the dataset uses all 9 books; one book is excluded from the extended excerpt lengths because it is shorter than 10,000 words.
Within each unique book excerpt, we sampled segment pairs with varying distances between them. 110 segment pairs were sampled for 4 different distance bins, yielding 440 SORT trials per book, excerpt length, and segment length. Since distance is bounded by the excerpt length, we generally used LE to scale the bin edges.
Condition | Minimum | Bin0 | Bin1 | Bin2 | Bin3 |
---|---|---|---|---|---|
Standard Context Length | LS | LE / 4 | LE / 3 | LE / 2 | LE / 0.8 |
Extended Context Length | LS | 1000 | LE / 4 | LE / 2 | LE / 0.8 |
Above: The definition of the segment distance bins, which determine how far apart the text segments are from one another. Distance is measured from the beginning of the first segment to the beginning of the second segment.
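For illustration, the bin edges in the table can be written out as a small helper. This is a sketch of the table only, under our own naming; it is not the authors' sampling code.

```python
# Sketch: distance-bin edges (in words) as defined in the table above.

def distance_bin_edges(le: int, ls: int, extended: bool = False) -> list[float]:
    """Return [minimum, bin0, bin1, bin2, bin3] upper distance edges in words."""
    if extended:
        # Extended context lengths (LE of 10,000 or 20,000 words)
        return [ls, 1000, le / 4, le / 2, le / 0.8]
    # Standard context lengths (LE up to 2,500 words)
    return [ls, le / 4, le / 3, le / 2, le / 0.8]

print(distance_bin_edges(le=250, ls=20))  # [20, 62.5, 83.33..., 125.0, 312.5]
```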
We only evaluated the Sequence Order Recall Task on 100 segment pairs in each combination of book, LE, LS, and distance bin. The remaining 10 pairs are reserved for other uses (e.g. selecting which prompt format produces the best SORT results).
Condition | Configuration | validation | test |
---|---|---|---|
LS=20, LE=250 | excerpt-250-segment-20 | 360 | 3600 |
LS=50, LE=250 | excerpt-250-segment-50 | 360 | 3600 |
LS=20, LE=1000 | excerpt-1000-segment-20 | 360 | 3600 |
LS=50, LE=1000 | excerpt-1000-segment-50 | 360 | 3600 |
LS=20, LE=2500 | excerpt-2500-segment-20 | 360 | 3600 |
LS=50, LE=2500 | excerpt-2500-segment-50 | 360 | 3600 |
LS=20, LE=10000 | excerpt-10000-segment-20 | 320 | 3200 |
LS=50, LE=10000 | excerpt-10000-segment-50 | 320 | 3200 |
LS=20, LE=20000 | excerpt-20000-segment-20 | 320 | 3200 |
LS=50, LE=20000 | excerpt-20000-segment-50 | 320 | 3200 |
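If the dataset is accessed through the Hugging Face `datasets` library, one of the configurations in the table above can be loaded as sketched below. The repository id is a placeholder assumption; substitute the actual id of this dataset on the Hub.

```python
# Sketch of loading one BookSORT configuration with the `datasets` library.
from datasets import load_dataset

booksort = load_dataset(
    "ORG_NAME/booksort",        # placeholder repository id (assumption)
    "excerpt-250-segment-20",   # configuration name from the table above
)
print(booksort)                           # expected: 'validation' and 'test' splits
print(booksort["test"][0]["book_title"])
```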
Dataset Creation
Curation Rationale
To evaluate models on the Sequence Order Recall Task (SORT), we extracted text excerpts E and pairs of text segments S contained within those excerpts. As detailed in the accompanying paper, BookSORT varies the length of the text excerpts LE, the length of the segments LS, and the distance between the segments DS. All excerpts and segments begin at a sentence boundary, and lengths and distances are measured in words.
Since we evaluated LLMs with varying maximum context windows, we constructed a dataset for fairly standard context length limits (providing text excerpts up to 2500 words to fit within 4096 tokens) and for extended context length limits (providing 10K-20K word excerpts).
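The sketch below illustrates excerpt sampling at sentence boundaries under our own assumptions (a precomputed list of sentence-start word indices and a simple uniform draw); it is not the authors' sampling code.

```python
# Illustrative sketch: draw one excerpt of `le` words from a book's word array,
# starting at a sentence boundary. `sentence_starts` is assumed precomputed.
import random

def sample_excerpt(words: list[str], sentence_starts: list[int], le: int,
                   rng: random.Random) -> tuple[int, str]:
    """Return (start_word_index, excerpt_text) for an excerpt of `le` words."""
    # Keep only sentence starts that leave room for a full-length excerpt.
    eligible = [i for i in sentence_starts if i + le <= len(words)]
    start = rng.choice(eligible)
    return start, " ".join(words[start:start + le])
```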
Source Data
The dataset is entirely constructed from English language books in the public domain in the United States, shared via Project Gutenberg. We manually selected the first title for use in a companion study with human participants (see publication link). For the remaining 8 books, we first downloaded Project Gutenberg metadata. We then filtered this metadata to books released on Project Gutenberg in 2024 and originally published in 1928 (past the 95-year mark at which copyright expires). Titles were manually selected to maximize diversity over the Library of Congress Classification (LoCC) and to span a range of subject matter and book length. These filtered titles were then examined to check that they contained a continuous narrative across the entire book (i.e. not collections of stories or poems) and were therefore appropriate for the SORT evaluation.
ID | Title | Author | Word count | Release date | Publication year | LoCC | Subjects |
---|---|---|---|---|---|---|---|
69087 | The Murder of Roger Ackroyd | Christie, Agatha | 69,720 | 10/2/2022 | 1926 | PR | Detective and mystery stories; Fiction: Private investigators - England, Murder - Investigation, Belgians - England |
72578 | Tom Swift and His Talking Pictures | Appleton, Victor | 43,853 | 1/1/2024 | 1928 | PZ | Adventure stories; Motion pictures |
72600 | The Trumpeter of Krakow | Kelly, Eric Philbrook | 59,081 | 1/2/2024 | 1928 | PZ | Juvenile fiction: Middle Ages, Poland - History - Casimir IV, 1447-1492 |
72869 | Meet the Tiger | Charteris, Leslie | 79,946 | 2/4/2024 | 1928 | PR | Fiction: Private investigators - England; Detective and mystery stories |
72958 | Hunting for Hidden Gold | Dixon, Franklin W. | 42,354 | 2/14/2024 | 1928 | PZ | Juvenile fiction: Brothers, Gold mines and mining, Montana, Robbers and outlaws; Mystery and detective stories |
72963 | The Nature of the Physical World | Eddington, Arthur Stanley, Sir | 104,530 | 2/15/2024 | 1928 | Q | Physics - Philosophy; Science - Philosophy |
72972 | Money for Nothing | Wodehouse, P.G. (Pelham Grenville) | 82,331 | 2/16/2024 | 1928 | PR | Humorous stories; Fiction: Swindlers and swindling, Greed |
73017 | Pomona; or, the Future of English | De Selincourt, Basil | 9,273 | 2/22/2024 | 1928 | PE | English language |
73042 | The Well of Loneliness | Hall, Radclyffe | 163,217 | 2/26/2024 | 1928 | PR | Fiction: Lesbians - England - Social conditions |
Above: Project Gutenberg metadata for the books in this dataset.
Initial Data Collection and Normalization
We wrote custom Python code to retain only the book text that forms a continuous narrative. We stripped the front and back matter of each book and extracted chapter titles where they existed. 8 of the 9 books contained individual section or chapter breaks. For these 8 books, we parsed the text corresponding to each chapter. Chapter titles or section headings (e.g. 'VI' to indicate section six) were removed, and all remaining text was concatenated. This string was split into words (assuming simple whitespace separators, using Python's `string.split()`) to produce a final text array for each book. This text array was sampled to build the BookSORT dataset.
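The final step described above amounts to the following sketch; `build_text_array` is a hypothetical helper name, and chapter extraction is assumed to have already been done.

```python
# Minimal sketch of the whitespace tokenization step described above.

def build_text_array(chapter_texts: list[str]) -> list[str]:
    """Concatenate chapter texts and split on whitespace to get the word array."""
    full_text = " ".join(chapter_texts)
    return full_text.split()  # simple whitespace separators, as described above

words = build_text_array(["Chapter one text ...", "Chapter two text ..."])
print(len(words))  # number of whitespace-separated words in the (toy) book
```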
Who are the source language producers?
The books in the dataset are in the public domain and were written by various authors. The list of books and authors is provided in the table above. The sample text is derived from the books and preprocessed using custom Python code.
Annotations
The BookSORT dataset includes annotations that support evaluating model performance. All annotations were generated automatically from the text of the books and the sampling process: they describe the book, excerpt, and segments used in each evaluation sample, and they specify the order in which the segments should be presented to the model so that its performance can be measured.
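As a sketch of how the order annotation can be used in scoring, the function below computes accuracy from A/B answers, assuming prompts were built with the ordering convention sketched in the Data Fields section; this is not an official evaluation script.

```python
# Sketch: score SORT responses given the present_seg1_first annotation,
# assuming each model answer is the letter "A" or "B".

def sort_accuracy(samples: list[dict], answers: list[str]) -> float:
    """Fraction of samples where the model picked the segment that occurs earlier."""
    correct = 0
    for sample, answer in zip(samples, answers):
        # If segment_1 (the earlier segment) was presented first, "A" is correct.
        expected = "A" if sample["present_seg1_first"] == 1 else "B"
        correct += int(answer.strip().upper() == expected)
    return correct / len(samples)
```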
Personal and Sensitive Information
The books in the dataset do not contain sensitive personal information. Most of the works are fiction, and all of the information is already in the public domain. The content of the books includes the following:
- Non-sensitive data about people
- Data about natural phenomena
- Data about places and objects
Considerations for Using the Data
Discussion of Biases
All of the books are English language books, and we did not include any that were translated from other languages. The books are also from a specific time window. Both factors mean that the semantic content of the dataset has some cultural biases. SORT, the intended evaluation task over the books, is agnostic to the specific content of the sequence. However, models evaluated on BookSORT may perform slightly better on the task if their training data contain similar cultural knowledge or biases to the content of the books.
Additional Information
Licensing Information
While we release this dataset under CC0, please consider citing the accompanying paper if you use this dataset or any derivative of it.
Citation Information
BibTeX:
@misc{pink2024assessingepisodicmemoryllms,
title={Assessing Episodic Memory in LLMs with Sequence Order Recall Tasks},
author={Mathis Pink and Vy A. Vo and Qinyuan Wu and Jianing Mu and Javier S. Turek and Uri Hasson and Kenneth A. Norman and Sebastian Michelmann and Alexander Huth and Mariya Toneva},
year={2024},
eprint={2410.08133},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.08133},
}