Update README.md
README.md
---
license: mit
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 63759933.69322235
    num_examples: 2517
  - name: test
    num_bytes: 52057383.0
    num_examples: 2086
  download_size: 19849080
  dataset_size: 115817316.69322234
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

This dataset is derived from the [`tau/scrolls`](https://huggingface.co/datasets/tau/scrolls) dataset by running the following script:

```python
import re

from datasets import load_dataset

# Load the QuALITY subset of SCROLLS.
quality_dataset = load_dataset("tau/scrolls", "quality")


def parse_example(example):
    text = example["input"]

    # The raw input contains the question, the four options, and then the document.
    options = dict(re.findall(r"\((A|B|C|D)\) ([^\n]+)", text))

    # Everything after option (D) is the context; what precedes it, minus the
    # option lines, is the question.
    question_part, context = re.split(r"\(D\) [^\n]+\n", text, maxsplit=1)
    question = re.sub(r"\([A-D]\) [^\n]+\n?", "", question_part).strip()

    result = {"question": question, "context": context.strip(), **options}

    if not all(key in result for key in ["A", "B", "C", "D"]):
        raise ValueError("One or more options (A, B, C, D) are missing!")

    # Map the gold answer text back to its option index (0-3); -1 if no match.
    label = -1
    answer = example["output"]
    if answer is None:
        answer = ""

    for idx, option in enumerate([options["A"], options["B"], options["C"], options["D"]]):
        if answer.strip() == option.strip():
            label = idx

    result["label"] = label
    return result


quality_dataset = quality_dataset.map(parse_example)
quality_dataset = quality_dataset.filter(lambda x: x["label"] >= 0)

train_ds = quality_dataset["train"].remove_columns(["pid", "input", "output"])
test_ds = quality_dataset["validation"].remove_columns(["pid", "input", "output"])
```

Specifically, only the `quality` subset is kept and processed into multiple-choice (MCQ) format. The original `test` split is dropped because it has no ground-truth labels; the `validation` split is assigned as `test` instead.

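As a minimal sketch (not part of the original script, and using a placeholder repository id), the processed splits can be packed into a `DatasetDict` and pushed to the Hub:

```python
# Sketch only: combine the processed splits and publish them.
# "your-username/quality-mcq" is a placeholder repository id.
from datasets import DatasetDict

processed = DatasetDict({"train": train_ds, "test": test_ds})
processed.push_to_hub("your-username/quality-mcq")
```
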
- Number of examples in train: ~2.5k
- Number of examples in test: ~2.1k

This dataset can be used to test the performance of models on long contexts.

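As a rough sketch, an example can be formatted into a multiple-choice prompt for such an evaluation (the template below is only an illustration; `test_ds` comes from the script above):

```python
# Sketch only: turn a processed example into a multiple-choice prompt.
# The template is illustrative; `test_ds` is produced by the script above.
def to_prompt(example):
    return (
        f"{example['context']}\n\n"
        f"Question: {example['question']}\n"
        f"(A) {example['A']}\n(B) {example['B']}\n"
        f"(C) {example['C']}\n(D) {example['D']}\n"
        "Answer:"
    )

example = test_ds[0]
prompt = to_prompt(example)
gold = "ABCD"[example["label"]]  # label is the 0-based index of the correct option
```
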
Input tokens as per the [llama2](bclavie/bert24_32k_tok_llama2) tokenizer: mean ~7.4k, SD ~2.3k, max ~11.6k.

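As a sketch, statistics like these can be computed as follows, assuming the tokenizer repo linked above loads with `AutoTokenizer` and that the raw `input` field (question, options, and document) is what gets counted:

```python
# Sketch only: token-length statistics over the raw QuALITY validation inputs.
# Assumes the tokenizer repo above is loadable via AutoTokenizer.
import statistics

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bclavie/bert24_32k_tok_llama2")
ds = load_dataset("tau/scrolls", "quality", split="validation")

lengths = [len(tokenizer(ex["input"])["input_ids"]) for ex in ds]
print(f"mean={statistics.mean(lengths):.0f}  "
      f"sd={statistics.stdev(lengths):.0f}  max={max(lengths)}")
```
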
---

Relevant sections from the [SCROLLS: Standardized CompaRison Over Long Language Sequences paper](https://arxiv.org/pdf/2201.03533):

```
QuALITY (Pang et al., 2021): A multiple-choice question answering dataset over
stories and articles sourced from Project Gutenberg, the Open American National
Corpus (Fillmore et al., 1998; Ide and Suderman, 2004), and more. Experienced
writers wrote questions and distractors, and were incentivized to write
answerable, unambiguous questions such that in order to correctly answer them,
human annotators must read large portions of the given document. To measure the
difficulty of their questions, Pang et al. conducted a speed validation process,
where another set of annotators were asked to answer questions given only a
short period of time to skim through the document. As a result, 50% of the
questions in QuALITY are labeled as hard, i.e. the majority of the annotators in
the speed validation setting chose the wrong answer.
```