sangmichaelxie committed
Commit 49ebabe
1 Parent(s): b17b9fc

Update README.md

Files changed (1): README.md (+69, -0)
---
license: mit
language:
- en
size_categories:
- 10M<n<100M
---

# Dataset Card for DSIR-filtered-pile-50M

## Dataset Description

- **Repository:**
- **Paper:**
- **Point of Contact:** Sang Michael Xie <xie@cs.stanford.edu>

### Dataset Summary

This dataset is a subset of The Pile, selected with the DSIR (Data Selection via Importance Resampling) method. The target distribution for DSIR is the Wikipedia and book-related subsets of The Pile.
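
As a quick-start sketch (not part of the original card), the data can be streamed with the Hugging Face `datasets` library; the repo id below is an assumption based on this card's title and the committer's namespace and may differ:

```python
from itertools import islice

from datasets import load_dataset

# Repo id is assumed from this card's title and the committer's namespace;
# adjust if the dataset is hosted elsewhere.
ds = load_dataset("sangmichaelxie/DSIR-filtered-pile-50M",
                  split="train", streaming=True)

for example in islice(ds, 2):  # peek at the first two records
    print(example)
```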

### Languages

English (EN)
24
+
25
+ ## Dataset Structure
26
+
27
+ ### Data Instances
28
+
29
+ [More Information Needed]
30
+
31
+ ### Data Fields
32
+
33
+ [More Information Needed]
34
+

## Dataset Creation

We first select 102.4M examples, then concatenate every two consecutive examples to create 51.2M examples.
This ensures that the examples are long enough for a maximum token length of 512 without much padding.
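
A minimal sketch of this pairing step (`concat_pairs` is a hypothetical helper for illustration, not the authors' released code):

```python
# Illustrative sketch: merge every two consecutive selected examples so
# each final example is long enough to fill a 512-token context
# without much padding.
def concat_pairs(examples):
    return [examples[i] + " " + examples[i + 1]
            for i in range(0, len(examples) - 1, 2)]

selected = ["first 128-word chunk ...", "second chunk ...",
            "third chunk ...", "fourth chunk ..."]
print(len(concat_pairs(selected)))  # 4 selected examples -> 2 final examples
```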

We train the importance weight estimator for DSIR on The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the remaining sources in The Pile.
Concretely, we select 98.4M examples from the non-Wikipedia, non-book data, then randomly select 2M examples from Wikipedia and 0.66M examples each from BookCorpus2, Gutenberg, and Books3.
After this, we concatenate every two examples as described above.
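
For intuition, here is a hedged sketch of DSIR-style importance weighting and resampling with hashed n-gram features; the bucket count, add-1 smoothing, and helper names are assumptions for illustration, not the exact released implementation (see the paper):

```python
import numpy as np

NUM_BUCKETS = 10_000  # assumed feature dimension, not the paper's setting

def ngram_counts(text, num_buckets=NUM_BUCKETS):
    """Hash unigrams and bigrams of whitespace-tokenized text into buckets."""
    words = text.lower().split()
    ngrams = words + [" ".join(pair) for pair in zip(words, words[1:])]
    counts = np.zeros(num_buckets)
    for ng in ngrams:
        counts[hash(ng) % num_buckets] += 1  # note: hash() varies across runs
    return counts

def fit_log_probs(texts, num_buckets=NUM_BUCKETS):
    """Smoothed bucket probabilities estimated from a corpus sample."""
    total = sum(ngram_counts(t, num_buckets) for t in texts) + 1.0
    return np.log(total / total.sum())

def log_importance_weight(text, log_p_target, log_p_raw):
    """log w(x) = sum_j c_j(x) * (log p_target[j] - log p_raw[j])."""
    return ngram_counts(text) @ (log_p_target - log_p_raw)

def dsir_select(candidates, log_p_target, log_p_raw, k, seed=0):
    """Gumbel top-k: keep k examples without replacement, with probability
    proportional to their importance weights."""
    rng = np.random.default_rng(seed)
    scores = np.array([log_importance_weight(t, log_p_target, log_p_raw)
                       for t in candidates])
    scores = scores + rng.gumbel(size=len(candidates))
    return [candidates[i] for i in np.argsort(-scores)[:k]]
```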

### Source Data

The Pile

#### Initial Data Collection and Normalization

We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.
We first divide the documents in The Pile into chunks of 128 words, according to whitespace tokenization.
These chunks define the examples that we do data selection on, totaling 1.7B examples.
Before DSIR, we apply a manual quality filter (see the paper for details) and only consider the examples that pass the filter.
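
A small sketch of the 128-word whitespace chunking described above (illustrative only; see the paper for the exact preprocessing):

```python
# Split a document into 128-word chunks by whitespace tokenization.
def chunk_document(text, chunk_size=128):
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

doc = "word " * 300  # toy document with 300 whitespace-separated words
print([len(c.split()) for c in chunk_document(doc)])  # [128, 128, 44]
```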

### Annotations

There are no annotations.

## Considerations for Using the Data

The dataset is biased towards data from non-Wikipedia and non-book sources. A more balanced approach would be to mix in more data from Wikipedia and books.

### Dataset Curators

Sang Michael Xie

### Citation Information

@article{xie2023data,
  author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
  journal = {TODO},
  title = {Data Selection for Language Models via Importance Resampling},
  year = {2023},
}