ivanzhouyq committed
Commit
6614d41
1 Parent(s): 47458a9

Update README.md

Files changed (1)
  1. README.md +7 -59
README.md CHANGED
@@ -4,31 +4,18 @@ task_categories:
  language:
  - en
  pretty_name: RedPajama Tiny
+ license: apache-2.0
+ size_categories:
+ - n<1K
  ---
  # Dataset Card for Dataset Name

  ### Dataset Summary

- This is a tiny version of the RedPajama dataset, which is a clean-room, fully open-source implementation of the LLaMa dataset.
-
- This dataset contains 64 samples from each of the 7 sources.
-
- The full dataset has the following token counts and is available for [download](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T):
-
- | Dataset       | Token Count  |
- |---------------|--------------|
- | Commoncrawl   | 878 Billion  |
- | C4            | 175 Billion  |
- | GitHub        | 59 Billion   |
- | Books         | 26 Billion   |
- | ArXiv         | 28 Billion   |
- | Wikipedia     | 24 Billion   |
- | StackExchange | 20 Billion   |
- | Total         | 1.2 Trillion |
-
- ### Languages
-
- Primarily English, though the Wikipedia slice contains multiple languages.
+ This is a tiny version of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
+ It contains 64 samples from each of the 7 sources.
+ This dataset is intended for developing and testing data/training pipelines that load the full RedPajama dataset or any other HuggingFace dataset.
+ It is very fast to download and easy to examine. You should not use it to train a full model, but you can use it for an overfitting test or other sanity checks.

  ## Dataset Structure

@@ -40,42 +27,3 @@ The dataset structure is as follows:
    "meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
  }
  ```
-
- ## Dataset Creation
-
- This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
-
- ### Source Data
-
- #### Commoncrawl
-
- We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
- We then deduplicate on the paragraph level, and filter out low-quality text using a linear classifier trained to
- classify paragraphs as Wikipedia references or random Commoncrawl samples.
-
- #### C4
-
- C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
-
- #### GitHub
-
- The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level, filter out low-quality
- files, and only keep projects that are distributed under the MIT, BSD, or Apache license.
-
- #### Wikipedia
- We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
- text in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments, and other
- formatting boilerplate have been removed.
-
- #### Gutenberg and Books3
- The PG19 subset of the Gutenberg Project and the Books3 dataset are downloaded from Huggingface. After downloading, we use
- simhash to remove near duplicates.
-
- #### ArXiv
- ArXiv data is downloaded from Amazon S3 in the `arxiv` requester-pays bucket. We only keep LaTeX source files and
- remove preambles, comments, macros, and bibliographies.
-
- #### Stackexchange
- The Stack Exchange split of the dataset is downloaded from the
- [Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
- remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
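
The updated summary pitches this tiny split as a way to exercise data/training pipelines before pulling the full 1.2-trillion-token dataset. Below is a minimal sketch of such a smoke test; the Hub repo id, the split name, and the exact shape of the `meta` field are assumptions, so substitute the values from the actual dataset card.

```python
# Minimal pipeline smoke test against the tiny RedPajama split.
# Assumptions: the repo id and the "train" split name are placeholders, and
# `meta` may be either a dict or a JSON-encoded string depending on the release.
import json
from datasets import load_dataset

REPO_ID = "ivanzhouyq/RedPajama-Tiny"  # hypothetical id; replace with the real one

# 64 samples from each of the 7 sources, so the whole split fits easily in memory.
ds = load_dataset(REPO_ID, split="train")

for example in ds.select(range(3)):
    meta = example["meta"]
    if isinstance(meta, str):  # some RedPajama releases store meta as a JSON string
        meta = json.loads(meta)
    # Each record follows the structure shown in the card: {"text": ..., "meta": {...}}
    print(meta.get("source"), len(example["text"]))
```

Once this passes, the same code path can be pointed at the full `togethercomputer/RedPajama-Data-1T` dataset, which may additionally require a subset/config name.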