Update README.md
README.md
# Description of the Dataset
This release contains the full data sequence used in CrystalCoder training. It covers all three pre-training stages and combines two prior datasets, the [SlimPajama dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata), for a total of approximately 1300 billion tokens. These tokens are distributed across the three stages with distinct mixing weights.
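For orientation, the two source corpora cited above can be inspected directly from the Hugging Face Hub. Below is a minimal sketch using the `datasets` library; streaming mode is assumed here to avoid downloading the full corpora, and this is an illustration of the sources, not the loader for the merged release itself.

```python
# Hedged example: peek at the SlimPajama source corpus in streaming mode.
from datasets import load_dataset

slimpajama = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
first_doc = next(iter(slimpajama))
print(first_doc["text"][:200])  # "text" field name follows the SlimPajama dataset card
```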
## Stage 1
During this initial stage, half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is utilized, equivalent to approximately 345 billion tokens.
## Stage 2
In the second stage, the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is employed, along with two epochs of the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). The StarCoder data is preprocessed with [FIM augmentation](https://arxiv.org/abs/2207.14255) at an FIM rate of 0.9. The total token count for this stage is 0.5 × 690B (SlimPajama) + 2 × 291B (StarCoder) = 927 billion tokens.
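The FIM transform referenced above follows Bavarian et al. (2022): with probability equal to the FIM rate, a document is split at two random points and re-emitted as prefix / suffix / middle around sentinel tokens. The sketch below is a character-level illustration only; the sentinel strings and the PSM layout are assumptions, not the exact CrystalCoder preprocessing.

```python
import random

# Placeholder sentinel strings; the actual tokenizer's FIM special tokens may differ (assumption).
FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX = "<fim_prefix>", "<fim_middle>", "<fim_suffix>"

def fim_transform(doc: str, fim_rate: float = 0.9) -> str:
    """With probability `fim_rate`, reorder a document into prefix/suffix/middle (PSM) form."""
    if len(doc) < 2 or random.random() >= fim_rate:
        return doc  # leave the document in ordinary left-to-right order
    # Pick two cut points, then emit prefix and suffix first so the model learns to fill in the middle.
    lo, hi = sorted(random.sample(range(len(doc) + 1), 2))
    prefix, middle, suffix = doc[:lo], doc[lo:hi], doc[hi:]
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"
```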
## Stage 3
The third stage reuses Python and web-related data from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata), namely the Python, HTML, CSS, and JavaScript subsets. This data is trained on for three epochs with FIM applied at a rate of 0.3, contributing a total of 100 billion tokens. Additionally, a small portion of the SlimPajama dataset, excluding the GitHub part, is also reused, contributing around 10 billion tokens.
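The language subsets reused in this stage map onto per-language directories in the StarCoder data. A hedged sketch of selecting just those subsets with the `datasets` library (the directory names follow the upstream bigcode/starcoderdata card and are assumptions here, not part of this release):

```python
from datasets import load_dataset

# Stage 3 language subsets; directory names assumed from the bigcode/starcoderdata layout.
stage3_languages = ["python", "html", "css", "javascript"]
stage3_subsets = {
    lang: load_dataset("bigcode/starcoderdata", data_dir=lang, split="train", streaming=True)
    for lang in stage3_languages
}
```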
# Primary Usage
This dataset serves as the training corpus for CrystalCoder and supports reproduction of its pre-training.
# License