Update README.md
occuring -> occurring
README.md
CHANGED
@@ -80,7 +80,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Dataset
 
-The first pre-training stage relies on 300B tokens sourced from various top programming languages
+The first pre-training stage relies on 300B tokens sourced from various top programming languages occurring in the Stack Overflow Developer Survey, using the `starcoder-data` dataset. We then fine-tune it on a longer-context augmentation of the `starcoder-data` dataset, which increased the average tokens per sample to 20k.
 
 ### Training Procedure
 
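The added sentence describes two data-processing steps: drawing per-language subsets of `starcoder-data` and repacking samples so the average length reaches roughly 20k tokens. As a rough sketch (not part of the commit, and not necessarily how the authors did it), the snippet below assumes the public `bigcode/starcoderdata` layout on the Hugging Face Hub and uses the `gpt2` tokenizer as a stand-in, since the model's own tokenizer is not named in the diff; access to the dataset may require accepting its terms of use.

```python
# Hedged sketch, not taken from the commit above: load one language subset of
# starcoder-data and concatenate tokenized files into ~20k-token samples,
# loosely mirroring the "longer context augmentation" described in the README.
# The dataset id "bigcode/starcoderdata", its "content" column, and the gpt2
# tokenizer are assumptions for illustration only.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",        # one language subset
    split="train",
    streaming=True,
)

TARGET_LEN = 20_000           # target tokens per packed training sample
buffer: list[int] = []
packed: list[list[int]] = []

for example in ds:
    # Concatenate tokenized source files until a full-length sample is ready.
    buffer.extend(tokenizer(example["content"]).input_ids)
    while len(buffer) >= TARGET_LEN:
        packed.append(buffer[:TARGET_LEN])
        buffer = buffer[TARGET_LEN:]
    if len(packed) >= 4:      # stop early; this is only a demonstration
        break

print(f"built {len(packed)} samples of {TARGET_LEN} tokens each")
```

Simple concatenation is only one plausible reading of "longer context augmentation"; grouping files by repository before packing is another common way to reach long average sample lengths.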