infinitylogesh committed
Commit a28e531 · 1 Parent(s): b7adda1
Update README.md
README.md CHANGED
---
license: openrail
datasets:
- bigcode/the-stack-dedup
library_name: transformers
tags:
- code_generation
- R programming
- sas
- santacoder
---

# Statscoder

This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder), trained on the `R` and `SAS` language repositories from the [the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) dataset.
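
As a quick illustration of how the checkpoint can be used, below is a minimal generation sketch with the 🤗 Transformers library. The repository id is an assumption inferred from this model card's name, and `trust_remote_code=True` mirrors what the base santacoder checkpoint requires; adjust both to your setup.

```python
# Minimal generation sketch. Assumptions: the checkpoint id below is inferred from this
# model card's name, and trust_remote_code=True mirrors the base santacoder requirement.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "infinitylogesh/statscoder"  # assumed repository id; replace if different

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Ask the model to complete an R snippet (it was fine-tuned on R and SAS code).
prompt = "# Fit a linear regression in R\nfit <- lm("
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=True,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```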

## Training procedure

The model was fine-tuned using code adapted from [loubnabnl/santacoder-finetuning](https://github.com/loubnabnl/santacoder-finetuning); the adapted version, which supports training on multiple dataset subsets, is available at [infinitylogesh/santacoder-finetuning](https://github.com/infinitylogesh/santacoder-finetuning).
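
To make the "multiple subsets" point concrete, here is a rough sketch (not the actual training code) of pulling and mixing the R and SAS portions of the-stack-dedup with the 🤗 Datasets library; the `data_dir` paths and the `content` column name are assumptions about the dataset layout.

```python
# Rough sketch of loading and mixing two language subsets of the-stack-dedup.
# Assumptions: subsets live under data/<language> (e.g. data/r, data/sas) and the
# source text is in a "content" column; access may require accepting the dataset's
# terms on the Hugging Face Hub.
from datasets import load_dataset, interleave_datasets

r_ds = load_dataset("bigcode/the-stack-dedup", data_dir="data/r", split="train", streaming=True)
sas_ds = load_dataset("bigcode/the-stack-dedup", data_dir="data/sas", split="train", streaming=True)

# Alternate between the two subsets so both languages are seen during fine-tuning.
train_ds = interleave_datasets([r_ds, sas_ds], seed=42)

for example in train_ds.take(2):
    print(example["content"][:200])
```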

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- optimizer: adafactor
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1600
- seq_length: 1024
- no_fp16
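
Purely as an illustration, the same configuration mapped onto 🤗 Transformers `TrainingArguments` might look like the sketch below; this is not the script that was actually used (see the fine-tuning repository linked above), and `output_dir` is a hypothetical path. `seq_length: 1024` has no direct `TrainingArguments` counterpart, so it is only noted in a comment.

```python
# Illustrative mapping of the hyperparameters above onto transformers.TrainingArguments.
# Assumption: this mirrors, but is not, the adapted santacoder-finetuning script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="statscoder-checkpoints",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    optim="adafactor",
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1600,
    fp16=False,  # "no_fp16": mixed-precision fp16 disabled
)
# seq_length (1024) is applied when tokenizing/packing the dataset, not via TrainingArguments.
```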