chad-brouze committed
Commit 61270c5
1 Parent(s): 7be55fe

Model save

Files changed (4)
  1. README.md +3 -9
  2. all_results.json +4 -9
  3. train_results.json +4 -4
  4. trainer_state.json +0 -0
README.md CHANGED
@@ -1,16 +1,10 @@
 ---
 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
 datasets:
-- africa-intelligence/yahma-alpaca-cleaned-tn
-- africa-intelligence/yahma-alpaca-cleaned-xh
-- africa-intelligence/yahma-alpaca-cleaned-zu
-- africa-intelligence/yahma-alpaca-cleaned-af
-- africa-intelligence/yahma-alpaca-cleaned-en
-- africa-intelligence/yahma-alpaca-cleaned-nso
+- generator
 library_name: peft
 license: llama3.1
 tags:
-- alignment-handbook
 - trl
 - sft
 - generated_from_trainer
@@ -24,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # llama-8b-south-africa
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the africa-intelligence/yahma-alpaca-cleaned-tn, the africa-intelligence/yahma-alpaca-cleaned-xh, the africa-intelligence/yahma-alpaca-cleaned-zu, the africa-intelligence/yahma-alpaca-cleaned-af, the africa-intelligence/yahma-alpaca-cleaned-en and the africa-intelligence/yahma-alpaca-cleaned-nso datasets.
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the generator dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.0571
 
@@ -61,7 +55,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.0962 | 0.9999 | 5596 | 1.0571 |
+| 1.0959 | 0.9999 | 5596 | 1.0571 |
 
 
 ### Framework versions
all_results.json CHANGED
@@ -1,14 +1,9 @@
 {
     "epoch": 0.9999106584472438,
-    "eval_loss": 1.0571058988571167,
-    "eval_runtime": 871.0794,
-    "eval_samples": 31055,
-    "eval_samples_per_second": 5.121,
-    "eval_steps_per_second": 0.641,
     "total_flos": 4.151601388859687e+18,
-    "train_loss": 1.2674157931559251,
-    "train_runtime": 31909.1388,
+    "train_loss": 1.267410275018411,
+    "train_runtime": 31869.1034,
     "train_samples": 310544,
-    "train_samples_per_second": 1.403,
-    "train_steps_per_second": 0.175
+    "train_samples_per_second": 1.405,
+    "train_steps_per_second": 0.176
 }
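As a quick sanity check, the updated throughput figures in this file are internally consistent: dividing the step count from the README's log table (5596) by the new `train_runtime` reproduces `train_steps_per_second`, and an effective batch of 8 sequences per step reproduces `train_samples_per_second`. The batch size of 8 is inferred from the numbers, not stated in this diff; this is a sketch, not part of the commit.

```python
# Figures taken from the updated all_results.json and the README log table.
steps = 5596            # optimizer steps (README training-log table)
runtime_s = 31869.1034  # "train_runtime"

steps_per_second = steps / runtime_s
print(round(steps_per_second, 3))      # reproduces "train_steps_per_second": 0.176

# Assuming an effective batch of 8 sequences per step (inferred, not stated),
# samples/second also matches the reported value.
print(round(steps_per_second * 8, 3))  # reproduces "train_samples_per_second": 1.405
```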
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 0.9999106584472438,
     "total_flos": 4.151601388859687e+18,
-    "train_loss": 1.2674157931559251,
-    "train_runtime": 31909.1388,
+    "train_loss": 1.267410275018411,
+    "train_runtime": 31869.1034,
     "train_samples": 310544,
-    "train_samples_per_second": 1.403,
-    "train_steps_per_second": 0.175
+    "train_samples_per_second": 1.405,
+    "train_steps_per_second": 0.176
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff