willtensora committed
Commit 77c3bed · verified · 1 Parent(s): c7e6413

End of training

Files changed (3):
  1. README.md +36 -17
  2. generation_config.json +2 -3
  3. pytorch_model.bin +2 -2
README.md CHANGED
@@ -1,11 +1,12 @@
 ---
 library_name: transformers
-base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
+license: llama3.2
+base_model: NousResearch/Llama-3.2-1B
 tags:
 - axolotl
 - generated_from_trainer
 model-index:
-- name: dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86
+- name: 0c2649cc-2fe7-4e88-b672-6da1fee4001f
   results: []
 ---
 
@@ -17,20 +18,21 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.4.1`
 ```yaml
-base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
+base_model: NousResearch/Llama-3.2-1B
 batch_size: 32
 bf16: true
 chat_template: tokenizer_default_fallback_alpaca
 datasets:
 - data_files:
-  - f4a61305a746447c_train_data.json
+  - f51beb4c568b9128_train_data.json
   ds_type: json
   format: custom
-  path: /workspace/input_data/f4a61305a746447c_train_data.json
+  path: /workspace/input_data/f51beb4c568b9128_train_data.json
   type:
-    field_instruction: sentence1
-    field_output: sentence2
-    format: '{instruction}'
+    field_input: keywords
+    field_instruction: idea
+    field_output: full_response
+    format: '{instruction} {input}'
     no_input_format: '{instruction}'
     system_format: '{system}'
     system_prompt: ''
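
As an aside for readers new to axolotl: the `type` block above maps JSON fields onto a prompt template. The sketch below shows roughly how one record would be rendered; it is a hedged illustration, not axolotl's actual rendering code, and the sample record is invented.

```python
# Minimal sketch (not axolotl's own code) of how the `type` block above
# turns one record of f51beb4c568b9128_train_data.json into a prompt.
# The record below is hypothetical.
record = {
    "idea": "A portable solar water purifier",    # field_instruction
    "keywords": "solar, purification, off-grid",  # field_input
    "full_response": "Design overview: ...",      # field_output (training target)
}

def render_prompt(rec: dict) -> str:
    """format: '{instruction} {input}' when the input field is non-empty,
    otherwise no_input_format: '{instruction}'."""
    instruction, inp = rec["idea"], rec.get("keywords", "")
    if inp:
        return f"{instruction} {inp}"
    return instruction

print(render_prompt(record))
# -> "A portable solar water purifier solar, purification, off-grid"
```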
@@ -39,7 +41,7 @@ flash_attention: true
 gpu_memory_limit: 80GiB
 gradient_checkpointing: true
 group_by_length: true
-hub_model_id: willtensora/dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86
+hub_model_id: willtensora/0c2649cc-2fe7-4e88-b672-6da1fee4001f
 hub_strategy: checkpoint
 learning_rate: 0.0002
 logging_steps: 10
@@ -55,13 +57,15 @@ sample_packing: false
 save_steps: 40
 save_total_limit: 1
 sequence_len: 2048
-tokenizer_type: LlamaTokenizerFast
+special_tokens:
+  pad_token: <|end_of_text|>
+tokenizer_type: PreTrainedTokenizerFast
 train_on_inputs: false
 trust_remote_code: true
 val_set_size: 0.1
 wandb_entity: ''
 wandb_mode: online
-wandb_name: trl-internal-testing/tiny-random-LlamaForCausalLM-/workspace/input_data/f4a61305a746447c_train_data.json
+wandb_name: NousResearch/Llama-3.2-1B-/workspace/input_data/f51beb4c568b9128_train_data.json
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: default
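
The new `special_tokens` block is needed because the Llama 3.2 tokenizer ships without a padding token, so the config reuses `<|end_of_text|>`. A minimal equivalent in plain transformers (assuming hub access) would be:

```python
# What the special_tokens block amounts to at load time: reuse
# <|end_of_text|> (Llama 3's EOS) as the padding token, since the
# base tokenizer defines none.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NousResearch/Llama-3.2-1B")
tok.pad_token = "<|end_of_text|>"

batch = tok(["short", "a noticeably longer sequence"],
            padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # both rows padded to the longer length
```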
@@ -72,9 +76,11 @@ xformers_attention: true
 
 </details><br>
 
-# dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86
+# 0c2649cc-2fe7-4e88-b672-6da1fee4001f
 
-This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
+This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0849
 
 ## Model description
 
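The card carries no usage snippet, so here is a hedged sketch of loading the published checkpoint with stock transformers; the repo id comes from `hub_model_id` in the config and the prompt is a placeholder.

```python
# Hedged usage sketch; the repo id comes from hub_model_id above and
# the prompt is a placeholder, not an example from the training data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "willtensora/0c2649cc-2fe7-4e88-b672-6da1fee4001f"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tok("Your idea and keywords here", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```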
@@ -103,13 +109,26 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 32
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- training_steps: 13
+- lr_scheduler_warmup_steps: 12
+- training_steps: 258
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log | 0.01 | 1 | 10.3686 |
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| No log        | 0.0005 | 1    | 0.2074          |
+| 0.5472        | 0.0097 | 20   | 0.1746          |
+| 0.3199        | 0.0194 | 40   | 0.2036          |
+| 0.2013        | 0.0291 | 60   | 0.1772          |
+| 0.0903        | 0.0388 | 80   | 0.1702          |
+| 0.0875        | 0.0485 | 100  | 0.2040          |
+| 0.1425        | 0.0582 | 120  | 0.1392          |
+| 0.1982        | 0.0679 | 140  | 0.1194          |
+| 0.1372        | 0.0776 | 160  | 0.1014          |
+| 0.0278        | 0.0873 | 180  | 0.0952          |
+| 0.0248        | 0.0970 | 200  | 0.0893          |
+| 0.1051        | 0.1067 | 220  | 0.0875          |
+| 0.0649        | 0.1164 | 240  | 0.0849          |
 
 
 ### Framework versions
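
The schedule settings above (cosine decay, 12 warmup steps, 258 total steps, peak lr 2e-4) imply the learning-rate curve sketched below. This mirrors the shape of transformers' `get_cosine_schedule_with_warmup` rather than reproducing its implementation:

```python
# Shape of the configured schedule: linear warmup to lr=2e-4 over the
# first 12 steps, then cosine decay across the remaining 246 of 258.
import math

def lr_at(step: int, base_lr: float = 2e-4, warmup: int = 12, total: int = 258) -> float:
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / max(1, total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

for s in (0, 6, 12, 135, 258):
    print(f"step {s:3d}: lr = {lr_at(s):.2e}")
```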
 
generation_config.json CHANGED
@@ -1,8 +1,7 @@
 {
   "_from_model_config": true,
-  "bos_token_id": 0,
+  "bos_token_id": 128000,
   "do_sample": true,
-  "eos_token_id": 1,
-  "pad_token_id": 2,
+  "eos_token_id": 128001,
   "transformers_version": "4.46.0"
 }
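
These are Llama 3's actual special-token ids (128000 for `<|begin_of_text|>`, 128001 for `<|end_of_text|>`), replacing the placeholder ids of the tiny test model; dropping `pad_token_id` means padding falls back to the tokenizer. A quick sanity check, assuming hub access:

```python
# Load the published generation config and confirm the token ids.
from transformers import GenerationConfig

cfg = GenerationConfig.from_pretrained("willtensora/0c2649cc-2fe7-4e88-b672-6da1fee4001f")
print(cfg.bos_token_id, cfg.eos_token_id, cfg.pad_token_id)
# expected: 128000 128001 None
```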
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:70872e8af35c48abf8dc8a0f41f28f7673e23a762cd5f5a4707b0788bf617ebf
-size 2071661
+oid sha256:4a3488e39325dea60c21ab0cf3a2715d26192702fde06183582341380d5a328b
+size 2471678226
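
What changed here is only a Git LFS pointer: the repository stores the sha256 oid and byte size, while the weights themselves (about 2.47 GB, plausible for roughly 1.2B parameters in 16-bit) live in LFS storage. A small local integrity check, assuming the real file has been pulled:

```python
# Verify a downloaded LFS object against the pointer's oid and size.
import hashlib
import os

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> None:
    assert os.path.getsize(path) == expected_size, "size mismatch"
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    assert h.hexdigest() == expected_oid, "sha256 mismatch"

verify_lfs_object(
    "pytorch_model.bin",
    "4a3488e39325dea60c21ab0cf3a2715d26192702fde06183582341380d5a328b",
    2471678226,
)
```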