andysalerno committed on
Commit 8b4f5e0
1 Parent(s): 7334384

End of training

Files changed (3)
  1. README.md +193 -0
  2. adapter_model.bin +3 -0
  3. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,193 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - axolotl
+ - generated_from_trainer
+ base_model: andysalerno/mistral-sft-v3
+ model-index:
+ - name: rainbowfish-v9-adapter
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ base_model: andysalerno/mistral-sft-v3
+ model_type: AutoModelForCausalLM
+
+ load_in_8bit: true
+ load_in_4bit: false
+ strict: false
+
+ datasets:
+   - path: andysalerno/rainbowfish-v1
+     type:
+       system_prompt: ""
+       field_system: system
+       field_instruction: input
+       field_output: output
+       format: "{instruction}"
+       no_input_format: "{instruction}"
+ dataset_prepared_path: last_run_prepared
+ val_set_size: 0.005
+ output_dir: ./lora-out-rainbow9
+
+ adapter: lora
+ lora_model_dir:
+
+ sequence_len: 2048
+ sample_packing: false # was true
+ eval_sample_packing: false
+ pad_to_sequence_len: false
+ padding_side: left
+
+ lora_r: 64
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+ lora_target_modules:
+   - gate_proj
+   - down_proj
+   - up_proj
+   - q_proj
+   - v_proj
+   - k_proj
+   - o_proj
+
+ lora_modules_to_save:
+   - embed_tokens
+   - lm_head
+
+ wandb_project: axolotl
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+ gradient_accumulation_steps: 4
+ micro_batch_size: 4
+ optimizer: paged_adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 2e-5
+
+ neftune_noise_alpha: 5
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: true
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ gradient_checkpointing_kwargs:
+   use_reentrant: false
+ # early_stopping_patience: 3
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ loss_watchdog_threshold: 5.0
+ loss_watchdog_patience: 3
+
+ hub_strategy: "every_save"
+ hub_model_id: andysalerno/rainbowfish-v9-adapter
+
+ num_epochs: 4
+ warmup_steps: 100
+ eval_steps: 200
+ eval_table_size:
+ eval_table_max_new_tokens: 128
+ # max_steps: 500
+ saves_per_epoch: 1
+ debug:
+ weight_decay: 0.1
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   bos_token: "<|im_start|>"
+   eos_token: "<|im_end|>"
+   unk_token: "<unk>"
+
+ ```
+
+ </details><br>
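+
+ The LoRA settings above correspond, roughly, to the following PEFT `LoraConfig`. This is a minimal sketch assuming the standard `peft` API; axolotl builds the actual config internally from the YAML, so treat it as illustrative rather than exact.
+
+ ```python
+ # Sketch of the LoraConfig implied by the axolotl settings above
+ # (illustrative; axolotl constructs its own config from the YAML).
+ from peft import LoraConfig
+
+ lora_config = LoraConfig(
+     r=64,                 # lora_r
+     lora_alpha=16,        # lora_alpha
+     lora_dropout=0.05,    # lora_dropout
+     target_modules=[
+         "gate_proj", "down_proj", "up_proj",
+         "q_proj", "v_proj", "k_proj", "o_proj",
+     ],
+     modules_to_save=["embed_tokens", "lm_head"],  # lora_modules_to_save
+     task_type="CAUSAL_LM",
+ )
+ ```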
122
+
123
+ # rainbowfish-v9-adapter
124
+
125
+ This model is a fine-tuned version of [andysalerno/mistral-sft-v3](https://huggingface.co/andysalerno/mistral-sft-v3) on the None dataset.
126
+ It achieves the following results on the evaluation set:
127
+ - Loss: 0.6456
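+
+ A minimal usage sketch, assuming the standard `transformers` and `peft` loading APIs and the ChatML-style special tokens defined in the config above:
+
+ ```python
+ # Minimal sketch: load the base model, apply this adapter, and generate.
+ # Assumes standard transformers/peft APIs; adjust dtype/device as needed.
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base = AutoModelForCausalLM.from_pretrained(
+     "andysalerno/mistral-sft-v3", torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ model = PeftModel.from_pretrained(base, "andysalerno/rainbowfish-v9-adapter")
+ tokenizer = AutoTokenizer.from_pretrained("andysalerno/mistral-sft-v3")
+
+ # The config trains with ChatML-style markers: <|im_start|> / <|im_end|>.
+ prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ out = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(out[0]))
+ ```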
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 4
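+
+ The batch-size totals above follow from the per-device settings; a quick check of the arithmetic:
+
+ ```python
+ # Effective batch sizes: per-device batch x grad accumulation x device count.
+ micro_batch_size = 4              # train_batch_size per device
+ gradient_accumulation_steps = 4
+ num_devices = 4
+
+ total_train = micro_batch_size * gradient_accumulation_steps * num_devices
+ total_eval = micro_batch_size * num_devices  # no grad accumulation at eval
+ assert (total_train, total_eval) == (64, 16)
+ ```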
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.6535        | 0.18  | 200  | 0.6840          |
+ | 0.69          | 0.37  | 400  | 0.6711          |
+ | 0.6649        | 0.55  | 600  | 0.6641          |
+ | 0.6959        | 0.74  | 800  | 0.6590          |
+ | 0.717         | 0.92  | 1000 | 0.6547          |
+ | 0.5243        | 1.11  | 1200 | 0.6540          |
+ | 0.6285        | 1.29  | 1400 | 0.6523          |
+ | 0.6219        | 1.47  | 1600 | 0.6504          |
+ | 0.6334        | 1.66  | 1800 | 0.6486          |
+ | 0.6627        | 1.84  | 2000 | 0.6466          |
+ | 0.6319        | 2.03  | 2200 | 0.6460          |
+ | 0.6081        | 2.21  | 2400 | 0.6466          |
+ | 0.5721        | 2.4   | 2600 | 0.6459          |
+ | 0.5794        | 2.58  | 2800 | 0.6447          |
+ | 0.721         | 2.76  | 3000 | 0.6443          |
+ | 0.5825        | 2.95  | 3200 | 0.6436          |
+ | 0.5921        | 3.13  | 3400 | 0.6457          |
+ | 0.5224        | 3.32  | 3600 | 0.6461          |
+ | 0.5466        | 3.5   | 3800 | 0.6456          |
+ | 0.5972        | 3.69  | 4000 | 0.6460          |
+ | 0.5999        | 3.87  | 4200 | 0.6456          |
+
+ ### Framework versions
+
+ - PEFT 0.8.2
+ - Transformers 4.38.0.dev0
+ - Pytorch 2.1.2+cu118
+ - Datasets 2.17.0
+ - Tokenizers 0.15.0
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b57675b830ee3bac4186c722ecd8a81b547ca05cd889123ce4ab957460418dc4
+ size 1195572114
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:602b0a01f234f5a285de488ce61335362e0ddfae88dad1d0daadf0d0bb945bbd
+ oid sha256:3f0b925d7e4f36eda6cd10e19a5ed097041fd0371874f91d9a0a0a0cf380fd3d
  size 1195470168