Liu-Xiang committed on
Commit 3a42011
1 Parent(s): 3134a66

LLM-Alchemy-Chamber/mistral-instruct-generation

README.md CHANGED
@@ -1,9 +1,9 @@
 ---
-license: apache-2.0
+base_model: mistralai/Mixtral-8x7B-v0.1
 library_name: peft
+license: apache-2.0
 tags:
 - generated_from_trainer
-base_model: mistralai/Mixtral-8x7B-v0.1
 model-index:
 - name: Mixtral_Alpace_v3
   results: []
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7630
+- Loss: 0.8517
 
 ## Model description
 
@@ -36,34 +36,34 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2.5e-05
-- train_batch_size: 10
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 0.03
+- lr_scheduler_warmup_steps: 3
 - training_steps: 100
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.9965        | 0.01  | 10   | 0.9608          |
-| 0.9611        | 0.02  | 20   | 0.9045          |
-| 0.8601        | 0.02  | 30   | 0.8574          |
-| 0.8382        | 0.03  | 40   | 0.8280          |
-| 0.8326        | 0.04  | 50   | 0.8072          |
-| 0.7815        | 0.05  | 60   | 0.7904          |
-| 0.796         | 0.06  | 70   | 0.7786          |
-| 0.7668        | 0.07  | 80   | 0.7701          |
-| 0.7774        | 0.07  | 90   | 0.7648          |
-| 0.7699        | 0.08  | 100  | 0.7630          |
+| 1.1536        | 0.03  | 10   | 1.1278          |
+| 1.0733        | 0.07  | 20   | 1.0587          |
+| 1.0201        | 0.1   | 30   | 0.9941          |
+| 0.9622        | 0.13  | 40   | 0.9509          |
+| 0.9268        | 0.16  | 50   | 0.9188          |
+| 0.8984        | 0.2   | 60   | 0.8944          |
+| 0.9067        | 0.23  | 70   | 0.8756          |
+| 0.8712        | 0.26  | 80   | 0.8622          |
+| 0.8485        | 0.3   | 90   | 0.8544          |
+| 0.8703        | 0.33  | 100  | 0.8517          |
 
 
 ### Framework versions
 
-- PEFT 0.9.1.dev0
+- PEFT 0.12.1.dev0
 - Transformers 4.36.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.18.0
+- Pytorch 2.2.2+cu121
+- Datasets 2.20.0
 - Tokenizers 0.15.2
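For context, below is a minimal sketch of the `transformers.TrainingArguments` that a standard `Trainer` run with the updated card's values might use. The `output_dir`, `evaluation_strategy`, `eval_steps`, and `logging_steps` choices are assumptions inferred from the results table (evaluation every 10 steps), not values recorded in this commit.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the updated model card.
# The optimizer line in the card (Adam, betas=(0.9, 0.999),
# epsilon=1e-08) corresponds to the Trainer's default settings,
# so it needs no explicit arguments here.
training_args = TrainingArguments(
    output_dir="Mixtral_Alpace_v3",  # assumption: output dir not shown in the commit
    learning_rate=2.5e-5,
    per_device_train_batch_size=16,  # raised from 10 in this commit
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=3,                  # corrected from the fractional 0.03
    max_steps=100,
    evaluation_strategy="steps",     # assumption: eval/log every 10 steps,
    eval_steps=10,                   # matching the results table above
    logging_steps=10,
)
```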
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
+        "q_proj",
+        "lm_head",
         "k_proj",
         "up_proj",
-        "q_proj",
+        "v_proj",
         "o_proj",
         "down_proj",
-        "lm_head",
-        "v_proj",
         "gate_proj"
     ],
     "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e6d7b3b0957ac0b7bf695d53bc989c422c608927ccd72d92de5ef0f8a27b7bb5
+oid sha256:3526a823dd72bb5ec2d942eee6418113aa5d5e3c17dd3638798095c3c3c885da
 size 751667752
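The updated adapter weights (roughly 750 MB) load on top of the frozen Mixtral base via PEFT. A minimal sketch, assuming the adapter is pulled from a Hub repo whose id is illustrative and may differ from the actual one:

```python
import torch
from peft import AutoPeftModelForCausalLM

# Sketch: AutoPeftModelForCausalLM reads base_model_name_or_path from
# adapter_config.json, fetches mistralai/Mixtral-8x7B-v0.1, and applies
# this adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Liu-Xiang/Mixtral_Alpace_v3",  # hypothetical repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```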
runs/Aug02_10-11-21_genertive-ai-workbench-0/events.out.tfevents.1722593484.genertive-ai-workbench-0.3881.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e66034b28d67c32863719026eb1e3a397e23118c758c1f04cdc46ed7a8258be
+size 4892

runs/Aug02_15-08-44_genertive-ai-workbench-0/events.out.tfevents.1722611325.genertive-ai-workbench-0.348.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ca87ca39caf5d5d1fb8517d71c2f3b57df47d0bcba347b75d19374fa0ca3f57
+size 4892

runs/Aug03_05-27-29_genertive-ai-workbench-0/events.out.tfevents.1722662851.genertive-ai-workbench-0.302.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0487daef055c7bd1247f0dbd61b872c4ac5392a4e94e857ec8a77dbed363b851
+size 4892

runs/Aug03_05-48-06_genertive-ai-workbench-0/events.out.tfevents.1722664088.genertive-ai-workbench-0.828.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7de4e7a40c0a7af8513a093b2d738dc6f8a407f668736094d594b4e73a0d1db4
+size 4892

runs/Aug03_06-43-15_genertive-ai-workbench-0/events.out.tfevents.1722667396.genertive-ai-workbench-0.1335.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f97e3d2bae844614c37b3748534afcf5b88ae5ed7b1d3792b15f3af4464feb18
+size 4889

runs/Aug03_07-07-56_genertive-ai-workbench-0/events.out.tfevents.1722668878.genertive-ai-workbench-0.1753.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c87cd6ad039415ad9823bafc50166fc37f55a9cb348458d19409295370dab761
+size 9437
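These tfevents files are TensorBoard logs from the individual training runs. To read the logged scalars programmatically rather than through the TensorBoard UI, something like the sketch below works; the scalar tag name is an assumption, since it depends on how the Trainer logged metrics.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Sketch: read one run's event file directly. Point the accumulator at
# the run directory; Reload() parses the tfevents file inside it.
ea = EventAccumulator("runs/Aug03_07-07-56_genertive-ai-workbench-0")
ea.Reload()
print(ea.Tags()["scalars"])            # lists the available scalar tags
for event in ea.Scalars("eval/loss"):  # assumption: tag name may differ
    print(event.step, event.value)
```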
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c954fb7eb42f3f555738afc9de64de8c969c8194d16790ace3d1f155aae492b9
-size 4283
+oid sha256:89f2f6393370d2e7fdd31fd6345d28030534c309c506d5df6df0daa02ea55bfe
+size 4728
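training_args.bin is the pickled `TrainingArguments` object the Trainer saves alongside checkpoints. A small sketch for inspecting its contents after downloading the file:

```python
import torch

# training_args.bin is a pickled TrainingArguments object, not a tensor
# checkpoint, so torch.load needs full unpickling (pass weights_only=False
# on PyTorch versions where weights_only defaults to True).
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.max_steps)
```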