Model save

Files changed in this commit:
- README.md (+15 -15)
- all_results.json (+16 -16)
- eval_results.json (+12 -12)
- runs/Dec14_12-12-04_uclaml03.cs.ucla.edu/events.out.tfevents.1702669747.uclaml03.cs.ucla.edu.3104594.1 (+3 -0)
- train_results.json (+4 -4)
- trainer_state.json (+0 -0)
README.md CHANGED

@@ -15,15 +15,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Rewards/chosen: -0.
-- Rewards/rejected: -0.
-- Rewards/accuracies: 0.
-- Rewards/margins: 0.
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected: -
-- Logits/chosen: -2.
+- Loss: 0.5263
+- Rewards/chosen: -0.1493
+- Rewards/rejected: -0.8998
+- Rewards/accuracies: 0.7480
+- Rewards/margins: 0.7505
+- Logps/rejected: -228.2820
+- Logps/chosen: -266.1538
+- Logits/rejected: -1.9412
+- Logits/chosen: -2.0663
 
 ## Model description
 
@@ -43,13 +43,13 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-07
-- train_batch_size:
+- train_batch_size: 8
 - eval_batch_size: 4
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 4
-- gradient_accumulation_steps:
-- total_train_batch_size:
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 64
 - total_eval_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear

@@ -60,9 +60,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.
-| 0.
-| 0.
+| 0.5506 | 1.0 | 968 | 0.5556 | -0.1128 | -0.6425 | 0.7120 | 0.5297 | -225.7089 | -265.7884 | -1.9914 | -2.1123 |
+| 0.545 | 2.0 | 1937 | 0.5313 | -0.1468 | -0.8623 | 0.7440 | 0.7156 | -227.9077 | -266.1287 | -1.9506 | -2.0746 |
+| 0.5342 | 3.0 | 2904 | 0.5263 | -0.1493 | -0.8998 | 0.7480 | 0.7505 | -228.2820 | -266.1538 | -1.9412 | -2.0663 |
 
 
 ### Framework versions
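Note: the updated README numbers are internally consistent. Rewards/margins is simply Rewards/chosen minus Rewards/rejected, and the total train batch size follows from the per-device batch size, gradient accumulation steps, and device count listed above. A minimal Python check of this arithmetic (values copied from the table above; the script is illustrative only and not part of the repository):

```python
# Illustrative sanity check; values are copied from the updated README table.
rows = [
    # (rewards_chosen, rewards_rejected, rewards_margins) for epochs 1-3
    (-0.1128, -0.6425, 0.5297),
    (-0.1468, -0.8623, 0.7156),
    (-0.1493, -0.8998, 0.7505),
]
for chosen, rejected, margin in rows:
    # margins = chosen - rejected (up to rounding in the README)
    assert abs((chosen - rejected) - margin) < 1e-3

# total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
assert 8 * 2 * 4 == 64
print("README metrics and batch-size arithmetic are consistent.")
```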
all_results.json CHANGED

@@ -1,21 +1,21 @@
 {
     "epoch": 3.0,
-    "eval_logits/chosen": -2.
-    "eval_logits/rejected": -
-    "eval_logps/chosen": -
-    "eval_logps/rejected": -
-    "eval_loss": 0.
-    "eval_rewards/accuracies": 0.
-    "eval_rewards/chosen": -0.
-    "eval_rewards/margins": 0.
-    "eval_rewards/rejected": -0.
-    "eval_runtime":
+    "eval_logits/chosen": -2.0662753582000732,
+    "eval_logits/rejected": -1.9411602020263672,
+    "eval_logps/chosen": -266.15380859375,
+    "eval_logps/rejected": -228.28196716308594,
+    "eval_loss": 0.5263338685035706,
+    "eval_rewards/accuracies": 0.7480000257492065,
+    "eval_rewards/chosen": -0.14929771423339844,
+    "eval_rewards/margins": 0.7504671812057495,
+    "eval_rewards/rejected": -0.899764895439148,
+    "eval_runtime": 443.1683,
     "eval_samples": 2000,
-    "eval_samples_per_second": 4.
-    "eval_steps_per_second": 0.
-    "train_loss": 0.
-    "train_runtime":
+    "eval_samples_per_second": 4.513,
+    "eval_steps_per_second": 0.282,
+    "train_loss": 0.5640471254170105,
+    "train_runtime": 84492.9354,
     "train_samples": 61966,
-    "train_samples_per_second":
-    "train_steps_per_second": 0.
+    "train_samples_per_second": 2.2,
+    "train_steps_per_second": 0.034
 }
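The throughput figures in all_results.json also line up with the sample counts and runtimes reported alongside them, assuming throughput is simply work done divided by wall-clock seconds (the step count of 2904 comes from the README table above). A quick check under that assumption:

```python
# Rough consistency check; assumes throughput = samples (or steps) / wall-clock seconds.
eval_sps = 2000 / 443.1683             # ~4.513 -> eval_samples_per_second
train_sps = 61966 * 3.0 / 84492.9354   # ~2.2   -> train_samples_per_second (3 epochs)
steps_ps = 2904 / 84492.9354           # ~0.034 -> train_steps_per_second
print(round(eval_sps, 3), round(train_sps, 1), round(steps_ps, 3))
```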
eval_results.json CHANGED

@@ -1,16 +1,16 @@
 {
     "epoch": 3.0,
-    "eval_logits/chosen": -2.
-    "eval_logits/rejected": -
-    "eval_logps/chosen": -
-    "eval_logps/rejected": -
-    "eval_loss": 0.
-    "eval_rewards/accuracies": 0.
-    "eval_rewards/chosen": -0.
-    "eval_rewards/margins": 0.
-    "eval_rewards/rejected": -0.
-    "eval_runtime":
+    "eval_logits/chosen": -2.0662753582000732,
+    "eval_logits/rejected": -1.9411602020263672,
+    "eval_logps/chosen": -266.15380859375,
+    "eval_logps/rejected": -228.28196716308594,
+    "eval_loss": 0.5263338685035706,
+    "eval_rewards/accuracies": 0.7480000257492065,
+    "eval_rewards/chosen": -0.14929771423339844,
+    "eval_rewards/margins": 0.7504671812057495,
+    "eval_rewards/rejected": -0.899764895439148,
+    "eval_runtime": 443.1683,
     "eval_samples": 2000,
-    "eval_samples_per_second": 4.
-    "eval_steps_per_second": 0.
+    "eval_samples_per_second": 4.513,
+    "eval_steps_per_second": 0.282
 }
runs/Dec14_12-12-04_uclaml03.cs.ucla.edu/events.out.tfevents.1702669747.uclaml03.cs.ucla.edu.3104594.1 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ea514858ec6c1cb9fb871f9820dd154de5a9c4135265689d4c0077071bae501
+size 828
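The file added under runs/ is a Git LFS pointer for a TensorBoard event log. A minimal sketch for inspecting the log locally once the real file has been fetched (e.g. via `git lfs pull`); it assumes the `tensorboard` package is installed and is not part of this repository:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Path of the event file added in this commit (after pulling it from Git LFS).
path = (
    "runs/Dec14_12-12-04_uclaml03.cs.ucla.edu/"
    "events.out.tfevents.1702669747.uclaml03.cs.ucla.edu.3104594.1"
)

acc = EventAccumulator(path)
acc.Reload()                       # parse the event file
for tag in acc.Tags()["scalars"]:  # logged scalar series (e.g. losses, rewards)
    last = acc.Scalars(tag)[-1]
    print(tag, last.step, last.value)
```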
train_results.json CHANGED

@@ -1,8 +1,8 @@
 {
     "epoch": 3.0,
-    "train_loss": 0.
-    "train_runtime":
+    "train_loss": 0.5640471254170105,
+    "train_runtime": 84492.9354,
     "train_samples": 61966,
-    "train_samples_per_second":
-    "train_steps_per_second": 0.
+    "train_samples_per_second": 2.2,
+    "train_steps_per_second": 0.034
 }
trainer_state.json CHANGED

The diff for this file is too large to render; see the raw diff.