Karzan committed on
Commit
49ececa
1 Parent(s): 9024169

End of training

Files changed (5)
  1. README.md +11 -42
  2. config.json +1 -1
  3. generation_config.json +1 -1
  4. pytorch_model.bin +2 -2
  5. training_args.bin +2 -2
README.md CHANGED
@@ -15,7 +15,12 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [Karzan/gpt2-walamakan-2](https://huggingface.co/Karzan/gpt2-walamakan-2) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 6.9220
+ - eval_loss: 6.8668
+ - eval_runtime: 6.2614
+ - eval_samples_per_second: 15.971
+ - eval_steps_per_second: 7.985
+ - epoch: 13.0
+ - step: 4881
 
  ## Model description
 
@@ -35,54 +40,18 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 3e-05
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 2
+ - eval_batch_size: 2
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 64
+ - total_train_batch_size: 8
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 30
 
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.2697 | 1.0 | 47 | 6.7283 |
- | 0.2562 | 2.0 | 94 | 6.7642 |
- | 0.2491 | 3.0 | 141 | 6.7544 |
- | 0.2403 | 4.0 | 188 | 6.7617 |
- | 0.2332 | 5.0 | 235 | 6.7501 |
- | 0.2287 | 6.0 | 282 | 6.7719 |
- | 0.2178 | 7.0 | 329 | 6.7966 |
- | 0.2111 | 8.0 | 376 | 6.8080 |
- | 0.2051 | 9.0 | 423 | 6.8298 |
- | 0.1984 | 10.0 | 470 | 6.8288 |
- | 0.1933 | 11.0 | 517 | 6.8321 |
- | 0.1896 | 12.0 | 564 | 6.8422 |
- | 0.1829 | 13.0 | 611 | 6.8685 |
- | 0.1762 | 14.0 | 658 | 6.8504 |
- | 0.1757 | 15.0 | 705 | 6.8636 |
- | 0.1695 | 16.0 | 752 | 6.8704 |
- | 0.165 | 17.0 | 799 | 6.8803 |
- | 0.1617 | 18.0 | 846 | 6.8826 |
- | 0.159 | 19.0 | 893 | 6.8774 |
- | 0.1557 | 20.0 | 940 | 6.8872 |
- | 0.152 | 21.0 | 987 | 6.8998 |
- | 0.1473 | 22.0 | 1034 | 6.8998 |
- | 0.1455 | 23.0 | 1081 | 6.9136 |
- | 0.1425 | 24.0 | 1128 | 6.9149 |
- | 0.1392 | 25.0 | 1175 | 6.9105 |
- | 0.1395 | 26.0 | 1222 | 6.9167 |
- | 0.136 | 27.0 | 1269 | 6.9145 |
- | 0.1355 | 28.0 | 1316 | 6.9185 |
- | 0.1335 | 29.0 | 1363 | 6.9192 |
- | 0.1322 | 30.0 | 1410 | 6.9220 |
-
-
  ### Framework versions
 
- - Transformers 4.32.0
- - Pytorch 2.1.0.dev20230605+cu121
+ - Transformers 4.32.1
+ - Pytorch 2.0.1+cu118
  - Datasets 2.14.4
  - Tokenizers 0.13.3
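The updated hyperparameters imply an effective batch size of train_batch_size × gradient_accumulation_steps = 2 × 4 = 8, which matches the listed total_train_batch_size, and the new eval_loss of 6.8668 corresponds to a perplexity of roughly exp(6.8668) ≈ 960 assuming the standard mean causal-LM cross-entropy. As a rough illustration only (the training script itself is not part of this commit, and the output directory below is a placeholder), the listed values map onto `transformers.TrainingArguments` like this:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed in the card.
# output_dir is a hypothetical placeholder; the actual script is not in this commit.
args = TrainingArguments(
    output_dir="gpt2-walamakan-2-finetune",
    learning_rate=3e-05,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=4,   # effective batch size: 2 * 4 = 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```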
config.json CHANGED
@@ -26,7 +26,7 @@
   "summary_type": "cls_index",
   "summary_use_proj": true,
   "torch_dtype": "float32",
- "transformers_version": "4.32.0",
+ "transformers_version": "4.32.1",
   "use_cache": true,
   "vocab_size": 184789
  }
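The only change to config.json is the transformers_version stamp (4.32.0 → 4.32.1); the architecture fields, including the 184789-token vocabulary, are untouched. A minimal sketch of inspecting the pushed config, assuming the weights live in the Karzan/gpt2-walamakan-2 repo named in the card (substitute the actual repo id if it differs):

```python
from transformers import AutoConfig

# Load the model config from the Hub; the repo id below is the base model
# named in the card and may need to be replaced with this repository's id.
config = AutoConfig.from_pretrained("Karzan/gpt2-walamakan-2")
print(config.vocab_size)            # 184789 in the config shown above
print(config.transformers_version)  # "4.32.1" after this commit
```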
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 0,
   "eos_token_id": 0,
- "transformers_version": "4.32.0"
+ "transformers_version": "4.32.1"
  }
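generation_config.json likewise only gets a version bump; it still pins both bos_token_id and eos_token_id to 0, so generate() treats token 0 as the end-of-sequence marker by default. A minimal sketch of the equivalent in-code object:

```python
from transformers import GenerationConfig

# Mirror of the generation_config.json above: token 0 serves as both BOS and EOS.
gen_config = GenerationConfig(bos_token_id=0, eos_token_id=0)
print(gen_config.eos_token_id)  # 0
```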
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:797010b0cad8c4e1d13d37e8d571d46451965da2b8a6061561052d8b8092e4da
- size 854379130
+ oid sha256:9c25f6da097c90e3800730a1a64451bae23689f25da9284743bf0bfae0f07629
+ size 854378685
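pytorch_model.bin is stored via Git LFS, so the diff only shows the new SHA-256 object id and byte size of the pointer rather than the weights themselves. A minimal sketch of verifying a locally downloaded copy against that pointer (the local path is a placeholder):

```python
import hashlib

# Compute the SHA-256 of a local file in chunks and compare it to the LFS pointer's oid.
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("pytorch_model.bin")  # hypothetical local path
print(digest == "9c25f6da097c90e3800730a1a64451bae23689f25da9284743bf0bfae0f07629")
```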
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bc147e07f8fdde37b903b4976ca1ee40f9d2e4008c92304b184ca10e340d5ff2
- size 4472
+ oid sha256:cbac9accf54c62a7bef980227ae76577e7663496ae25a08d1c8fe8eca3ed1860
+ size 4027
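training_args.bin is the pickled TrainingArguments object that the Trainer saves alongside the weights, which is why its size changes whenever the run configuration does. A minimal sketch of inspecting a trusted local copy (it is deserialized with pickle under the hood, so only load files from sources you trust):

```python
import torch

# Unpickle the saved TrainingArguments and check a couple of the values listed in the card.
args = torch.load("training_args.bin")  # hypothetical local path
print(args.per_device_train_batch_size)  # expected: 2, per the card
print(args.num_train_epochs)             # expected: 30
```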