AhsenKacar committed
Commit 3f203f9
1 Parent(s): 035d801

Training in progress epoch 0

README.md CHANGED
@@ -1,22 +1,24 @@
  ---
- base_model: AhsenKacar/TurkishReviews-ds-mini
  library_name: transformers
  license: mit
+ base_model: gpt2
  tags:
  - generated_from_keras_callback
  model-index:
- - name: TurkishReviews-ds-mini
+ - name: AhsenKacar/TurkishReviews-ds-mini
    results: []
  ---

  <!-- This model card has been generated automatically according to the information Keras had access to. You should
  probably proofread and complete it, then remove this comment. -->

- # TurkishReviews-ds-mini
+ # AhsenKacar/TurkishReviews-ds-mini

- This model is a fine-tuned version of [AhsenKacar/TurkishReviews-ds-mini](https://huggingface.co/AhsenKacar/TurkishReviews-ds-mini) on an unknown dataset.
+ This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
  It achieves the following results on the evaluation set:
-
+ - Train Loss: 8.7685
+ - Validation Loss: 7.6386
+ - Epoch: 0

  ## Model description

@@ -35,11 +37,14 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -998, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 43, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
  - training_precision: mixed_float16

  ### Training results

+ | Train Loss | Validation Loss | Epoch |
+ |:----------:|:---------------:|:-----:|
+ | 8.7685     | 7.6386          | 0     |


  ### Framework versions
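The optimizer entry above is the serialized form of `AdamWeightDecay` from `transformers.optimization_tf`, with a `WarmUp` wrapper around a `PolynomialDecay` schedule, trained under `mixed_float16`. Below is a minimal sketch of how such a setup is commonly built with `create_optimizer`; the total step count is an assumption inferred from `warmup_steps=1000` plus `decay_steps=43`, not a value the card states.

```python
import tensorflow as tf
from transformers import create_optimizer

# Matches training_precision: mixed_float16; under this policy Keras wraps the
# optimizer in a LossScaleOptimizer when the model is compiled.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a WarmUp -> PolynomialDecay learning-rate schedule.
# num_train_steps = warmup_steps + decay_steps (1000 + 43) is inferred from the
# serialized schedule, not recorded explicitly in the card.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=1043,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```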
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "AhsenKacar/TurkishReviews-ds-mini",
+ "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
  "GPT2LMHeadModel"
@@ -34,5 +34,5 @@
  },
  "transformers_version": "4.44.2",
  "use_cache": true,
- "vocab_size": 44208
+ "vocab_size": 52000
  }
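The config keeps the stock `gpt2` architecture but sets `vocab_size` to 52000, which is what you would expect when the embedding table is sized to a custom Turkish tokenizer rather than GPT-2's original 50257-token vocabulary. A hedged sketch of one way such a configuration could be produced; the tokenizer repo id is an assumption, and the commit does not say which route was actually taken.

```python
from transformers import AutoConfig, AutoTokenizer, TFGPT2LMHeadModel

# Assumption: a custom 52,000-token tokenizer lives in this same repo.
tokenizer = AutoTokenizer.from_pretrained("AhsenKacar/TurkishReviews-ds-mini")

# Reuse the gpt2 architecture, but size the embeddings to the new vocabulary.
config = AutoConfig.from_pretrained("gpt2", vocab_size=len(tokenizer))
model = TFGPT2LMHeadModel(config)  # fresh weights; embeddings match vocab_size=52000
```

An alternative is to load the pretrained `gpt2` weights and call `model.resize_token_embeddings(len(tokenizer))`.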
special_tokens_map.json CHANGED
@@ -13,6 +13,7 @@
  "rstrip": false,
  "single_word": false
  },
+ "pad_token": "<|endoftext|>",
  "unk_token": {
  "content": "<|endoftext|>",
  "lstrip": false,
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:021f21f66b68414a34de686180afc76d171f6203a85a2d3a06a81a976317eab9
- size 479352912
+ oid sha256:a0f55340b50784449b5cf6d58607c29975f4641942b787618d8091c31e7a792f
+ size 503289936
tokenizer.json CHANGED
@@ -1,6 +1,11 @@
  {
  "version": "1.0",
- "truncation": null,
+ "truncation": {
+ "direction": "Right",
+ "max_length": 40,
+ "strategy": "LongestFirst",
+ "stride": 0
+ },
  "padding": null,
  "added_tokens": [
  {
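The new `truncation` block is the state a fast (Rust-backed) tokenizer serializes once truncation is enabled with a maximum length of 40; `"Right"`, `"LongestFirst"`, and `stride: 0` are the defaults. A small sketch using the `tokenizers` library directly, assuming the file is edited in place:

```python
from tokenizers import Tokenizer

# Load the serialized fast tokenizer, enable truncation, and save it back.
# The enabled state is what ends up under "truncation" in tokenizer.json.
tok = Tokenizer.from_file("tokenizer.json")
tok.enable_truncation(max_length=40)  # direction/strategy/stride keep their defaults
tok.save("tokenizer.json")
```

In a transformers preprocessing pipeline the same block can also appear simply as a side effect of calling the tokenizer with `truncation=True, max_length=40` before `save_pretrained`.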
tokenizer_config.json CHANGED
@@ -16,7 +16,7 @@
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "model_max_length": 1024,
- "pad_token": null,
+ "pad_token": "<|endoftext|>",
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>"
  }
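Both `tokenizer_config.json` and `special_tokens_map.json` now declare `<|endoftext|>` as the padding token. GPT-2 tokenizers ship without a pad token, so reusing the EOS token is the usual workaround before padded, batched training. A minimal sketch of the step that produces this serialization; the tokenizer source and save directory are assumptions.

```python
from transformers import AutoTokenizer

# Assumption: load the repo's own tokenizer (it has no pad token yet).
tokenizer = AutoTokenizer.from_pretrained("AhsenKacar/TurkishReviews-ds-mini")

# Reuse EOS as the pad token so batches can be padded to equal length.
tokenizer.pad_token = tokenizer.eos_token

# Saving writes pad_token into tokenizer_config.json and special_tokens_map.json.
tokenizer.save_pretrained("TurkishReviews-ds-mini")
```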