---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews-ds-large
results: []
---
# turkishReviews-ds-large
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results after the final epoch (epoch 2):
- Train Loss: 5.5266
- Validation Loss: 5.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
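
Pending a fuller description of intended uses, the checkpoint can be loaded with the TensorFlow GPT-2 classes for text generation. A minimal sketch; the repo id and the Turkish prompt below are placeholders, not confirmed by this card:

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel

# Placeholder repo id -- substitute the actual Hub path for this model.
model_id = "turkishReviews-ds-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFGPT2LMHeadModel.from_pretrained(model_id)

# "Bu ürün" ("This product") is an arbitrary Turkish prompt for illustration.
inputs = tokenizer("Bu ürün", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```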
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
  - learning rate: WarmUp schedule (warmup_steps: 1000, power: 1.0) wrapping a PolynomialDecay (initial_learning_rate: 5e-05, decay_steps: 10239, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay_rate: 0.01
- training_precision: mixed_float16
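
The optimizer config above matches what `transformers.create_optimizer` produces (AdamWeightDecay with a linear-warmup, polynomial-decay schedule). A minimal sketch of rebuilding it, assuming `create_optimizer` was indeed the entry point used for this run:

```python
import tensorflow as tf
from transformers import create_optimizer

# Match the reported training_precision.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Values copied from the optimizer config above; the remaining arguments
# (beta_1=0.9, beta_2=0.999, epsilon=1e-08, power=1.0) are
# create_optimizer's defaults.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=10239,   # decay_steps in the config
    num_warmup_steps=1000,   # warmup_steps in the config
    weight_decay_rate=0.01,
)
```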
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.3071 | 5.4973 | 0 |
| 5.5267 | 5.4973 | 1 |
| 5.5266 | 5.4973 | 2 |
### Framework versions
- Transformers 4.42.3
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
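
To reproduce this environment, the versions above can be pinned with pip (a sketch; the Python version used is not recorded here):

```bash
pip install "transformers==4.42.3" "tensorflow==2.15.0" "datasets==2.20.0" "tokenizers==0.19.1"
```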