---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-psych_chatbot
  results: []
---
# gpt2-psych_chatbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.9593
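
A minimal usage sketch with 🤗 Transformers (the repo id below is an assumption based on this card's name; replace it with the actual Hub id or local path, and adjust the generation settings as needed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id / local path for this checkpoint -- adjust as needed.
model_id = "gpt2-psych_chatbot"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello, how are you feeling today?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; these decoding settings are illustrative only.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```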
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a matching configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
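
For reference, a `TrainingArguments` configuration reproducing the values above might look like the sketch below. Only the listed hyperparameters come from this card; the output path and evaluation strategy are assumptions.

```python
from transformers import TrainingArguments

# Values mirror the hyperparameters listed above; everything else
# (output_dir, evaluation_strategy, ...) is an illustrative assumption.
training_args = TrainingArguments(
    output_dir="gpt2-psych_chatbot",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumed: eval loss is reported once per epoch
)
```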
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.8353        | 1.0   | 651   | 4.8999          |
| 4.8713        | 2.0   | 1302  | 4.4407          |
| 4.4876        | 3.0   | 1953  | 4.0638          |
| 3.7647        | 4.0   | 2604  | 3.7308          |
| 3.3969        | 5.0   | 3255  | 3.4360          |
| 3.1115        | 6.0   | 3906  | 3.1895          |
| 2.4283        | 7.0   | 4557  | 2.9595          |
| 2.0813        | 8.0   | 5208  | 2.7442          |
| 1.7785        | 9.0   | 5859  | 2.5330          |
| 1.3337        | 10.0  | 6510  | 2.3619          |
| 1.066         | 11.0  | 7161  | 2.2079          |
| 0.9607        | 12.0  | 7812  | 2.1072          |
| 0.8078        | 13.0  | 8463  | 2.0385          |
| 0.6072        | 14.0  | 9114  | 1.9966          |
| 0.5351        | 15.0  | 9765  | 1.9736          |
| 0.5049        | 16.0  | 10416 | 1.9646          |
| 0.3958        | 17.0  | 11067 | 1.9607          |
| 0.346         | 18.0  | 11718 | 1.9592          |
| 0.3244        | 19.0  | 12369 | 1.9565          |
| 0.2777        | 20.0  | 13020 | 1.9593          |
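
Since this is a causal language model, the final evaluation loss can be read as a perplexity of roughly exp(1.9593) ≈ 7.09:

```python
import math

# Perplexity is exp(cross-entropy loss) for a causal LM.
eval_loss = 1.9593
print(math.exp(eval_loss))  # ~7.09
```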
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1