# gpt2-jokes
This model is a fine-tuned version of gpt2 on the Fraser/short-jokes dataset. It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.8796
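
A minimal usage sketch with the transformers text-generation pipeline; the model id `"gpt2-jokes"` below is a placeholder for the actual hub repo id, and the sampling parameters are illustrative, not values from this card:

```python
from transformers import pipeline

# Placeholder id; substitute the actual hub repo id or a local checkpoint path.
generator = pipeline("text-generation", model="gpt2-jokes")

# Prompt with the start of a joke and sample a completion.
result = generator(
    "Why did the chicken",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
)
print(result[0]["generated_text"])
```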
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
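
The exact training setup is not documented. A minimal sketch of how a comparable fine-tune could be run with the Trainer API follows; the hyperparameters and the dataset text column name (`"Joke"`) are assumptions, not values from this card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("Fraser/short-jokes", split="train")
text_column = "Joke"  # assumed column name; adjust to the dataset schema

def tokenize(batch):
    return tokenizer(batch[text_column], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Causal LM objective: the model shifts labels internally, so mlm=False.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-jokes",
    per_device_train_batch_size=8,  # assumed, not documented in the card
    num_train_epochs=1,             # assumed
    learning_rate=5e-5,             # assumed (Trainer default)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```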