Spanish GPT-2 trained on large_spanish_corpus

This is a Spanish GPT-2 model trained from scratch on the large_spanish_corpus (aka BETO's corpus) with Flax. It is part of the Flax/JAX Community Week, organised by HuggingFace, with TPU usage sponsored by Google.
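A minimal text-generation sketch using the transformers library (the prompt and generation parameters below are illustrative, not from the original card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "mrm8488/spanish-gpt2"

# Load the tokenizer and the causal LM weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap them in a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate a short continuation for a Spanish prompt
output = generator("Érase una vez", max_length=50, do_sample=True, top_k=50)
print(output[0]["generated_text"])
```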

Dataset

The dataset is about 20 GB. 95% of the data was used for training and the remaining 5% for validation.
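A sketch of how such a 95%/5% split could be reproduced with the datasets library (the "combined" config name and the seed are assumptions, not from the original card; check the dataset card for the exact configuration used):

```python
from datasets import load_dataset

# Load the BETO corpus from the Hugging Face Hub
# ("combined" is an assumed config name; see the dataset card)
dataset = load_dataset("large_spanish_corpus", "combined", split="train")

# Hold out 5% of the data for validation
splits = dataset.train_test_split(test_size=0.05, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))
```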

Metrics (on evaluation dataset)

  • Loss: 2.413
  • Perplexity: 11.36
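For reference, perplexity for a causal language model is the exponential of the average cross-entropy loss. A quick check (not from the original card; the small gap with the reported value likely comes from rounding or from how the loss was averaged):

```python
import math

eval_loss = 2.413
print(math.exp(eval_loss))  # ≈ 11.17, in line with the reported perplexity of 11.36
```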

Team members

Useful links
