---
language: es
datasets:
- squad_es
base_model: PlanTL-GOB-ES/roberta-base-bne
---
# roberta-base es for QA
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset.
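## Usage
A minimal sketch of extractive QA with the 🤗 Transformers `pipeline`; the model identifier below is a placeholder for this repository's Hub ID, and the context/question are illustrative only.
```python
from transformers import pipeline

# Placeholder model ID: replace with this repository's actual Hub identifier.
qa = pipeline("question-answering", model="<this-model-repo-id>")

context = (
    "El Amazonas es un río de América del Sur que atraviesa Perú, Colombia y Brasil. "
    "Es el río más caudaloso del mundo."
)
result = qa(question="¿Cuál es el río más caudaloso del mundo?", context=context)
print(result)  # dict with 'score', 'start', 'end' and 'answer' keys
```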
## Hyperparameters
The hyperparameters were chosen based on those used in [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2), an English model trained for the same task.
```
--num_train_epochs 2
--learning_rate 3e-5
--max_seq_length 386
--doc_stride 128
```
## Performance
Evaluated on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) dev set.
```
eval_exact": 62.13526733007252,
eval_f1": 69.38515019522332,
eval_HasAns_exact": 53.07017543859649,
eval_HasAns_f1": 67.57238714827123,
eval_HasAns_total": 5928,
eval_NoAns_exact": 71.19730185497471,
eval_NoAns_f1": 71.19730185497471,
eval_NoAns_total": 5930,
```
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo)