---
library_name: transformers
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: biobert_json
      type: biobert_json
      config: Biobert_json
      split: validation
      args: Biobert_json
    metrics:
    - name: Precision
      type: precision
      value: 0.9499079600602444
    - name: Recall
      type: recall
      value: 0.9645426224865478
    - name: F1
      type: f1
      value: 0.9571693552920016
    - name: Accuracy
      type: accuracy
      value: 0.977242282165256
---
# bert-base-spanish-wwm-uncased-finetuned-ner
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the biobert_json dataset. It achieves the following results on the evaluation set (a short inference sketch follows the list):
- Loss: 0.1255
- Precision: 0.9499
- Recall: 0.9645
- F1: 0.9572
- Accuracy: 0.9772
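
For a quick check of the checkpoint, the model can be loaded with the standard `transformers` token-classification pipeline. This is a minimal sketch: the repository id and the example sentence are illustrative assumptions, not taken from this card.

```python
from transformers import pipeline

# Assumed Hub repository id; replace with the actual path of this checkpoint.
model_id = "bert-base-spanish-wwm-uncased-finetuned-ner"

# aggregation_strategy="simple" merges B-/I- subword predictions into entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Illustrative Spanish clinical-style sentence.
print(ner("El paciente presenta fiebre y dolor abdominal desde hace tres días."))
```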
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
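
The settings above map directly onto `TrainingArguments` from `transformers`. The sketch below shows one plausible way to reproduce the configuration; the output directory and label count are assumptions, and the tokenized biobert_json splits (not shown here) would still need to be passed to a `Trainer`.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          TrainingArguments)

# Assumed label count; in practice it comes from the biobert_json tag set.
num_labels = 9

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "dccuchile/bert-base-spanish-wwm-uncased", num_labels=num_labels
)

args = TrainingArguments(
    output_dir="bert-base-spanish-wwm-uncased-finetuned-ner",  # assumed output dir
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    eval_strategy="epoch",  # matches the per-epoch validation rows reported below
)

# These arguments, the model, and the tokenized train/validation splits are then
# passed to transformers.Trainer, and training is started with trainer.train().
```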
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0412        | 1.0   | 612  | 0.1343          | 0.9401    | 0.9624 | 0.9512 | 0.9734   |
| 0.0632        | 2.0   | 1224 | 0.1082          | 0.9360    | 0.9654 | 0.9505 | 0.9746   |
| 0.0568        | 3.0   | 1836 | 0.1070          | 0.9469    | 0.9659 | 0.9563 | 0.9765   |
| 0.0486        | 4.0   | 2448 | 0.1104          | 0.9477    | 0.9669 | 0.9572 | 0.9771   |
| 0.0334        | 5.0   | 3060 | 0.1158          | 0.9425    | 0.9643 | 0.9533 | 0.9756   |
| 0.0311        | 6.0   | 3672 | 0.1238          | 0.9449    | 0.9644 | 0.9546 | 0.9753   |
| 0.0249        | 7.0   | 4284 | 0.1178          | 0.9473    | 0.9652 | 0.9561 | 0.9767   |
| 0.0245        | 8.0   | 4896 | 0.1244          | 0.9483    | 0.9656 | 0.9569 | 0.9772   |
| 0.0185        | 9.0   | 5508 | 0.1227          | 0.9492    | 0.9643 | 0.9567 | 0.9771   |
| 0.0165        | 10.0  | 6120 | 0.1255          | 0.9499    | 0.9645 | 0.9572 | 0.9772   |
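
The precision, recall, F1, and accuracy columns above are entity-level scores of the kind produced by the `seqeval` metric, which `generated_from_trainer` token-classification runs typically use. A minimal sketch of such a computation, with purely illustrative IOB-tagged labels and assuming the `evaluate` library:

```python
import evaluate

# seqeval reports entity-level precision/recall/F1 and token-level accuracy,
# matching the four metric columns in the table above.
seqeval = evaluate.load("seqeval")

# Illustrative IOB-tagged predictions and references (not from the real dataset).
predictions = [["O", "B-DISEASE", "I-DISEASE", "O"]]
references = [["O", "B-DISEASE", "I-DISEASE", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```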
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3