---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- cnec
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: CNEC1_1_extended_xlm-roberta-large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cnec
type: cnec
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.8456410256410256
- name: Recall
type: recall
value: 0.8813468733297701
- name: F1
type: f1
value: 0.8631248364302538
- name: Accuracy
type: accuracy
value: 0.9673435458971619
---
# CNEC1_1_extended_xlm-roberta-large
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2299
- Precision: 0.8456
- Recall: 0.8813
- F1: 0.8631
- Accuracy: 0.9673
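
Entity-level precision/recall/F1 and token accuracy for token-classification cards like this one are typically computed with the seqeval metric. The sketch below is an assumption for reference; the card does not state the exact evaluation code, and the labels shown are toy IOB2 tags, not the CNEC tag set.

```python
import evaluate

# Load the seqeval metric (assumption: this is how the reported
# precision/recall/F1/accuracy were computed).
seqeval = evaluate.load("seqeval")

# Toy IOB2-tagged sequences (hypothetical labels, not the CNEC tag set).
predictions = [["B-PER", "I-PER", "O", "B-LOC"]]
references = [["B-PER", "I-PER", "O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print({k: results[k] for k in (
    "overall_precision", "overall_recall", "overall_f1", "overall_accuracy")})
```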
## Model description
This is a token classification (named entity recognition) model for Czech, obtained by fine-tuning [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the cnec dataset. As the model name suggests, the data corresponds to the extended variant of the Czech Named Entity Corpus (CNEC) 1.1.
## Intended uses & limitations
The model is intended for named entity recognition on Czech text. It has only been evaluated on the cnec validation split reported above, so its behaviour on other domains, tag sets, or languages has not been assessed here.
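
A minimal usage sketch with the transformers token-classification pipeline; the repository id is assumed from the model name above and should be replaced with the actual Hub path:

```python
from transformers import pipeline

# Repository id assumed from the model name; adjust to the real Hub path.
ner = pipeline(
    "token-classification",
    model="CNEC1_1_extended_xlm-roberta-large",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# Czech example sentence: "Václav Havel was born in Prague."
print(ner("Václav Havel se narodil v Praze."))
```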
## Training and evaluation data
The model was trained on the cnec dataset (default configuration) and evaluated on its validation split, which is the source of the metrics reported in this card and in the metadata block above.
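
A minimal loading sketch with the datasets library, using the identifiers from the metadata block; note that the `cnec` id may resolve to a local loading script rather than a Hub dataset:

```python
from datasets import load_dataset

# Dataset id and config taken from the card metadata; treat as a sketch.
dataset = load_dataset("cnec", name="default")
train_data = dataset["train"]       # used for fine-tuning
eval_data = dataset["validation"]   # split behind the reported metrics
```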
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
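
The list above maps onto a transformers TrainingArguments sketch as follows; `output_dir` is a placeholder, and when both warmup settings are given, `warmup_steps` takes precedence over `warmup_ratio`:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="CNEC1_1_extended_xlm-roberta-large",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    warmup_steps=500,  # overrides warmup_ratio when non-zero
    num_train_epochs=10,
)
```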
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5516 | 0.86 | 500 | 0.1912 | 0.7007 | 0.7857 | 0.7407 | 0.9493 |
| 0.2153 | 1.72 | 1000 | 0.1856 | 0.6609 | 0.7825 | 0.7166 | 0.9461 |
| 0.1389 | 2.58 | 1500 | 0.1711 | 0.7791 | 0.8445 | 0.8105 | 0.9574 |
| 0.1098        | 3.44  | 2000 | 0.1943          | 0.8171    | 0.8642 | 0.8400 | 0.9608   |
| 0.0785        | 4.30  | 2500 | 0.2197          | 0.7919    | 0.8461 | 0.8181 | 0.9579   |
| 0.0619 | 5.16 | 3000 | 0.1877 | 0.8298 | 0.8883 | 0.8580 | 0.9660 |
| 0.0430        | 6.02  | 3500 | 0.2185          | 0.8412    | 0.8803 | 0.8603 | 0.9656   |
| 0.0289 | 6.88 | 4000 | 0.1898 | 0.8422 | 0.8846 | 0.8629 | 0.9674 |
| 0.0179 | 7.75 | 4500 | 0.2061 | 0.8433 | 0.8830 | 0.8627 | 0.9674 |
| 0.0112 | 8.61 | 5000 | 0.2218 | 0.8462 | 0.8819 | 0.8636 | 0.9656 |
| 0.0074 | 9.47 | 5500 | 0.2299 | 0.8456 | 0.8813 | 0.8631 | 0.9673 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0