
# 5e-6_xlm-R-xl_Conspiracy_training_with_callbacks

This model is a fine-tuned version of [facebook/xlm-roberta-xl](https://huggingface.co/facebook/xlm-roberta-xl) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0624
- Macro F1: 0.9918
- Micro F1: 0.9919
- Accuracy: 0.9919
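
The snippet below is an illustrative sketch, not part of the original card: it shows how a checkpoint like this could be loaded for inference with Transformers. The repo id is a placeholder, and the number and names of the labels are not documented here.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
model_id = "your-username/5e-6_xlm-R-xl_Conspiracy_training_with_callbacks"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Text to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names depend on the (undocumented) training setup
```

Note that xlm-roberta-xl has roughly 3.5 billion parameters, so loading it in full precision takes on the order of 14 GB of memory; passing `torch_dtype=torch.float16` to `from_pretrained` roughly halves that.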

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
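
As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below. This is an assumption-laden reconstruction: the training data, label count, and tokenization are undocumented (a two-example toy dataset stands in so the sketch runs), and the early-stopping callback is inferred only from the "with_callbacks" suffix in the model name. The listed Adam settings (betas=(0.9, 0.999), epsilon=1e-08) are the Trainer's AdamW defaults, so they need no explicit flags.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base = "facebook/xlm-roberta-xl"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # label count is an assumption

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

# The actual training data is undocumented; this toy set only makes the sketch runnable.
toy = Dataset.from_dict({"text": ["example one", "example two"], "label": [0, 1]})
train_ds = eval_ds = toy.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="5e-6_xlm-R-xl_Conspiracy_training_with_callbacks",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    # "with_callbacks" in the model name suggests early stopping; the patience value is a guess.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```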

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Macro F1 | Micro F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|
| 0.6122        | 1.0   | 502   | 0.2712          | 0.9549   | 0.9559   | 0.9559   |
| 0.1503        | 2.0   | 1004  | 0.1047          | 0.9741   | 0.9744   | 0.9744   |
| 0.0508        | 3.0   | 1506  | 0.0734          | 0.9835   | 0.9837   | 0.9837   |
| 0.0226        | 4.0   | 2008  | 0.0837          | 0.9811   | 0.9814   | 0.9814   |
| 0.014         | 5.0   | 2510  | 0.0562          | 0.9882   | 0.9884   | 0.9884   |
| 0.0014        | 6.0   | 3012  | 0.0514          | 0.9894   | 0.9895   | 0.9895   |
| 0.0016        | 7.0   | 3514  | 0.0501          | 0.9918   | 0.9919   | 0.9919   |
| 0.0002        | 8.0   | 4016  | 0.0554          | 0.9918   | 0.9919   | 0.9919   |
| 0.0001        | 9.0   | 4518  | 0.0607          | 0.9906   | 0.9907   | 0.9907   |
| 0.0001        | 10.0  | 5020  | 0.0856          | 0.9859   | 0.9861   | 0.9861   |
| 0.0143        | 11.0  | 5522  | 0.0377          | 0.9929   | 0.9930   | 0.9930   |
| 0.0001        | 12.0  | 6024  | 0.0538          | 0.9918   | 0.9919   | 0.9919   |
| 0.0001        | 13.0  | 6526  | 0.0568          | 0.9918   | 0.9919   | 0.9919   |
| 0.0012        | 14.0  | 7028  | 0.0582          | 0.9918   | 0.9919   | 0.9919   |
| 0.0           | 15.0  | 7530  | 0.0500          | 0.9929   | 0.9930   | 0.9930   |
| 0.0001        | 16.0  | 8032  | 0.0649          | 0.9918   | 0.9919   | 0.9919   |
| 0.0           | 17.0  | 8534  | 0.0649          | 0.9918   | 0.9919   | 0.9919   |
| 0.0           | 18.0  | 9036  | 0.0648          | 0.9918   | 0.9919   | 0.9919   |
| 0.0009        | 19.0  | 9538  | 0.0621          | 0.9918   | 0.9919   | 0.9919   |
| 0.0           | 20.0  | 10040 | 0.0624          | 0.9918   | 0.9919   | 0.9919   |
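
The Macro F1, Micro F1, and Accuracy columns are consistent with a `compute_metrics` function along these lines (a sketch, not necessarily the author's code), passed to the `Trainer` shown above as `compute_metrics=compute_metrics`:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred unpacks to (logits, label_ids) for a sequence-classification head.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "macro_f1": f1_score(labels, preds, average="macro"),
        "micro_f1": f1_score(labels, preds, average="micro"),
    }
```

For single-label classification, micro-averaged F1 equals plain accuracy, which is why the Micro F1 and Accuracy columns are identical in every row above.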

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1