# XLM-RoBERTa-Large-PANX-WikiAnn-en

XLM-RoBERTa-Large finetuned for named entity recognition on the English split of the PAN-X (WikiAnn) dataset.

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the [google/xtreme](https://huggingface.co/datasets/google/xtreme) dataset (English split of the PAN-X).
It achieves the following results on the evaluation set (a short usage sketch follows the metrics):
- Loss: 0.2569
- Precision: 0.8347
- Recall: 0.8529
- F1: 0.8437
- Accuracy: 0.9357

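Since this is a token-classification (NER) model, it can be tried directly with the standard `transformers` pipeline. A minimal sketch; the Hub repository ID below is an assumption inferred from the card title, not confirmed by the source:

```python
from transformers import pipeline

# NOTE: the repo ID is an assumption; substitute the actual Hub ID of this model.
ner = pipeline(
    "token-classification",
    model="ShkalikovOleh/xlm-roberta-large-panx-wikiann-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("John Smith lives in Berlin."))
# Expect PER and LOC spans with scores, e.g. [{'entity_group': 'PER', ...}, ...]
```
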
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32 (train_batch_size × gradient_accumulation_steps)
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

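These values map one-to-one onto `transformers.TrainingArguments`. A minimal sketch of the corresponding configuration, assuming the standard `Trainer` API was used (the `output_dir` is a placeholder; only the hyperparameter values come from the list above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-panx-wikiann-en",  # placeholder assumption
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,                      # "Native AMP" mixed-precision training
    adam_beta1=0.9,                 # Adam betas and epsilon from the list above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
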
### Framework versions

- Transformers 4.44.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1