---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MSPoliBERT-12
  results: []
---

# MSPoliBERT-12

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2936
- Democracy F1: 0.9392
- Democracy Accuracy: 0.9426
- Economy F1: 0.9141
- Economy Accuracy: 0.9156
- Race F1: 0.9303
- Race Accuracy: 0.9331
- Leadership F1: 0.7696
- Leadership Accuracy: 0.7688
- Development F1: 0.8747
- Development Accuracy: 0.8790
- Corruption F1: 0.9411
- Corruption Accuracy: 0.9441
- Instability F1: 0.9093
- Instability Accuracy: 0.9141
- Safety F1: 0.9291
- Safety Accuracy: 0.9305
- Administration F1: 0.8768
- Administration Accuracy: 0.8853
- Education F1: 0.9538
- Education Accuracy: 0.9557
- Religion F1: 0.9338
- Religion Accuracy: 0.9349
- Environment F1: 0.9807
- Environment Accuracy: 0.9819
- Overall F1: 0.9127
- Overall Accuracy: 0.9155

## Model description

MSPoliBERT-12 is [bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned to score text on twelve political topics: Democracy, Economy, Race, Leadership, Development, Corruption, Instability, Safety, Administration, Education, Religion, and Environment. The overall F1 and accuracy reported above are the unweighted means of the twelve per-topic values.

## Intended uses & limitations

More information needed
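
No usage guidance is shipped with this card. The sketch below is one plausible way to run inference, assuming the checkpoint is a multi-label sequence classifier over the twelve topics listed above; the repository id, label order, and 0.5 threshold are assumptions, not confirmed by the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id; replace with the actual hub path of this checkpoint.
MODEL_ID = "MSPoliBERT-12"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

text = "The new budget prioritises rural schools and anti-corruption enforcement."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

# Multi-label reading: independent sigmoid per topic (assumption), 0.5 cutoff.
probs = torch.sigmoid(logits)[0]
id2label = model.config.id2label  # may be generic LABEL_0..LABEL_11 if unset
predicted = [id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```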

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
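
The training script itself is not published; the settings above map onto the Hugging Face `Trainer` roughly as below. The output directory is illustrative, and any model/data wiring is omitted.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above, nothing more.
training_args = TrainingArguments(
    output_dir="MSPoliBERT-12",        # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # effective train batch size: 16
    num_train_epochs=8,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                         # "Native AMP" mixed precision
)
```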

### Training results

| Training Loss | Epoch | Step | Validation Loss | Democracy F1 | Democracy Accuracy | Economy F1 | Economy Accuracy | Race F1 | Race Accuracy | Leadership F1 | Leadership Accuracy | Development F1 | Development Accuracy | Corruption F1 | Corruption Accuracy | Instability F1 | Instability Accuracy | Safety F1 | Safety Accuracy | Administration F1 | Administration Accuracy | Education F1 | Education Accuracy | Religion F1 | Religion Accuracy | Environment F1 | Environment Accuracy | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:------------------:|:----------:|:----------------:|:-------:|:-------------:|:-------------:|:-------------------:|:--------------:|:--------------------:|:-------------:|:-------------------:|:--------------:|:--------------------:|:---------:|:---------------:|:-----------------:|:-----------------------:|:------------:|:------------------:|:-----------:|:-----------------:|:--------------:|:--------------------:|:----------:|:----------------:|
| 0.4282        | 1.0   | 841  | 0.2914          | 0.9080       | 0.9293             | 0.8960     | 0.9088           | 0.9066  | 0.9221        | 0.7142        | 0.7328              | 0.8409         | 0.8585               | 0.9253        | 0.9287              | 0.9013         | 0.9076               | 0.9076    | 0.9097          | 0.8349            | 0.8651                  | 0.9376       | 0.9483             | 0.9147      | 0.9233            | 0.9671         | 0.9744               | 0.8878     | 0.9007           |
| 0.2346        | 2.0   | 1682 | 0.2568          | 0.9172       | 0.9364             | 0.9016     | 0.9105           | 0.9172  | 0.9254        | 0.7547        | 0.7652              | 0.8586         | 0.8648               | 0.9265        | 0.9346              | 0.8974         | 0.9111               | 0.9272    | 0.9296          | 0.8539            | 0.8802                  | 0.9451       | 0.9519             | 0.9264      | 0.9308            | 0.9767         | 0.9786               | 0.9002     | 0.9099           |
| 0.1601        | 3.0   | 2523 | 0.2519          | 0.9260       | 0.9355             | 0.9108     | 0.9186           | 0.9228  | 0.9278        | 0.7575        | 0.7620              | 0.8748         | 0.8808               | 0.9360        | 0.9415              | 0.9067         | 0.9135               | 0.9285    | 0.9316          | 0.8609            | 0.8799                  | 0.9518       | 0.9560             | 0.9301      | 0.9337            | 0.9801         | 0.9810               | 0.9072     | 0.9135           |
| 0.1169        | 4.0   | 3364 | 0.2627          | 0.9315       | 0.9412             | 0.9120     | 0.9192           | 0.9214  | 0.9284        | 0.7637        | 0.7646              | 0.8757         | 0.8799               | 0.9411        | 0.9459              | 0.9071         | 0.9123               | 0.9296    | 0.9328          | 0.8685            | 0.8820                  | 0.9512       | 0.9542             | 0.9335      | 0.9364            | 0.9802         | 0.9810               | 0.9096     | 0.9148           |
| 0.0798        | 5.0   | 4205 | 0.2729          | 0.9368       | 0.9412             | 0.9129     | 0.9159           | 0.9284  | 0.9328        | 0.7642        | 0.7652              | 0.8760         | 0.8799               | 0.9414        | 0.9435              | 0.9078         | 0.9126               | 0.9277    | 0.9290          | 0.8703            | 0.8743                  | 0.9565       | 0.9581             | 0.9323      | 0.9349            | 0.9799         | 0.9801               | 0.9112     | 0.9140           |
| 0.0565        | 6.0   | 5046 | 0.2821          | 0.9357       | 0.9403             | 0.9144     | 0.9159           | 0.9266  | 0.9284        | 0.7687        | 0.7685              | 0.8748         | 0.8785               | 0.9384        | 0.9403              | 0.9115         | 0.9153               | 0.9266    | 0.9299          | 0.8693            | 0.8814                  | 0.9557       | 0.9578             | 0.9321      | 0.9325            | 0.9790         | 0.9813               | 0.9111     | 0.9142           |
| 0.0443        | 7.0   | 5887 | 0.2914          | 0.9375       | 0.9406             | 0.9150     | 0.9156           | 0.9293  | 0.9322        | 0.7719        | 0.7715              | 0.8727         | 0.8767               | 0.9412        | 0.9447              | 0.9103         | 0.9144               | 0.9292    | 0.9316          | 0.8761            | 0.8832                  | 0.9558       | 0.9569             | 0.9322      | 0.9334            | 0.9797         | 0.9813               | 0.9126     | 0.9152           |
| 0.0361        | 8.0   | 6728 | 0.2936          | 0.9392       | 0.9426             | 0.9141     | 0.9156           | 0.9303  | 0.9331        | 0.7696        | 0.7688              | 0.8747         | 0.8790               | 0.9411        | 0.9441              | 0.9093         | 0.9141               | 0.9291    | 0.9305          | 0.8768            | 0.8853                  | 0.9538       | 0.9557             | 0.9338      | 0.9349            | 0.9807         | 0.9819               | 0.9127     | 0.9155           |
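
The card does not say how the per-topic scores are computed, but the reported overall numbers match the unweighted mean of the twelve per-topic values. A scikit-learn sketch under that reading (binary per-topic labels and column layout are assumptions):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

TOPICS = ["Democracy", "Economy", "Race", "Leadership", "Development",
          "Corruption", "Instability", "Safety", "Administration",
          "Education", "Religion", "Environment"]

def per_topic_scores(y_true, y_pred):
    """y_true, y_pred: (n_samples, 12) binary arrays, one column per topic (assumed layout)."""
    scores = {}
    for i, topic in enumerate(TOPICS):
        scores[f"{topic} F1"] = f1_score(y_true[:, i], y_pred[:, i])
        scores[f"{topic} Accuracy"] = accuracy_score(y_true[:, i], y_pred[:, i])
    # Overall = unweighted mean across topics (consistent with the table above).
    scores["Overall F1"] = float(np.mean([scores[f"{t} F1"] for t in TOPICS]))
    scores["Overall Accuracy"] = float(np.mean([scores[f"{t} Accuracy"] for t in TOPICS]))
    return scores
```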


### Framework versions

- Transformers 4.18.0
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.12.1