---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-classification
  results: []
---

# roberta-base-classification

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8665
- Accuracy: 0.7343
- F1: 0.7307

## Model description

More information needed

## Intended uses & limitations

More information needed
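
No usage details were provided. As a minimal inference sketch, assuming the checkpoint is published on the Hub under the (hypothetical) repo id `roberta-base-classification`:

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the actual Hub path of this checkpoint.
classifier = pipeline("text-classification", model="roberta-base-classification")

print(classifier("An example sentence to classify."))
# e.g. [{'label': 'LABEL_0', 'score': 0.98}] -- label names depend on the training config
```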

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
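
As a sketch, these map onto `TrainingArguments` roughly as follows (the `output_dir` and the per-epoch evaluation setting are assumptions; the Adam betas and epsilon listed above are the Transformers defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-classification",  # assumption: output path not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption, inferred from the per-epoch results table
)
```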

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 163  | 1.3840          | 0.6024   | 0.5642 |
| No log        | 2.0   | 326  | 1.0832          | 0.6511   | 0.6334 |
| No log        | 3.0   | 489  | 1.0334          | 0.6978   | 0.6898 |
| 1.0727        | 4.0   | 652  | 1.0970          | 0.6876   | 0.6872 |
| 1.0727        | 5.0   | 815  | 1.0281          | 0.7343   | 0.7301 |
| 1.0727        | 6.0   | 978  | 1.1807          | 0.7018   | 0.7067 |
| 0.2589        | 7.0   | 1141 | 1.2407          | 0.7343   | 0.7315 |
| 0.2589        | 8.0   | 1304 | 1.3048          | 0.7404   | 0.7312 |
| 0.2589        | 9.0   | 1467 | 1.5180          | 0.7181   | 0.7138 |
| 0.0808        | 10.0  | 1630 | 1.3989          | 0.7606   | 0.7558 |
| 0.0808        | 11.0  | 1793 | 1.5029          | 0.7606   | 0.7553 |
| 0.0808        | 12.0  | 1956 | 1.7512          | 0.7241   | 0.7172 |
| 0.0186        | 13.0  | 2119 | 1.6777          | 0.7363   | 0.7299 |
| 0.0186        | 14.0  | 2282 | 1.8128          | 0.7363   | 0.7328 |
| 0.0186        | 15.0  | 2445 | 1.7922          | 0.7383   | 0.7355 |
| 0.0039        | 16.0  | 2608 | 1.8762          | 0.7282   | 0.7221 |
| 0.0039        | 17.0  | 2771 | 1.8840          | 0.7363   | 0.7317 |
| 0.0039        | 18.0  | 2934 | 1.8368          | 0.7383   | 0.7340 |
| 0.0027        | 19.0  | 3097 | 1.8687          | 0.7363   | 0.7320 |
| 0.0027        | 20.0  | 3260 | 1.8665          | 0.7343   | 0.7307 |
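
The card does not state how accuracy and F1 were computed. A plausible `compute_metrics` sketch using the `evaluate` library is shown below; the F1 averaging mode is an assumption (weighted is common for multi-class tasks):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=predictions, references=labels)["accuracy"],
        # average="weighted" is an assumption; the card does not say which averaging was used.
        "f1": f1.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```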


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1