oscarwu committed
Commit 7cc2e4f · 1 Parent(s): 42a6308

update model card README.md

Files changed (1): README.md
README.md CHANGED
```diff
@@ -11,19 +11,18 @@ model-index:
 should probably proofread and complete it, then remove this comment. -->
 
 # mlcovid19-classifier
-- [Mulit-lingual COVID-19 Fake News Detection and Intervention](https://counterinfodemic.org/)
 
-This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on Multi-lingual COVID19 Fake News dataset. Please visite our project [website](https://counterinfodemic.org/) for more info.
+This model is a fine-tuned version of [oscarwu/mlcovid19-classifier](https://huggingface.co/oscarwu/mlcovid19-classifier) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4116
-- F1 Macro: 0.6750
-- F1 Misinformation: 0.9407
-- F1 Factual: 0.8529
-- F1 Other: 0.2315
-- Prec Macro: 0.7057
-- Prec Misinformation: 0.9229
-- Prec Factual: 0.8958
-- Prec Other: 0.2983
+- Loss: 0.5651
+- F1 Macro: 0.6566
+- F1 Misinformation: 0.9336
+- F1 Factual: 0.8316
+- F1 Other: 0.2048
+- Prec Macro: 0.6775
+- Prec Misinformation: 0.9344
+- Prec Factual: 0.7907
+- Prec Other: 0.3075
 
 ## Model description
 
@@ -51,25 +50,22 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 4367
-- num_epochs: 30
+- num_epochs: 60
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
-| 0.8111 | 3.67 | 500 | 0.4101 | 0.5506 | 0.9162 | 0.7356 | 0.0 | 0.5421 | 0.8969 | 0.7295 | 0.0 |
-| 0.3688 | 7.35 | 1000 | 0.3397 | 0.5770 | 0.9321 | 0.7988 | 0.0 | 0.5694 | 0.9111 | 0.7972 | 0.0 |
-| 0.3012 | 11.03 | 1500 | 0.3011 | 0.5912 | 0.9415 | 0.8322 | 0.0 | 0.5955 | 0.9104 | 0.8761 | 0.0 |
-| 0.249 | 14.7 | 2000 | 0.3020 | 0.5931 | 0.9404 | 0.8388 | 0.0 | 0.5841 | 0.9206 | 0.8317 | 0.0 |
-| 0.1957 | 18.38 | 2500 | 0.3308 | 0.6402 | 0.9406 | 0.8433 | 0.1365 | 0.7126 | 0.9234 | 0.8445 | 0.3699 |
-| 0.1438 | 22.06 | 3000 | 0.3502 | 0.6615 | 0.9406 | 0.8529 | 0.1911 | 0.6952 | 0.9283 | 0.8543 | 0.3030 |
-| 0.0996 | 25.73 | 3500 | 0.4116 | 0.6750 | 0.9407 | 0.8529 | 0.2315 | 0.7057 | 0.9229 | 0.8958 | 0.2983 |
-| 0.0657 | 29.41 | 4000 | 0.4413 | 0.6422 | 0.9428 | 0.8497 | 0.1342 | 0.7126 | 0.9269 | 0.8453 | 0.3655 |
+| 0.5055 | 3.67 | 500 | 0.3267 | 0.6006 | 0.9440 | 0.8517 | 0.0062 | 0.8132 | 0.9228 | 0.8502 | 0.6667 |
+| 0.0876 | 7.35 | 1000 | 0.3922 | 0.6636 | 0.9412 | 0.8533 | 0.1963 | 0.6975 | 0.9255 | 0.8729 | 0.2941 |
+| 0.0477 | 11.03 | 1500 | 0.4479 | 0.6715 | 0.9404 | 0.8562 | 0.2178 | 0.6939 | 0.9288 | 0.8695 | 0.2836 |
+| 0.0334 | 14.7 | 2000 | 0.5123 | 0.6622 | 0.9418 | 0.8515 | 0.1935 | 0.6996 | 0.9251 | 0.8732 | 0.3007 |
+| 0.0271 | 18.38 | 2500 | 0.5651 | 0.6566 | 0.9336 | 0.8316 | 0.2048 | 0.6775 | 0.9344 | 0.7907 | 0.3075 |
 
 
 ### Framework versions
 
-- Transformers 4.23.0
+- Transformers 4.23.1
 - Pytorch 1.12.1+cu113
 - Datasets 2.5.2
 - Tokenizers 0.13.1
```
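A quick sanity check on the reported metrics: macro F1 is the unweighted mean of the per-class F1 scores, so the card's F1 Macro should follow from its three per-class values (Misinformation, Factual, Other). A minimal sketch — the `macro_f1` helper is ours, not part of the card's code:

```python
def macro_f1(per_class_f1):
    """Unweighted mean of per-class F1 scores (macro averaging)."""
    return sum(per_class_f1) / len(per_class_f1)

# Per-class F1 reported on the evaluation set at step 2500
# (Misinformation, Factual, Other).
f1_scores = [0.9336, 0.8316, 0.2048]
print(round(macro_f1(f1_scores), 4))  # close to the reported F1 Macro of 0.6566
```

The same check reproduces the pre-commit numbers: the mean of 0.9407, 0.8529, and 0.2315 is 0.6750, exactly the old F1 Macro.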
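The card trains with a linear scheduler and 4367 warmup steps: the learning rate ramps linearly from 0 to the base rate over the warmup window, then decays linearly to 0 over the remaining steps. A pure-Python sketch of that shape — not the Trainer's actual implementation, and the base rate and total step count below are illustrative, not taken from the card:

```python
def linear_schedule(step, warmup_steps, total_steps):
    """LR multiplier: linear warmup to 1.0, then linear decay to 0.0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr = 3e-5               # assumed base rate; the card does not restate it here
warmup, total = 4367, 10000  # warmup from the card; total steps illustrative
for step in (0, 2000, 4367, 8000):
    print(step, base_lr * linear_schedule(step, warmup, total))
```

The multiplier peaks at exactly 1.0 when `step == warmup_steps`, which is why early checkpoints (e.g. step 500, well inside the warmup window) train at only a fraction of the base rate.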