mtyrrell committed on
Commit 37d83ce
1 Parent(s): 061b86b

update model card README.md
Files changed (1):
1. README.md +13 -20
README.md CHANGED

@@ -2,8 +2,6 @@
  license: apache-2.0
  tags:
  - generated_from_trainer
- metrics:
- - accuracy
  model-index:
  - name: IKT_classifier_economywide_best
    results: []
@@ -16,13 +14,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1819
- - Precision Weighted: 0.9628
- - Precision Macro: 0.9639
- - Recall Weighted: 0.9623
- - Recall Samples: 0.9606
- - F1-score: 0.9619
- - Accuracy: 0.9623
+ - Loss: 0.1916
+ - F1-score: 0.9527
 
  ## Model description
@@ -41,12 +34,12 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 4.427532456702983e-05
- - train_batch_size: 3
- - eval_batch_size: 3
+ - learning_rate: 9.375102561418467e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - gradient_accumulation_steps: 2
- - total_train_batch_size: 6
+ - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 100.0
@@ -54,13 +47,13 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Precision Weighted | Precision Macro | Recall Weighted | Recall Samples | F1-score | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:---------------:|:--------------:|:--------:|:--------:|
- | No log | 1.0 | 159 | 0.1640 | 0.9628 | 0.9639 | 0.9623 | 0.9606 | 0.9619 | 0.9623 |
- | No log | 2.0 | 318 | 0.2042 | 0.9531 | 0.9521 | 0.9528 | 0.9533 | 0.9526 | 0.9528 |
- | No log | 3.0 | 477 | 0.2298 | 0.9457 | 0.9479 | 0.9434 | 0.9402 | 0.9427 | 0.9434 |
- | 0.1907 | 4.0 | 636 | 0.1582 | 0.9718 | 0.9723 | 0.9717 | 0.9708 | 0.9715 | 0.9717 |
- | 0.1907 | 5.0 | 795 | 0.1819 | 0.9628 | 0.9639 | 0.9623 | 0.9606 | 0.9619 | 0.9623 |
+ | Training Loss | Epoch | Step | Validation Loss | F1-score |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log | 1.0 | 30 | 0.4243 | 0.9150 |
+ | No log | 2.0 | 60 | 0.2486 | 0.9145 |
+ | No log | 3.0 | 90 | 0.1950 | 0.9245 |
+ | No log | 4.0 | 120 | 0.1953 | 0.9527 |
+ | No log | 5.0 | 150 | 0.1916 | 0.9527 |
 
 
  ### Framework versions
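For context, the updated hyperparameters fit together arithmetically: the effective batch size is the per-device batch size times the gradient accumulation steps, and the `linear` scheduler warms up over 100 of the run's 150 optimizer steps (30 steps per epoch for 5 epochs, per the results table). A minimal plain-Python sketch, assuming the standard linear-warmup/linear-decay shape; the function and variable names below are illustrative, not from the training code:

```python
# Illustrative sketch of the updated training configuration.
# Names are hypothetical; values come from the model card diff.

learning_rate = 9.375102561418467e-05
train_batch_size = 16
gradient_accumulation_steps = 2

# Effective (total) train batch size = per-device batch * accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32

def linear_warmup_lr(step, total_steps=150, warmup_steps=100, peak=learning_rate):
    """Linear warmup to `peak` over `warmup_steps`, then linear decay to 0.

    Mirrors the card's `linear` lr_scheduler_type with 100 warmup steps;
    30 optimizer steps per epoch for 5 epochs gives 150 total steps.
    """
    if step < warmup_steps:
        return peak * step / warmup_steps
    return peak * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(total_train_batch_size)   # 32
print(linear_warmup_lr(100))    # peak LR at the end of warmup
```

Note that with only 50 decay steps after a 100-step warmup, the model trains near the peak learning rate for the final third of the run, which is consistent with the validation loss plateauing between epochs 4 and 5.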