hyeonddu committed on
Commit 13e51d4
1 Parent(s): 58e3df6

End of training

README.md CHANGED
@@ -4,9 +4,24 @@ tags:
 - generated_from_trainer
 datasets:
 - banking77
+metrics:
+- f1
 model-index:
 - name: '11'
-  results: []
+  results:
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: banking77
+      type: banking77
+      config: default
+      split: test
+      args: default
+    metrics:
+    - name: F1
+      type: f1
+      value: 0.9326943887584157
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,6 +30,9 @@ should probably proofread and complete it, then remove this comment. -->
 # 11
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.2929
+- F1: 0.9327
 
 ## Model description
 
@@ -42,6 +60,15 @@ The following hyperparameters were used during training:
 - num_epochs: 3
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 1.0651        | 1.0   | 626  | 0.7680          | 0.8465 |
+| 0.3787        | 2.0   | 1252 | 0.3515          | 0.9241 |
+| 0.1742        | 3.0   | 1878 | 0.2929          | 0.9327 |
+
+
 ### Framework versions
 
 - Transformers 4.27.1
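The card's new metadata reports a single F1 value on banking77's test split. The Trainer produces such a number through a user-supplied `compute_metrics` function; the diff does not say which averaging mode was used, so as an illustration only, here is a pure-Python sketch of macro-averaged F1 (equal weight per class), which is a common choice for a 77-class intent benchmark:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute per-class F1, then average with equal weight.

    Illustrative only -- the model card does not state which averaging the
    reported 0.9327 uses.
    """
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)
```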
logs/events.out.tfevents.1697991938.7d63e3e09e40.959.0 CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6526468b87be51f1f112c7f8e5e83596ab2d0ba6a5ed16ab27e52a5f0a1e6604
3
- size 11615
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:06c052472de89a2990d86314182d968ca5b81663aa11473a7b6de05ef3749898
3
+ size 11969
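Both sides of the logs diff are Git LFS pointer files rather than the event logs themselves: three lines giving the spec version, the sha256 of the real file, and its size in bytes. A minimal sketch of how such a pointer is derived from the file's raw bytes (illustrative; the real `git-lfs` tool handles this):

```python
import hashlib


def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS pointer file for the given content bytes.

    The pointer records the sha256 of the content and its byte length,
    matching the three-line format shown in the diff above.
    """
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )
```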
tokenizer.json CHANGED
@@ -1,7 +1,21 @@
 {
   "version": "1.0",
-  "truncation": null,
-  "padding": null,
+  "truncation": {
+    "direction": "Right",
+    "max_length": 512,
+    "strategy": "LongestFirst",
+    "stride": 0
+  },
+  "padding": {
+    "strategy": {
+      "Fixed": 512
+    },
+    "direction": "Right",
+    "pad_to_multiple_of": null,
+    "pad_id": 0,
+    "pad_type_id": 0,
+    "pad_token": "[PAD]"
+  },
   "added_tokens": [
     {
       "id": 0,