ajrayman committed on
Commit 06d550f · verified · 1 Parent(s): 69d55bf

Training in progress, epoch 1

Files changed (4)
  1. README.md +13 -12
  2. config.json +5 -5
  3. model.safetensors +2 -2
  4. training_args.bin +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: mit
-base_model: roberta-base
+base_model: roberta-large
 tags:
 - generated_from_trainer
 metrics:
@@ -10,22 +10,22 @@ metrics:
 - recall
 - f1
 model-index:
-- name: auth_scale_binary
+- name: narcissism_binary
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# auth_scale_binary
+# narcissism_binary
 
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
+This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4784
-- Accuracy: 0.7714
-- Precision: 0.3433
-- Recall: 0.1036
-- F1: 0.1592
+- Loss: 0.6560
+- Accuracy: 0.7073
+- Precision: 0.7090
+- Recall: 0.5692
+- F1: 0.6314
 
 ## Model description
 
@@ -50,14 +50,15 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| No log | 1.0 | 133 | 0.4853 | 0.7912 | 0.0 | 0.0 | 0.0 |
-| No log | 2.0 | 266 | 0.4784 | 0.7714 | 0.3433 | 0.1036 | 0.1592 |
+| No log | 1.0 | 126 | 0.5753 | 0.7183 | 0.7870 | 0.4943 | 0.6072 |
+| No log | 2.0 | 252 | 0.5797 | 0.7163 | 0.8230 | 0.4535 | 0.5848 |
+| No log | 3.0 | 378 | 0.6560 | 0.7073 | 0.7090 | 0.5692 | 0.6314 |
 
 
 ### Framework versions
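As a quick sanity check on the updated card (an editorial addition, not part of the commit), the final-epoch F1 can be recomputed from the precision and recall reported above, since F1 is their harmonic mean:

```python
# Final-epoch metrics from the updated model card.
precision = 0.7090
recall = 0.5692

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

# Agrees with the reported F1 of 0.6314 to within rounding of the inputs.
print(round(f1, 4))
```

The small discrepancy in the last digit, if any, comes from the card reporting precision and recall already rounded to four places.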
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "roberta-base",
+  "_name_or_path": "roberta-large",
   "architectures": [
     "RobertaForSequenceClassification"
   ],
@@ -9,14 +9,14 @@
   "eos_token_id": 2,
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
-  "hidden_size": 768,
+  "hidden_size": 1024,
   "initializer_range": 0.02,
-  "intermediate_size": 3072,
+  "intermediate_size": 4096,
   "layer_norm_eps": 1e-05,
   "max_position_embeddings": 514,
   "model_type": "roberta",
-  "num_attention_heads": 12,
-  "num_hidden_layers": 12,
+  "num_attention_heads": 16,
+  "num_hidden_layers": 24,
   "pad_token_id": 1,
   "position_embedding_type": "absolute",
   "problem_type": "single_label_classification",
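The config changes above are internally consistent with the roberta-base → roberta-large switch. A minimal sketch (an editorial addition; the JSON fragment below only repeats fields shown in the diff) checks the two standard invariants: hidden size divides evenly into attention heads, and the feed-forward width is 4x the hidden size:

```python
import json

# Fragment of the updated config.json from the diff above.
new_config = json.loads("""
{
  "hidden_size": 1024,
  "intermediate_size": 4096,
  "num_attention_heads": 16,
  "num_hidden_layers": 24
}
""")

# Each attention head works on hidden_size / num_attention_heads dimensions.
head_dim = new_config["hidden_size"] // new_config["num_attention_heads"]
print(head_dim)  # 64, the same per-head width as roberta-base (768 / 12)

# The feed-forward layer is 4x the hidden size, as in the original RoBERTa/BERT designs.
assert new_config["intermediate_size"] == 4 * new_config["hidden_size"]
```

Both bases share the 64-dimensional head width; large simply doubles the layer count (12 → 24) and widens every layer.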
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3f96fc93681ab1c1630e8ba4da3d1932e33f6cd9d99399e82b5e2c49a7cbfe91
-size 498612824
+oid sha256:1fbc15017337659bc063bb138d38878d232b615dfe0a98bd70d6cdb9b5ad8fde
+size 1421495416
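The roughly 2.9x jump in checkpoint size matches the base-model swap. A back-of-the-envelope estimate (an editorial addition, assuming float32 weights at 4 bytes per parameter) recovers the familiar parameter counts for both checkpoints:

```python
# Git LFS sizes from the diff above, in bytes.
old_size = 498_612_824    # roberta-base checkpoint
new_size = 1_421_495_416  # roberta-large checkpoint

BYTES_PER_PARAM = 4  # float32

# Approximate parameter counts implied by the file sizes alone.
old_params = old_size / BYTES_PER_PARAM  # ~125M, in line with roberta-base
new_params = new_size / BYTES_PER_PARAM  # ~355M, in line with roberta-large

print(f"{old_params / 1e6:.0f}M -> {new_params / 1e6:.0f}M")
```

The estimate ignores the small safetensors header overhead, which is negligible at this scale.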
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a4f802faac4f5e9561de6e7e1f07051a709c742f86c6b1ed98f179dd2c8dc889
+oid sha256:3427dba6d232fa04ca7871eef5c00c98c98197e11ffe2b22207d5fce50b2f709
 size 4719