eskayML committed
Commit 77d4ee2 · verified · Parent: 351089e

eskayML/interview_classifier

Files changed (4)
  1. README.md +21 -0
  2. config.json +1 -0
  3. model.safetensors +1 -1
  4. training_args.bin +0 -0
README.md CHANGED
@@ -4,6 +4,8 @@ license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
+metrics:
+- accuracy
 model-index:
 - name: interview_classifier
   results: []
@@ -15,6 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 # interview_classifier
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 2.0881
+- Accuracy: 0.2593
 
 ## Model description
 
@@ -41,6 +46,22 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 10
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| No log | 1.0 | 54 | 2.2885 | 0.1481 |
+| No log | 2.0 | 108 | 2.2611 | 0.1481 |
+| No log | 3.0 | 162 | 2.2186 | 0.2593 |
+| No log | 4.0 | 216 | 2.1877 | 0.2222 |
+| No log | 5.0 | 270 | 2.1593 | 0.2593 |
+| No log | 6.0 | 324 | 2.1332 | 0.2593 |
+| No log | 7.0 | 378 | 2.1185 | 0.2963 |
+| No log | 8.0 | 432 | 2.0965 | 0.2593 |
+| No log | 9.0 | 486 | 2.0914 | 0.2593 |
+| 1.9418 | 10.0 | 540 | 2.0881 | 0.2593 |
+
+
 ### Framework versions
 
 - Transformers 4.46.2
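The training script itself is not part of this commit, so the following is only a minimal sketch of how the Accuracy column reported in the table above is typically produced: a `compute_metrics` hook passed to the `Trainer`, here using the `evaluate` library. The function name and wiring are assumptions, not taken from this repository.

```python
# Sketch (assumption): a standard Trainer metrics hook that would yield the
# accuracy values shown in the README's training-results table.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # Trainer passes an EvalPrediction; unpacking gives (logits, labels)
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Would be wired in as Trainer(..., compute_metrics=compute_metrics) alongside
# the hyperparameters listed in the README (linear scheduler, 10 epochs, ...).
```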
config.json CHANGED
@@ -38,6 +38,7 @@
   "n_heads": 12,
   "n_layers": 6,
   "pad_token_id": 0,
+  "problem_type": "single_label_classification",
   "qa_dropout": 0.1,
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:21d46da7190cee32bf6b5982b0afa63c01df020db1ee3eb240e63338c06d2360
+oid sha256:aa594325c82c5854c2c695f89e782dff58b6e24bb8b4965d18940f62c2bc12e2
 size 267857176
training_args.bin CHANGED
Binary files a/training_args.bin and b/training_args.bin differ
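Taken together, the updated weights in model.safetensors and the `"problem_type": "single_label_classification"` entry added to config.json mean the checkpoint loads as a standard single-label sequence classifier. The snippet below is a hedged usage sketch, not something shipped in this commit; the example sentence is made up, and the printed label depends on the `id2label` mapping stored in config.json.

```python
# Usage sketch (assumption): loading eskayML/interview_classifier for
# single-label sequence classification after this commit.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "eskayML/interview_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Tell me about a time you handled a difficult stakeholder."  # hypothetical input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # label names come from config.json
```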