ruba2ksa committed on
Commit e79538b · 1 Parent(s): 93a8ef8

Training in progress epoch 0

Files changed (5):
  1. README.md +17 -14
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
  4. tokenizer.json +0 -0
  5. tokenizer_config.json +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 license: apache-2.0
 base_model: distilroberta-base
 tags:
@@ -15,10 +16,14 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.1337
-- Validation Loss: 0.1703
-- Train Accuracy: 0.942
-- Epoch: 2
+- Train Loss: 0.4750
+- Train Accuracy: 0.9315
+- Validation Loss: 0.2028
+- Validation Accuracy: 0.9315
+- Train Precision: 0.9325
+- Train Recall: 0.9315
+- Train F1: 0.9312
+- Epoch: 0
 
 ## Model description
 
@@ -37,21 +42,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
 - training_precision: float32
 
 ### Training results
 
-| Train Loss | Validation Loss | Train Accuracy | Epoch |
-|:----------:|:---------------:|:--------------:|:-----:|
-| 0.5011     | 0.2383          | 0.93           | 0     |
-| 0.1732     | 0.1821          | 0.941          | 1     |
-| 0.1337     | 0.1703          | 0.942          | 2     |
+| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Precision | Train Recall | Train F1 | Epoch |
+|:----------:|:--------------:|:---------------:|:-------------------:|:---------------:|:------------:|:--------:|:-----:|
+| 0.4750     | 0.9315         | 0.2028          | 0.9315              | 0.9325          | 0.9315       | 0.9312   | 0     |
 
 
 ### Framework versions
 
-- Transformers 4.38.2
-- TensorFlow 2.15.0
-- Datasets 2.18.0
-- Tokenizers 0.15.2
+- Transformers 4.46.2
+- TensorFlow 2.17.1
+- Datasets 3.1.0
+- Tokenizers 0.20.3
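For context, the optimizer dict in the hyperparameters above is the serialized form of a Keras Adam optimizer driven by a PolynomialDecay learning-rate schedule. A minimal sketch reconstructing it from the values in the diff (variable names are illustrative, not from the training script):

```python
import tensorflow as tf

# Linear decay (power=1.0) from 2e-5 to 0 over 5000 steps, matching the
# PolynomialDecay config in the serialized optimizer above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=5000,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the beta/epsilon values listed in the optimizer dict.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```

With `power=1.0` the schedule is a straight line: at step `s` the learning rate is `2e-5 * (1 - s/5000)` until it reaches zero at step 5000.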
config.json CHANGED
@@ -35,7 +35,7 @@
   "num_hidden_layers": 6,
   "pad_token_id": 1,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.38.2",
+  "transformers_version": "4.46.2",
   "type_vocab_size": 1,
   "use_cache": true,
   "vocab_size": 50265
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5b0eca32892ffbb82e4a8c81aa9bc7c4480bd509f8b8b7cf5501dadce31bf74a
+oid sha256:021287e96ab47c0c6e58940c9482849b01eec13a46f8399c3bcb7dbcd9f0376e
 size 328641624
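The `tf_model.h5` entry is a Git LFS pointer, not the weights themselves: `oid` is the SHA-256 of the actual file's contents and `size` its byte length. A sketch of how a downloaded file could be checked against the pointer (the helper name and path are hypothetical):

```python
import hashlib

def lfs_oid(path: str) -> str:
    """Compute the Git LFS oid (SHA-256 of the full file contents) for a file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so a 328 MB weight file is never fully in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return "sha256:" + digest.hexdigest()

# After download, lfs_oid("tf_model.h5") should equal the pointer's oid line.
```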
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -43,7 +43,7 @@
     }
   },
   "bos_token": "<s>",
-  "clean_up_tokenization_spaces": true,
+  "clean_up_tokenization_spaces": false,
   "cls_token": "<s>",
   "eos_token": "</s>",
   "errors": "replace",