iman37 committed (verified)
Commit d81f3be · 1 Parent(s): 605119a

End of training

README.md CHANGED
@@ -13,14 +13,13 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/iman3710-pravaah-consulting/huggingface/runs/d1plxq5b)
 # my_awesome_opus_books_model

 This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2237
-- Bleu: 0.1936
-- Gen Len: 17.9779
+- Loss: 0.2271
+- Bleu: 0.1332
+- Gen Len: 18.0428

 ## Model description

@@ -40,25 +39,24 @@

 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 1
 - mixed_precision_training: Native AMP

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
 |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
-| 0.2431 | 1.0 | 8720 | 0.2272 | 0.1771 | 18.0655 |
-| 0.2381 | 2.0 | 17440 | 0.2237 | 0.1936 | 17.9779 |
+| 0.2471 | 1.0 | 17439 | 0.2271 | 0.1332 | 18.0428 |


 ### Framework versions

-- Transformers 4.42.3
-- Pytorch 2.1.2
+- Transformers 4.42.4
+- Pytorch 2.3.1+cu121
 - Datasets 2.20.0
 - Tokenizers 0.19.1
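The hyperparameters listed in the updated card map directly onto a standard `transformers` `Seq2SeqTrainer` configuration. Below is a minimal sketch of how a run with these settings could be expressed; the `output_dir`, the dataset variables, and the commented-out trainer wiring are illustrative assumptions and are not taken from this commit — only the hyperparameter values themselves come from the card.

```python
# Sketch of a training setup matching the hyperparameters in the updated card.
# Assumptions (not from the commit): output_dir, dataset preprocessing, and the
# exact use of Seq2SeqTrainer; only the listed hyperparameter values come from the card.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",  # assumed; matches the card title
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                    # "Native AMP" mixed-precision training
    predict_with_generate=True,   # generate during eval so BLEU / Gen Len can be computed
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=tokenized_train,  # hypothetical preprocessed splits
#     eval_dataset=tokenized_eval,
#     tokenizer=tokenizer,
#     # compute_metrics would wrap a BLEU metric, e.g. evaluate.load("sacrebleu")
# )
# trainer.train()
```

The card's `mixed_precision_training: Native AMP` line corresponds to `fp16=True` here, and `predict_with_generate=True` is what lets the evaluation loop report BLEU and generation length alongside the loss.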
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "decoder_start_token_id": 0,
   "eos_token_id": 1,
   "pad_token_id": 0,
-  "transformers_version": "4.42.3"
+  "transformers_version": "4.42.4"
 }
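For context on the `generation_config.json` change above: these values are read automatically by `transformers` when the model is loaded, and `model.generate` uses them as defaults. A small sketch follows, assuming the model is published under a repo id like `iman37/my_awesome_opus_books_model` and used with an English-to-French prompt in the usual opus_books style; both the repo id and the prompt are assumptions, not details shown in this commit.

```python
# Sketch of how the generation config shipped with the model is consumed at
# inference time. The repo id and the input sentence are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "iman37/my_awesome_opus_books_model"  # assumed repo path for this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# generation_config.json is loaded automatically; these values match the diff above.
print(model.generation_config.decoder_start_token_id)  # 0
print(model.generation_config.eos_token_id)            # 1
print(model.generation_config.pad_token_id)            # 0

inputs = tokenizer("translate English to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```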
runs/Jul31_09-12-05_1e9c7752e70c/events.out.tfevents.1722417126.1e9c7752e70c.6177.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dba80698dbf4359d703741ed861499df7547da9b4a193a9c44e90384418363cc
-size 13038
+oid sha256:ba7b5981ffe7a2c78fbda751366b200ae6c51958e97cd05bd3c094e684e51075
+size 13775