MuntasirHossain committed
Commit 2c72e41 · verified · 1 Parent(s): 1cc1ad0

End of training

Files changed (3)
  1. README.md +21 -13
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
 base_model: google/flan-t5-base
 tags:
 - generated_from_trainer
+metrics:
+- rouge
 model-index:
 - name: flan-t5-base-dialogsum-summarization
   results: []
@@ -15,17 +17,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 1.2399
-- eval_rouge1: 38.7981
-- eval_rouge2: 14.9183
-- eval_rougeL: 32.7218
-- eval_rougeLsum: 34.5266
-- eval_gen_len: 18.896
-- eval_runtime: 195.7294
-- eval_samples_per_second: 7.664
-- eval_steps_per_second: 0.639
-- epoch: 1.0
-- step: 1039
+- Loss: 1.2095
+- Rouge1: 39.3212
+- Rouge2: 15.6335
+- Rougel: 33.4773
+- Rougelsum: 35.1795
+- Gen Len: 18.872
 
 ## Model description
 
@@ -45,12 +42,23 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 12
-- eval_batch_size: 12
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 4
+- mixed_precision_training: Native AMP
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
+| 1.1318 | 1.0 | 1558 | 1.2331 | 39.1301 | 15.2555 | 33.1115 | 35.0288 | 18.868 |
+| 1.0483 | 2.0 | 3116 | 1.2095 | 39.3212 | 15.6335 | 33.4773 | 35.1795 | 18.872 |
+| 0.9969 | 3.0 | 4674 | 1.2104 | 40.0115 | 16.029 | 34.0364 | 35.8358 | 18.852 |
+| 0.9601 | 4.0 | 6232 | 1.2161 | 39.7403 | 15.9708 | 33.8644 | 35.5952 | 18.868 |
 
 ### Framework versions
 
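For reference, here is a minimal sketch of how the hyperparameters recorded in the updated card map onto 🤗 Transformers training arguments. The actual training script is not part of this commit, so the output directory, the use of `Seq2SeqTrainingArguments`, and `fp16=True` as the counterpart of "Native AMP" are assumptions.

```python
# Hypothetical reconstruction of the training configuration described in the card.
# Only the values listed in the README come from the source; output_dir and
# predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-dialogsum-summarization",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    fp16=True,                   # "mixed_precision_training: Native AMP"; needs a CUDA device
    predict_with_generate=True,  # generate summaries at evaluation time so ROUGE can be computed
)
```

Passing such an object to a `Seq2SeqTrainer` together with a tokenized dialogue-summarization dataset and a ROUGE `compute_metrics` function would produce a run comparable to the training-results table above.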
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:471ee2a28705908776ab14887d5b098eef4113560ccc06f4eff7830f9770eb4a
+oid sha256:cda31816de7bfe2f43c47062e20851af21d3f9688e312dbf035f7af93d974631
 size 990345064
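The new `model.safetensors` pointer corresponds to the weights after the full 4-epoch run. A minimal usage sketch, assuming the repository id `MuntasirHossain/flan-t5-base-dialogsum-summarization` (inferred from the model-index name, not stated in this diff):

```python
# Minimal inference sketch; the repo id and the example dialogue are assumptions.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="MuntasirHossain/flan-t5-base-dialogsum-summarization",  # assumed repo id
)

dialogue = (
    "#Person1#: Have you finished the quarterly report?\n"
    "#Person2#: Almost. I still need to add the sales figures."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```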
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e93315546d8b68c31e5ba89f92b8ee8776de16ba9e4891041984b70b2b532029
+oid sha256:7ce60c67e57455efc07032c2725c711b7855837744b69957484ca5f73baae9dc
 size 5048
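`training_args.bin` is the serialized training-arguments object that the `Trainer` saves alongside the model. A small sketch for inspecting it locally, assuming a recent PyTorch; it is a Python pickle, so only load files from sources you trust:

```python
# Inspect the serialized training arguments; transformers must be importable,
# since the file unpickles into a TrainingArguments subclass.
import torch

args = torch.load("training_args.bin", weights_only=False)  # a pickle, not a tensor file
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```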