abhiman23897 committed on
Commit 01307ad
1 Parent(s): 2556c5b

End of training

Files changed (2):
  1. README.md +32 -30
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,25 +1,27 @@
 ---
+license: apache-2.0
+base_model: google/flan-t5-small
 tags:
 - generated_from_trainer
 metrics:
 - rouge
 model-index:
-- name: flan-t5-base-lamp-7t-finetuned-1
+- name: flan-t5-small-lamp-4u-finetuned-3
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-# flan-t5-base-lamp-7t-finetuned-1
+# flan-t5-small-lamp-4u-finetuned-3

-This model was trained from scratch on the None dataset.
+This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2121
-- Rouge1: 0.5299
-- Rouge2: 0.2730
-- Rougel: 0.4776
-- Rougelsum: 0.4930
+- Loss: 2.4088
+- Rouge1: 0.1634
+- Rouge2: 0.0510
+- Rougel: 0.1494
+- Rougelsum: 0.1500

 ## Model description

@@ -39,8 +41,8 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -51,26 +53,26 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
 |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
-| 7.0613 | 1.0 | 840 | 1.2605 | 0.5053 | 0.2462 | 0.4513 | 0.4657 |
-| 1.3665 | 2.0 | 1680 | 1.2271 | 0.5124 | 0.2562 | 0.4600 | 0.4747 |
-| 1.3009 | 3.0 | 2520 | 1.2114 | 0.5179 | 0.2613 | 0.4652 | 0.4802 |
-| 1.2594 | 4.0 | 3360 | 1.2039 | 0.5229 | 0.2656 | 0.4677 | 0.4839 |
-| 1.227 | 5.0 | 4200 | 1.1988 | 0.5247 | 0.2669 | 0.4698 | 0.4860 |
-| 1.1918 | 6.0 | 5040 | 1.1966 | 0.5238 | 0.2698 | 0.4701 | 0.4860 |
-| 1.1696 | 7.0 | 5880 | 1.1934 | 0.5261 | 0.2702 | 0.4723 | 0.4880 |
-| 1.1285 | 8.0 | 6720 | 1.1933 | 0.5237 | 0.2693 | 0.4701 | 0.4849 |
-| 1.1153 | 9.0 | 7560 | 1.1961 | 0.5263 | 0.2708 | 0.4724 | 0.4880 |
-| 1.0927 | 10.0 | 8400 | 1.1961 | 0.5254 | 0.2691 | 0.4720 | 0.4874 |
-| 1.0661 | 11.0 | 9240 | 1.2010 | 0.5234 | 0.2684 | 0.4698 | 0.4854 |
-| 1.0634 | 12.0 | 10080 | 1.2003 | 0.5259 | 0.2723 | 0.4729 | 0.4885 |
-| 1.046 | 13.0 | 10920 | 1.2019 | 0.5277 | 0.2726 | 0.4747 | 0.4907 |
-| 1.0273 | 14.0 | 11760 | 1.2045 | 0.5309 | 0.2749 | 0.4776 | 0.4940 |
-| 1.0218 | 15.0 | 12600 | 1.2077 | 0.5295 | 0.2728 | 0.4771 | 0.4925 |
-| 1.0208 | 16.0 | 13440 | 1.2095 | 0.5303 | 0.2728 | 0.4775 | 0.4928 |
-| 1.003 | 17.0 | 14280 | 1.2110 | 0.5301 | 0.2726 | 0.4772 | 0.4929 |
-| 1.003 | 18.0 | 15120 | 1.2099 | 0.5314 | 0.2729 | 0.4781 | 0.4941 |
-| 0.9893 | 19.0 | 15960 | 1.2115 | 0.5302 | 0.2737 | 0.4779 | 0.4933 |
-| 0.9918 | 20.0 | 16800 | 1.2121 | 0.5299 | 0.2730 | 0.4776 | 0.4930 |
+| 2.6388 | 1.0 | 1566 | 2.5235 | 0.1413 | 0.0452 | 0.1311 | 0.1316 |
+| 2.5039 | 2.0 | 3132 | 2.4539 | 0.1469 | 0.0474 | 0.1354 | 0.1359 |
+| 2.401 | 3.0 | 4698 | 2.4320 | 0.1525 | 0.0486 | 0.1409 | 0.1414 |
+| 2.3748 | 4.0 | 6264 | 2.4193 | 0.1528 | 0.0495 | 0.1414 | 0.1417 |
+| 2.2997 | 5.0 | 7830 | 2.4120 | 0.1559 | 0.0490 | 0.1427 | 0.1430 |
+| 2.2742 | 6.0 | 9396 | 2.4042 | 0.1562 | 0.0508 | 0.1436 | 0.1438 |
+| 2.2404 | 7.0 | 10962 | 2.4039 | 0.1584 | 0.0515 | 0.1457 | 0.1461 |
+| 2.2249 | 8.0 | 12528 | 2.4010 | 0.1624 | 0.0509 | 0.1491 | 0.1495 |
+| 2.1985 | 9.0 | 14094 | 2.3993 | 0.1622 | 0.0520 | 0.1493 | 0.1501 |
+| 2.1509 | 10.0 | 15660 | 2.3993 | 0.1599 | 0.0505 | 0.1454 | 0.1462 |
+| 2.1226 | 11.0 | 17226 | 2.4026 | 0.1631 | 0.0519 | 0.1498 | 0.1503 |
+| 2.107 | 12.0 | 18792 | 2.4040 | 0.1623 | 0.0513 | 0.1487 | 0.1491 |
+| 2.0855 | 13.0 | 20358 | 2.4049 | 0.1634 | 0.0517 | 0.1493 | 0.1498 |
+| 2.0678 | 14.0 | 21924 | 2.4028 | 0.1631 | 0.0515 | 0.1489 | 0.1495 |
+| 2.0899 | 15.0 | 23490 | 2.4052 | 0.1628 | 0.0510 | 0.1489 | 0.1496 |
+| 2.0777 | 16.0 | 25056 | 2.4050 | 0.1628 | 0.0503 | 0.1493 | 0.1498 |
+| 2.0572 | 17.0 | 26622 | 2.4076 | 0.1620 | 0.0511 | 0.1481 | 0.1488 |
+| 2.0408 | 18.0 | 28188 | 2.4066 | 0.1625 | 0.0510 | 0.1487 | 0.1495 |
+| 2.0538 | 19.0 | 29754 | 2.4076 | 0.1635 | 0.0510 | 0.1496 | 0.1503 |
+| 2.0283 | 20.0 | 31320 | 2.4088 | 0.1634 | 0.0510 | 0.1494 | 0.1500 |

  ### Framework versions
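
For context, the "Training hyperparameters" list in the updated card maps onto a `Seq2SeqTrainingArguments` configuration along these lines. This is a minimal sketch, assuming the card was generated by the Hugging Face `transformers` Seq2SeqTrainer; `output_dir`, the per-epoch eval strategy, and `predict_with_generate` are assumptions rather than values recorded in this commit, and the Adam betas/epsilon listed in the card are the Trainer defaults, so they need no explicit arguments.

```python
# Minimal sketch of the training setup implied by the card's hyperparameter
# list. Values marked "assumed" are not recorded in this commit.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-lamp-4u-finetuned-3",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 16 in the previous card, 8 in this one
    per_device_eval_batch_size=8,    # 16 in the previous card, 8 in this one
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,             # the results table runs to epoch 20.0
    evaluation_strategy="epoch",     # assumed: the table has one eval row per epoch
    predict_with_generate=True,      # assumed: needed to compute ROUGE at eval time
)
```

As a sanity check, 1566 steps per epoch at batch size 8 implies roughly 12,500 training examples, assuming a single device and no gradient accumulation.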
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c5cb52ef96a55938fe18f244d179bcaefb60b8d1cf016f7c51809f0f8d08117e
+oid sha256:f1a15fd5edb7f1a592adcdf5896b463277a222e24ce1330f5deb3b6a6c539693
  size 307867048
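
The `model.safetensors` change only swaps one Git LFS pointer for another: the size is identical in both versions (307,867,048 bytes, about 77 M float32 parameters, in line with a flan-t5-small checkpoint), and only the sha256 oid changes, meaning the weights were rewritten in place by this training run. Below is a minimal sketch of verifying a downloaded copy against the new pointer; the local file path is a placeholder, not part of this commit.

```python
# Minimal sketch: check a local model.safetensors against the Git LFS pointer
# above. The path is a placeholder; the oid and size come from this commit.
import hashlib
import os

path = "model.safetensors"
expected_oid = "f1a15fd5edb7f1a592adcdf5896b463277a222e24ce1330f5deb3b6a6c539693"
expected_size = 307867048

assert os.path.getsize(path) == expected_size, "size mismatch"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == expected_oid, "sha256 mismatch"
print("model.safetensors matches this commit's LFS pointer")
```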