jmz2023 committed (verified)
Commit b79572e · 1 Parent(s): ca08dfa

jmz2023/t5_eli5_v1
README.md ADDED
@@ -0,0 +1,83 @@
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
datasets:
- eli5_category
metrics:
- rouge
model-index:
- name: flan-t5-base-finetuned-t5
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: eli5_category
      type: eli5_category
      config: default
      split: validation1
      args: default
    metrics:
    - name: Rouge1
      type: rouge
      value: 10.1877
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# flan-t5-base-finetuned-t5

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 10.1877
- Rouge2: 0.0
- Rougel: 10.1808
- Rougelsum: 10.1824
- Gen Len: 9.366

Note that the NaN validation loss and zero training loss indicate the run most likely diverged numerically, so the ROUGE scores above should be treated with caution.

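As a quick smoke test, the checkpoint can be queried through the standard `text2text-generation` pipeline. This is a minimal sketch: the repo id `jmz2023/t5_eli5_v1` is taken from this commit's header, and the prompt is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (repo id from this commit).
generator = pipeline("text2text-generation", model="jmz2023/t5_eli5_v1")

# An ELI5-style question; max_new_tokens is a guess (eval Gen Len was only ~9 tokens).
print(generator("Why is the sky blue?", max_new_tokens=64)[0]["generated_text"])
```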

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

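These settings map onto `Seq2SeqTrainingArguments` roughly as follows. A minimal sketch, assuming a standard `Seq2SeqTrainer` setup; the `output_dir` name and the per-epoch evaluation strategy are assumptions, not taken from the run.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-finetuned-t5",  # assumed; matches the model name above
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
    eval_strategy="epoch",       # assumed; the results table reports per-epoch eval
    predict_with_generate=True,  # assumed; needed to compute ROUGE during eval
)
```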

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.0           | 1.0   | 5736  | nan             | 10.1877 | 0.0    | 10.1808 | 10.1824   | 9.366   |
| 0.0           | 2.0   | 11472 | nan             | 10.1877 | 0.0    | 10.1808 | 10.1824   | 9.366   |
| 0.0           | 3.0   | 17208 | nan             | 10.1877 | 0.0    | 10.1808 | 10.1824   | 9.366   |

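For reference, ROUGE scores like those in the table are typically computed with the `evaluate` library. A minimal sketch; the example strings are illustrative, and note that `evaluate` returns fractions in [0, 1] while the table's values appear to be scaled by 100.

```python
import evaluate

# Load the ROUGE metric reported in the table above.
rouge = evaluate.load("rouge")

# Illustrative strings only; the real run scores decoded model outputs
# against reference answers from the eli5_category validation1 split.
predictions = ["the sky is blue because of rayleigh scattering"]
references = ["sunlight scatters off air molecules, and blue light scatters the most"]

print(rouge.compute(predictions=predictions, references=references, use_stemmer=True))
```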

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.44.2"
}
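This file sets the default decoding special tokens (T5 uses the pad token, id 0, as the decoder start token). It is picked up automatically when the model is loaded; a minimal sketch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# from_pretrained also reads generation_config.json, so decoder_start_token_id,
# eos_token_id, and pad_token_id above are applied to generate() by default.
tokenizer = AutoTokenizer.from_pretrained("jmz2023/t5_eli5_v1")
model = AutoModelForSeq2SeqLM.from_pretrained("jmz2023/t5_eli5_v1")

inputs = tokenizer("Explain gravity like I am five.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```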
runs/Sep27_18-29-43_bf8f55ccff97/events.out.tfevents.1727462534.bf8f55ccff97.560.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1025872b763001b58029a9b40534da11c0747272c284c71660f9fdf7a139c7a7
- size 14197
+ oid sha256:4b05d05b1d48077fc03064d6a70f9083957225cee3fa4e78951b0b655f691741
+ size 15092