Atharvgarg committed
Commit c6fc1af
1 parent: e613b33

update model card README.md

Files changed (1): README.md (+70 -0)
README.md (added):
 
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-6-6-finetuned-bbc-news
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbart-xsum-6-6-finetuned-bbc-news

This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set (a minimal usage sketch follows the list):
- Loss: 0.2624
- Rouge1: 62.1927
- Rouge2: 54.4754
- Rougel: 55.868
- Rougelsum: 60.9345

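As a quick way to try the checkpoint, the sketch below loads it with the `transformers` summarization pipeline. The Hub repo id `Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news` is an assumption inferred from the commit author and model name, and the generation lengths are illustrative defaults rather than values from the original setup.

```python
from transformers import pipeline

# Assumed Hub repo id (commit author + model name); adjust if the model lives elsewhere.
summarizer = pipeline(
    "summarization",
    model="Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news",
)

article = "Replace this with the BBC News article text you want to summarise."

# max_length/min_length are illustrative, not taken from the training setup.
summary = summarizer(article, max_length=62, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```
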
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

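For reference, the sketch below shows how the hyperparameters above might be expressed as `Seq2SeqTrainingArguments`; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions rather than settings taken from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: maps the listed hyperparameters onto Trainer arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-xsum-6-6-finetuned-bbc-news",  # assumed output directory
    learning_rate=5.6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch validation metrics
    predict_with_generate=True,   # assumption: needed to compute ROUGE during evaluation
)
```

The card lists `train_batch_size: 4` without mentioning gradient accumulation or device count, so the per-device values above are a direct reading of those numbers.
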
### Training results

The ROUGE columns are reported on a 0–100 scale; a sketch of computing comparable scores follows the table.

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.4213        | 1.0   | 445  | 0.2005          | 59.4886 | 51.7791 | 53.5126 | 58.3405   |
| 0.1355        | 2.0   | 890  | 0.1887          | 61.7474 | 54.2823 | 55.7324 | 60.5787   |
| 0.0891        | 3.0   | 1335 | 0.1932          | 61.1312 | 53.103  | 54.6992 | 59.8923   |
| 0.0571        | 4.0   | 1780 | 0.2141          | 60.8797 | 52.6195 | 54.4402 | 59.5298   |
| 0.0375        | 5.0   | 2225 | 0.2318          | 61.7875 | 53.8753 | 55.5068 | 60.5448   |
| 0.0251        | 6.0   | 2670 | 0.2484          | 62.3535 | 54.6029 | 56.2804 | 61.031    |
| 0.0175        | 7.0   | 3115 | 0.2542          | 61.6351 | 53.8248 | 55.6399 | 60.3765   |
| 0.0133        | 8.0   | 3560 | 0.2624          | 62.1927 | 54.4754 | 55.868  | 60.9345   |

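The ROUGE values above look like F1 scores scaled to 0–100, which is the usual convention of these auto-generated cards; the sketch below shows one way to compute comparable per-example scores with the `rouge_score` package (the package choice and stemming flag are assumptions, since the original evaluation code is not shown).

```python
from rouge_score import rouge_scorer

# Compute ROUGE-1/2/L/Lsum F1 for one (reference, prediction) pair,
# scaled to the 0-100 range used in the table above.
scorer = rouge_scorer.RougeScorer(
    ["rouge1", "rouge2", "rougeL", "rougeLsum"], use_stemmer=True
)

reference = "Gold summary of the article.\nSecond reference sentence."
prediction = "Model-generated summary of the article."

scores = scorer.score(reference, prediction)  # signature: score(target, prediction)
for name, score in scores.items():
    print(f"{name}: {score.fmeasure * 100:.4f}")
```
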
### Framework versions

- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1