silmi224 committed
Commit 971597c · verified · 1 parent: 4135284

Training complete

README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ base_model: silmi224/finetune-led-35000
+ tags:
+ - summarization
+ - generated_from_trainer
+ model-index:
+ - name: led-risalah_data_v14
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # led-risalah_data_v14
+
+ This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.7199
+ - Rouge1 Precision: 0.6769
+ - Rouge1 Recall: 0.1724
+ - Rouge1 Fmeasure: 0.2744
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure |
+ |:-------------:|:------:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
+ | 1.6706        | 0.9714 | 17   | 1.8400          | 0.6261           | 0.1547        | 0.2478          |
+ | 1.5177        | 2.0    | 35   | 1.7573          | 0.6586           | 0.1669        | 0.266           |
+ | 1.4016        | 2.9714 | 52   | 1.7266          | 0.6597           | 0.1689        | 0.2682          |
+ | 1.3182        | 4.0    | 70   | 1.7403          | 0.6564           | 0.1667        | 0.2653          |
+ | 1.217         | 4.9714 | 87   | 1.7272          | 0.657            | 0.1663        | 0.265           |
+ | 1.1559        | 6.0    | 105  | 1.7288          | 0.6493           | 0.1698        | 0.2687          |
+ | 1.1675        | 6.9714 | 122  | 1.7114          | 0.6727           | 0.1705        | 0.2717          |
+ | 1.1193        | 8.0    | 140  | 1.7118          | 0.6764           | 0.1734        | 0.2758          |
+ | 1.1101        | 8.9714 | 157  | 1.7232          | 0.6705           | 0.1726        | 0.2739          |
+ | 1.147         | 9.7143 | 170  | 1.7199          | 0.6769           | 0.1724        | 0.2744          |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.2
+ - Pytorch 2.1.2
+ - Datasets 2.19.2
+ - Tokenizers 0.19.1
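
The hyperparameters listed in the card map directly onto `Seq2SeqTrainingArguments`. The following is a minimal sketch, not the training script used for this commit: the dataset is listed as unknown, so `raw`, `preprocess`, and the 4096-token input truncation are placeholder assumptions, per-epoch evaluation is inferred from the step counts in the results table, and a CUDA GPU is assumed for native AMP.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Base checkpoint named in the model card.
base = "silmi224/finetune-led-35000"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Toy stand-in data: the card does not say which dataset was used.
raw = Dataset.from_dict({
    "document": ["Long source text to be summarized ..."],
    "summary": ["Short reference summary ..."],
})

def preprocess(batch):
    # 4096/128 are illustrative truncation lengths, not taken from the card.
    model_inputs = tokenizer(batch["document"], max_length=4096, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_ds = eval_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# Mirrors the hyperparameters above: lr 2e-05, per-device batch 1, gradient
# accumulation 4 (effective batch 4), 10 epochs, linear schedule, seed 42,
# native AMP. Adam betas/epsilon are the library defaults, matching the card.
args = Seq2SeqTrainingArguments(
    output_dir="led-risalah_data_v14",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,
    evaluation_strategy="epoch",  # assumption, inferred from the results table
    predict_with_generate=True,   # assumption, needed for generation-based ROUGE eval
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

With a per-device batch of 1 and gradient accumulation of 4, the effective batch size matches the reported total_train_batch_size of 4.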
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "length_penalty": 2.0,
+   "max_length": 128,
+   "min_length": 40,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 2,
+   "pad_token_id": 1,
+   "transformers_version": "4.41.2",
+   "use_cache": false
+ }
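
These generation defaults (2 beams, length penalty 2.0, 40–128 token summaries, no repeated trigrams) are applied automatically once the checkpoint is loaded. A minimal inference sketch, assuming the model is published as `silmi224/led-risalah_data_v14` (the repo ID is not stated in this commit) and following the usual LED convention of global attention on the first token:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "silmi224/led-risalah_data_v14"  # assumed repo ID, not confirmed here
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

document = "..."  # long input text to summarize
inputs = tokenizer(document, return_tensors="pt", truncation=True)

# LED convention: give the first token (<s>) global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

# The generation arguments below simply restate generation_config.json;
# they are written out here only to show where each field takes effect.
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=128,
    min_length=40,
    num_beams=2,
    length_penalty=2.0,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Passing the parameters explicitly is redundant once generation_config.json ships with the checkpoint; calling `model.generate(...)` with only the inputs would use the same values.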
runs/Jul10_07-05-56_f20029054ef3/events.out.tfevents.1720595161.f20029054ef3.34.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7e1f5ffc122abd4a6af45fca7712bd73d3a292015fc7269ef51dca0da290b044
- size 13124
+ oid sha256:e378e973ac2f9f48e3dad9e480c8d89acec8af67a82a377f06abc0467c722124
+ size 13925
runs/Jul10_07-05-56_f20029054ef3/events.out.tfevents.1720599150.f20029054ef3.34.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:994113770d4b36302f774fd5ca1ecc070850556d8adf51fdbfe054426e620e0e
+ size 535