gaduhhartawan committed
Commit 0502256
1 Parent(s): 1e4f25f

Update README.md

Files changed (1)
  1. README.md +26 -1
README.md CHANGED
@@ -10,4 +10,29 @@ pipeline_tag: summarization
  tags:
  - bart
  - text2text-generation
- ---
+ ---
+
+ # bart-indo-small
+ This model is a fine-tuned version of [bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [Liputan6](https://paperswithcode.com/dataset/liputan6) dataset.
+
+ ## Training procedure
+ ### Training hyperparameters
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 1
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | R1 Precision | R1 Recall | R1 Fmeasure | R2 Precision | R2 Recall | R2 Fmeasure | RL Precision | RL Recall | RL Fmeasure |
+ |:-------------:|:-----:|:-----:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|
+ | 0.2443 | 1.0 | 48000 | 0.3579 | 0.6416 | 0.4468 | 0.1163 | 0.2467 | 0.1551 | 0.3499 | 0.625 | 0.4359 |
+
+ ## Framework versions
+ - Transformers 4.40.0
+ - PyTorch 2.2.1+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
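
For context on the card added above, here is a minimal inference sketch; it is not part of the commit. It assumes the model is published on the Hub as `gaduhhartawan/bart-indo-small` (inferred from the author name and the card title), so adjust the repo id if the actual path differs.

```python
# Minimal summarization sketch; the repo id below is an assumption inferred
# from the commit author and card title, not stated in the commit itself.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="gaduhhartawan/bart-indo-small",  # assumed Hub repo id
)

# Placeholder Indonesian news text; replace with a real Liputan6-style article.
article = "Liputan6.com, Jakarta: teks artikel berita yang ingin diringkas ..."

summary = summarizer(article, max_length=128, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```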
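The card lists hyperparameters but not the training script. As a hedged sketch, this is how those values map onto `Seq2SeqTrainingArguments` in Transformers 4.40.0; the output directory and the surrounding dataset/Trainer wiring are assumptions, not details from the commit.

```python
# Sketch only: maps the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir is an assumed value; data loading and the Trainer are omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-indo-small",    # assumed output path
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=4,    # eval_batch_size: 4
    seed=42,                         # seed: 42
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon: 1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=1,              # num_epochs: 1
    predict_with_generate=True,      # generate summaries during evaluation (ROUGE)
)
```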