---
license: mit
datasets:
  - id_liputan6
language:
  - id
metrics:
  - rouge
pipeline_tag: summarization
tags:
  - bart
  - text2text-generation
---

# bart-indo-small

This model is a fine-tuned version of bart-large-cnn on the Liputan6 (id_liputan6) Indonesian news summarization dataset.
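Since the card declares `pipeline_tag: summarization`, the model can be loaded through the Transformers `pipeline` API. A minimal sketch, assuming the repository id is `gaduhhartawan/bart-indo-small` (inferred from this page; adjust if it differs):

```python
from transformers import pipeline

# Repository id assumed from this model card's page; verify before use.
summarizer = pipeline("summarization", model="gaduhhartawan/bart-indo-small")

# A short Indonesian news snippet as example input.
article = (
    "Liputan6.com, Jakarta: Pemerintah mengumumkan rencana pembangunan "
    "jalur kereta cepat baru yang menghubungkan Jakarta dan Surabaya."
)
summary = summarizer(article, max_length=64, min_length=10, truncation=True)
print(summary[0]["summary_text"])
```

The generation parameters (`max_length`, `min_length`) are illustrative defaults, not values taken from this card.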

## Training procedure

### Training hyperparameters

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
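The `linear` scheduler decays the learning rate from its initial value to zero over the full run (48,000 steps here, per the results table). A pure-Python sketch of that schedule, assuming no warmup steps since none are listed:

```python
def linear_lr(step, total_steps, base_lr=1e-4, warmup_steps=0):
    """Learning rate under a linear-decay schedule with optional warmup.

    Assumes warmup_steps=0, matching this card, which lists no warmup.
    """
    if warmup_steps and step < warmup_steps:
        # Linear ramp-up from 0 to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Linear decay from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# At step 0 the rate is the configured 1e-4; halfway through it is 5e-5.
print(linear_lr(0, 48000))      # 1e-4
print(linear_lr(24000, 48000))  # 5e-5
```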

### Training results

| Training Loss | Epoch | Step  | R1 Precision | R1 Recall | R1 Fmeasure | R2 Precision | R2 Recall | R2 Fmeasure | Rl Precision | Rl Recall | Rl Fmeasure |
|:-------------:|:-----:|:-----:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|
| 0.2443        | 1.0   | 48000 | 0.3579       | 0.6416    | 0.4468      | 0.1163       | 0.2467    | 0.1551      | 0.3499       | 0.625     | 0.4359      |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1