---
language:
  - zh
  - yue
base_model: indiejoseph/bart-base-cantonese
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: bart-translation-zh-yue
    results: []
---

bart-translation-zh-yue

This model is a fine-tuned version of indiejoseph/bart-base-cantonese for Mandarin (zh) to Cantonese (yue) translation, trained on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1612
  • Bleu: 88.1708
  • Gen Len: 41.6859
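
A minimal inference sketch with the `transformers` library is shown below. It assumes the fine-tuned checkpoint is published under the repo id `indiejoseph/bart-translation-zh-yue` (matching the model name above); replace the identifier with the actual checkpoint path if it differs.

```python
# Minimal inference sketch. The repo id below is an assumption based on the
# model name in this card; point it at the actual checkpoint if different.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "indiejoseph/bart-translation-zh-yue"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate a Mandarin (zh) sentence into Cantonese (yue).
text = "你今天吃饭了吗？"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```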

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6.0
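
For reference, the sketch below shows how these settings map onto `Seq2SeqTrainingArguments`; it is an assumption about the setup, not the exact script that produced this checkpoint. Data loading, preprocessing, and the `Seq2SeqTrainer` call are omitted, and `output_dir` is illustrative. The Adam betas and epsilon listed above are the library defaults, so they are not passed explicitly.

```python
# Hedged reconstruction of the training arguments listed above; not the
# original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-translation-zh-yue",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6.0,
    predict_with_generate=True,   # needed to compute BLEU / Gen Len at eval time
    evaluation_strategy="epoch",  # assumption: one eval per epoch, as in the results table
)
```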

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2219        | 1.0   | 2336  | 0.1816          | 21.2068 | 19.2422 |
| 0.1479        | 2.0   | 4672  | 0.1609          | 21.6783 | 19.2471 |
| 0.1092        | 3.0   | 7008  | 0.1534          | 21.8526 | 19.2463 |
| 0.0856        | 4.0   | 9344  | 0.1525          | 22.0841 | 19.2482 |
| 0.0667        | 5.0   | 11680 | 0.1588          | 22.1943 | 19.2467 |
| 0.0549        | 6.0   | 14016 | 0.1612          | 22.2237 | 19.2467 |
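
The card does not state which BLEU implementation the trainer's metric hook used. As a point of reference, a BLEU score can be computed with the `evaluate` library's sacreBLEU metric as in the sketch below; the sentences are placeholders, not items from this model's evaluation set, and `tokenize="zh"` is an assumption appropriate for Chinese text.

```python
# Placeholder BLEU-scoring sketch with sacreBLEU via `evaluate`; the
# sentences are illustrative and the zh tokenizer choice is an assumption.
import evaluate

sacrebleu = evaluate.load("sacrebleu")
predictions = ["你今日食咗飯未呀？"]
references = [["你今日食咗飯未呀？"]]  # one list of reference(s) per prediction
result = sacrebleu.compute(predictions=predictions, references=references, tokenize="zh")
print(f"BLEU: {result['score']:.2f}")
```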

Framework versions

  • Transformers 4.35.0.dev0
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.6
  • Tokenizers 0.14.1