---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: opus_infopankki
      type: opus_infopankki
      args: ar-en
    metrics:
    - name: Bleu
      type: bleu
      value: 28.9919
---

# opus-mt-ar-en-finetuned-ar-to-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_infopankki dataset. It achieves the following results on the evaluation set:

- Loss: 1.6629
- Bleu: 28.9919
- Gen Len: 15.6512

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
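The hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration in `transformers`; a sketch of the equivalent setup is shown below. The `output_dir` and `predict_with_generate` values are assumptions, not taken from the card.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ar-en-finetuned-ar-to-en",
    learning_rate=2e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=11,
    fp16=True,                   # Native AMP mixed-precision training
    predict_with_generate=True,  # assumed, so BLEU can be computed at eval time
)
```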

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 159  | 2.1441          | 22.0728 | 16.5805 |
| No log        | 2.0   | 318  | 2.0141          | 23.888  | 16.3754 |
| No log        | 3.0   | 477  | 1.9228          | 25.2541 | 15.835  |
| 2.2356        | 4.0   | 636  | 1.8523          | 26.299  | 15.7874 |
| 2.2356        | 5.0   | 795  | 1.7971          | 26.8646 | 16.0247 |
| 2.2356        | 6.0   | 954  | 1.7536          | 27.2391 | 16.013  |
| 1.9609        | 7.0   | 1113 | 1.7200          | 27.7471 | 16.0237 |
| 1.9609        | 8.0   | 1272 | 1.6945          | 28.4924 | 15.6563 |
| 1.9609        | 9.0   | 1431 | 1.6771          | 28.8024 | 15.6445 |
| 1.8388        | 10.0  | 1590 | 1.6665          | 29.0016 | 15.6429 |
| 1.8388        | 11.0  | 1749 | 1.6629          | 28.9919 | 15.6512 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.7.1+cu110
- Datasets 2.2.2
- Tokenizers 0.12.1