---
license: mit
tags:
  - translation
  - generated_from_trainer
datasets:
  - tatoeba
metrics:
  - bleu
model-index:
  - name: ft-tatoeba-ar-en
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: tatoeba
          type: tatoeba
          args: ar-en
        metrics:
          - name: Bleu
            type: bleu
            value: 49.84455855787226
widget:
  - text: كريستيانو رونالدو يلعب مع نادي يوفنتوس
    example_title: Sentence 1
  - text: تخرج أحمد من الجامعة الأمريكية في الشارقة الشهر الماضي
    example_title: Sentence 2
  - text: لا يزال ديبالا يلعب لفريق يوفنتوس
    example_title: Sentence 3
  - text: شو عملتوا امس ؟
    example_title: Sentence 4
---

# ft-tatoeba-ar-en

This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the tatoeba dataset. It achieves the following results on the evaluation set:

- Loss: 0.7431
- Bleu: 49.8446
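
A minimal inference sketch with 🤗 Transformers is shown below. The checkpoint path is a placeholder (substitute the actual Hub repository id or a local directory); everything else follows the standard M2M100 translation API.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Placeholder: replace with the actual Hub repo id or local checkpoint directory.
model_path = "ft-tatoeba-ar-en"

tokenizer = M2M100Tokenizer.from_pretrained(model_path)
model = M2M100ForConditionalGeneration.from_pretrained(model_path)

# M2M100 is multilingual: declare the source language and force English output.
tokenizer.src_lang = "ar"
inputs = tokenizer("كريستيانو رونالدو يلعب مع نادي يوفنتوس", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))

print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```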

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
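
Since the card carries the `generated_from_trainer` tag, these values presumably correspond to a `Seq2SeqTrainingArguments` configuration. The sketch below is one plausible mapping; the output directory and evaluation-related settings are assumptions, not taken from the actual training run.

```python
from transformers import Seq2SeqTrainingArguments

# A plausible Trainer configuration matching the hyperparameters listed above.
# output_dir and predict_with_generate are assumptions, not from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="ft-tatoeba-ar-en",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                   # mixed_precision_training: Native AMP
    predict_with_generate=True,  # generate text at eval time so BLEU can be computed
)
```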

### Training results

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6