
es_fi_all_quy

This model is a fine-tuned version of nouman-10/es_fi_all_quy on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4858
  • Bleu: 1.9345
  • Chrf: 34.1553
  • Gen Len: 38.1579
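
As a starting point for using the checkpoint, here is a minimal sketch of loading it for translation with the transformers API. The repo id is taken from this card; the translation direction (the name suggests Spanish/Finnish into Quechua, `quy`) and the generation settings are assumptions, since the card does not document them.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned checkpoint from the Hub (repo id from this card).
model_name = "nouman-10/es_fi_all_quy"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate a Spanish sentence; beam size and max length are illustrative,
# not values documented by the card.
inputs = tokenizer("Hola, ¿cómo estás?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
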

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
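
For reference, a linear scheduler decays the learning rate from its initial value to zero over the course of training. A minimal pure-Python sketch of that decay, assuming no warmup steps (the card does not specify any):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Learning rate at `step` under a warmup-free linear decay schedule."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# The full rate is used at the start, half of it halfway through,
# and it reaches zero at the final step.
print(linear_lr(0, 15000))      # 2e-05
print(linear_lr(7500, 15000))   # 1e-05
print(linear_lr(15000, 15000))  # 0.0
```
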

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Chrf    | Gen Len |
|---------------|-------|-------|-----------------|--------|---------|---------|
| 0.2239        | 0.09  | 1000  | 0.4842          | 1.613  | 34.0838 | 37.4266 |
| 0.2256        | 0.17  | 2000  | 0.4808          | 1.5738 | 32.9285 | 43.005  |
| 0.2178        | 0.26  | 3000  | 0.4820          | 1.499  | 32.8545 | 41.5795 |
| 0.2175        | 0.34  | 4000  | 0.4828          | 1.6725 | 33.3302 | 39.0342 |
| 0.223         | 0.43  | 5000  | 0.4864          | 2.343  | 34.8172 | 35.6197 |
| 0.2286        | 0.51  | 6000  | 0.4743          | 1.7218 | 33.6246 | 42.0755 |
| 0.2281        | 0.6   | 7000  | 0.4794          | 1.7699 | 34.7209 | 38.7716 |
| 0.2243        | 0.68  | 8000  | 0.4802          | 1.7605 | 34.16   | 37.6258 |
| 0.236         | 0.77  | 9000  | 0.4772          | 1.5908 | 33.9113 | 39.0543 |
| 0.2309        | 0.85  | 10000 | 0.4756          | 2.0459 | 34.2757 | 38.2404 |
| 0.2303        | 0.94  | 11000 | 0.4762          | 1.8315 | 34.3413 | 37.7465 |
| 0.2218        | 1.02  | 12000 | 0.4860          | 1.8359 | 33.5176 | 38.5392 |
| 0.217         | 1.11  | 13000 | 0.4811          | 1.9919 | 34.0441 | 37.3702 |
| 0.2158        | 1.19  | 14000 | 0.4845          | 1.6391 | 34.5521 | 37.0493 |
| 0.2211        | 1.28  | 15000 | 0.4858          | 1.9345 | 34.1553 | 38.1579 |
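
The Chrf column above is a chrF score: an F-score over character n-gram overlap between hypothesis and reference. The reported numbers presumably come from the evaluation pipeline (typically sacrebleu); the following is only a simplified pure-Python sketch of the idea, without sacrebleu's word n-grams, whitespace handling, or corpus-level aggregation.

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Counts of character n-grams, ignoring spaces (simplification)."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average F-beta over character n-gram orders 1..max_n, scaled to 0-100."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # text too short for this n-gram order
        overlap = sum((hyp & ref).values())
        p = overlap / sum(hyp.values())  # n-gram precision
        r = overlap / sum(ref.values())  # n-gram recall
        if p + r == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * p * r / (beta**2 * p + r))
    return 100 * sum(scores) / len(scores) if scores else 0.0

print(chrf("hello world", "hello world"))  # 100.0
print(chrf("abc", "xyz"))                  # 0.0
```
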

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu117
  • Datasets 2.11.0
  • Tokenizers 0.13.3