
franco-arabics

This model is a fine-tuned version of google-t5/t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2175

Model description

More information needed

Intended uses & limitations

More information needed
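
Until the task is documented, the snippet below is only a minimal loading-and-generation sketch using the standard Transformers seq2seq API. The example input is hypothetical: the model name suggests Franco-Arabic (Arabizi) text, but the actual training data and task are unknown.

```python
# Minimal inference sketch; the checkpoint ID is this repo, but the
# example input is hypothetical since the task is not documented.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mohamedtolba/franco-arabics"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("ezayak ya sa7by?", return_tensors="pt")  # hypothetical input
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```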

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto the Trainer API follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
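
A hedged sketch of how the listed values map onto Seq2SeqTrainingArguments in Transformers 4.31. The one-row dummy dataset is a placeholder, since the real training data is not documented; Adam with betas=(0.9, 0.999) and epsilon=1e-08 is already the Trainer's default optimizer configuration, so it needs no explicit argument.

```python
# Sketch only: wires the hyperparameters above into the Trainer API.
# The one-row dummy dataset is a placeholder for the undocumented data.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Placeholder data so the sketch runs end to end.
enc = tokenizer(["hypothetical input"], text_target=["hypothetical output"])
train_ds = eval_ds = Dataset.from_dict(dict(enc))

args = Seq2SeqTrainingArguments(
    output_dir="franco-arabics",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # matches the per-epoch log below
    # Adam(betas=(0.9, 0.999), eps=1e-08) is the default AdamW config.
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```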

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 12   | 1.1244          |
| No log        | 2.0   | 24   | 0.8089          |
| No log        | 3.0   | 36   | 0.6188          |
| No log        | 4.0   | 48   | 0.5041          |
| No log        | 5.0   | 60   | 0.4299          |
| No log        | 6.0   | 72   | 0.3806          |
| No log        | 7.0   | 84   | 0.3308          |
| No log        | 8.0   | 96   | 0.3113          |
| No log        | 9.0   | 108  | 0.2996          |
| No log        | 10.0  | 120  | 0.2801          |
| No log        | 11.0  | 132  | 0.2607          |
| No log        | 12.0  | 144  | 0.2541          |
| No log        | 13.0  | 156  | 0.2543          |
| No log        | 14.0  | 168  | 0.2473          |
| No log        | 15.0  | 180  | 0.2464          |
| No log        | 16.0  | 192  | 0.2437          |
| No log        | 17.0  | 204  | 0.2457          |
| No log        | 18.0  | 216  | 0.2365          |
| No log        | 19.0  | 228  | 0.2385          |
| No log        | 20.0  | 240  | 0.2364          |
| No log        | 21.0  | 252  | 0.2310          |
| No log        | 22.0  | 264  | 0.2214          |
| No log        | 23.0  | 276  | 0.2156          |
| No log        | 24.0  | 288  | 0.2153          |
| No log        | 25.0  | 300  | 0.2177          |
| No log        | 26.0  | 312  | 0.2195          |
| No log        | 27.0  | 324  | 0.2195          |
| No log        | 28.0  | 336  | 0.2183          |
| No log        | 29.0  | 348  | 0.2174          |
| No log        | 30.0  | 360  | 0.2175          |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3
