
AraBART-finetuned-xlsum-ar

This model is a fine-tuned version of moussaKam/AraBART. The training dataset is not documented in this card, though the model name suggests the Arabic subset of XL-Sum. It achieves the following results on the evaluation set:

  • Loss: 2.4655
  • ROUGE-1: 24.4029
  • ROUGE-2: 10.6961
  • ROUGE-L: 21.8597
  • ROUGE-Lsum: 21.9193
  • Gen Len: 19.6173
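Since the card does not include a usage snippet, here is a minimal sketch of loading this checkpoint for Arabic summarization via the transformers pipeline API. The generation settings (`max_length`, truncation) are illustrative assumptions, not values documented in this card.

```python
# Hedged sketch: summarization with this checkpoint via the pipeline API.
# MODEL_ID comes from this card; generation settings are assumptions.
from transformers import pipeline

MODEL_ID = "Osame1/AraBART-finetuned-xlsum-ar"


def summarize(text: str, max_length: int = 64) -> str:
    """Summarize one Arabic article; downloads the checkpoint on first call."""
    summarizer = pipeline("summarization", model=MODEL_ID)
    result = summarizer(text, max_length=max_length, truncation=True)
    return result[0]["summary_text"]


if __name__ == "__main__":
    article = "..."  # an Arabic news article goes here
    print(summarize(article))
```

Note that the reported average generated length (~19.6 tokens) suggests fairly short summaries; `max_length` can likely be kept small.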

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
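The hyperparameters above map directly onto transformers' `Seq2SeqTrainingArguments`. The sketch below shows that mapping under stated assumptions: `output_dir` and `predict_with_generate` are my additions, and the Adam betas (0.9, 0.999) and epsilon (1e-08) listed above are the transformers defaults, so they need no explicit argument.

```python
# Sketch: the card's listed hyperparameters expressed as a training config.
# Keys follow Seq2SeqTrainingArguments parameter names.
HPARAMS = {
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "num_train_epochs": 10,
    "lr_scheduler_type": "linear",
    "fp16": True,  # "Native AMP" mixed precision
}

if __name__ == "__main__":
    from transformers import Seq2SeqTrainingArguments

    # Adam betas/epsilon are left at the library defaults (0.9, 0.999, 1e-08).
    args = Seq2SeqTrainingArguments(
        output_dir="arabart-finetuned-xlsum-ar",  # assumed, not in the card
        predict_with_generate=True,  # assumed, needed for ROUGE during eval
        **HPARAMS,
    )
    print(args.learning_rate)
```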

Training results

| Training Loss | Epoch | Step  | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|-------|-----------------|---------|---------|---------|------------|---------|
| 2.8881        | 1.0   | 2111  | 2.5078          | 23.0537 | 9.805   | 20.6712 | 20.7358    | 19.4371 |
| 2.7229        | 2.0   | 4222  | 2.4712          | 23.4792 | 10.0638 | 21.0179 | 21.0808    | 19.5933 |
| 2.6235        | 3.0   | 6333  | 2.4606          | 23.793  | 10.2551 | 21.2806 | 21.3525    | 19.5784 |
| 2.5475        | 4.0   | 8444  | 2.4557          | 23.8559 | 10.2547 | 21.3093 | 21.383     | 19.6013 |
| 2.4579        | 5.0   | 10555 | 2.4567          | 24.3906 | 10.6549 | 21.8215 | 21.8672    | 19.6471 |
| 2.4124        | 6.0   | 12666 | 2.4578          | 24.3648 | 10.6614 | 21.8584 | 21.9202    | 19.6018 |
| 2.38          | 7.0   | 14777 | 2.4606          | 24.3488 | 10.722  | 21.8546 | 21.9218    | 19.5938 |
| 2.3422        | 8.0   | 16888 | 2.4605          | 24.4836 | 10.7873 | 21.9424 | 21.9996    | 19.6215 |
| 2.3185        | 9.0   | 18999 | 2.4630          | 24.2878 | 10.6124 | 21.8332 | 21.8687    | 19.5949 |
| 2.2988        | 10.0  | 21110 | 2.4655          | 24.4029 | 10.6961 | 21.8597 | 21.9193    | 19.6173 |
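ROUGE scores of the kind tabulated above are typically computed from decoded generations with the `evaluate` library. A minimal sketch, with placeholder texts standing in for real model outputs and references:

```python
# Sketch: computing ROUGE as in the evaluation above, via the `evaluate`
# library. Loading the metric may require network access on first use.
def compute_rouge(predictions, references):
    import evaluate

    rouge = evaluate.load("rouge")
    # Returns rouge1 / rouge2 / rougeL / rougeLsum as F1 scores in [0, 1];
    # the table reports them scaled by 100.
    return rouge.compute(predictions=predictions, references=references)


if __name__ == "__main__":
    scores = compute_rouge(["a generated summary"], ["a reference summary"])
    print({k: round(v * 100, 4) for k, v in scores.items()})
```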

Framework versions

  • Transformers 4.39.3
  • PyTorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size

139M parameters (F32, Safetensors)
