
longt5_xl_sfd_bp_40

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.6048

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • num_epochs: 25.0
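The listed total train batch size follows from the per-device batch size and gradient accumulation. A quick sanity check (assuming a single device, since no device count is listed in the card):

```python
# Sanity-check the effective batch size implied by the hyperparameters above.
train_batch_size = 8             # per-device batch size from the card
gradient_accumulation_steps = 32
num_devices = 1                  # assumption: the card does not list a device count

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # matches the listed total_train_batch_size of 256
```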

Training results

Training Loss Epoch Step Validation Loss
0.1855 0.97 14 2.5320
0.1635 1.95 28 2.4299
0.1272 2.99 43 2.9443
0.1113 3.97 57 2.8813
0.0819 4.94 71 3.0005
0.0782 5.98 86 3.0224
0.0588 6.96 100 3.1903
0.0729 8.0 115 2.5871
0.0473 8.97 129 3.2830
0.113 9.95 143 3.3443
0.0364 10.99 158 3.3243
0.0321 11.97 172 3.3962
0.0302 12.94 186 3.4508
0.0717 13.98 201 3.4166
0.0746 14.96 215 2.8975
0.0548 16.0 230 3.0853
0.0507 16.97 244 3.0706
0.0442 17.95 258 3.2759
0.0396 18.99 273 3.1962
0.0351 19.97 287 3.3108
0.0306 20.94 301 3.2607
0.0267 21.98 316 3.4015
0.1454 22.96 330 2.6912
0.0252 24.0 345 3.4576
0.0187 24.35 350 3.6048
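If the validation loss is a mean per-token cross-entropy (typical for seq2seq language modeling, though the card does not say), it can be converted to perplexity with exp(loss). A small sketch using the final and lowest validation losses from the table above:

```python
import math

# Convert validation loss to perplexity, assuming it is a mean per-token
# cross-entropy (not confirmed by the card).
final_val_loss = 3.6048  # last row of the table (epoch 24.35)
best_val_loss = 2.4299   # lowest value in the table (epoch 1.95)

print(round(math.exp(final_val_loss), 1))
print(round(math.exp(best_val_loss), 1))
```

Note that the lowest validation loss occurs around epoch 2 while training loss keeps falling, so later checkpoints appear to overfit.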

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2
Model stats

  • Format: Safetensors
  • Model size: 2.85B params
  • Tensor type: F32