---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa10000-distill-of-fresh-2-layer-mmlu_EVAL_mmlu
  results: []
---

# fresh-2-layer-medmcqa10000-distill-of-fresh-2-layer-mmlu_EVAL_mmlu

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set (a hedged inference sketch follows these results):

- Loss: 182.8575
- Accuracy: 0.4225
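
The card does not yet include usage code, so below is a minimal inference sketch. It assumes the checkpoint is a standard Transformers multiple-choice model (the accuracy metric and the MedMCQA/MMLU naming suggest 4-option multiple choice) and that it is hosted under the assumed repo id `afaji/fresh-2-layer-medmcqa10000-distill-of-fresh-2-layer-mmlu_EVAL_mmlu`; verify the repo id and head type before relying on it.

```python
# Minimal inference sketch (not from the training repo); assumptions are noted inline.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Assumed repo id: owner "afaji" + the model name above; adjust if the checkpoint lives elsewhere.
repo_id = "afaji/fresh-2-layer-medmcqa10000-distill-of-fresh-2-layer-mmlu_EVAL_mmlu"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMultipleChoice.from_pretrained(repo_id)
model.eval()

# An illustrative MedMCQA/MMLU-style 4-option question.
question = "Which vitamin deficiency causes scurvy?"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Pair the question with each option; a multiple-choice head expects (batch, num_choices, seq_len).
enc = tokenizer([question] * len(options), options,
                return_tensors="pt", padding=True, truncation=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**batch).logits  # shape (1, num_choices)

print("Predicted answer:", options[logits.argmax(dim=-1).item()])
```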

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
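
The list above maps directly onto `transformers.TrainingArguments`. The sketch below is an assumed reconstruction, not the original training script; the `output_dir`, `evaluation_strategy`, `eval_steps`, and `logging_steps` values are inferred from the 100-step evaluation and 500-step loss-logging cadence visible in the results table.

```python
# Hedged reconstruction of the training configuration from the hyperparameters above;
# not the original script. output_dir and the logging/eval cadence are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fresh-2-layer-medmcqa10000-distill-of-fresh-2-layer-mmlu_EVAL_mmlu",
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=321,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,                # training_steps above
    evaluation_strategy="steps",   # inferred: validation metrics reported every 100 steps
    eval_steps=100,
    logging_steps=500,             # inferred: training loss appears every 500 steps ("No log" before that)
)
```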

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.32  | 100  | 200.8704        | 0.264    |
| No log        | 0.64  | 200  | 196.7610        | 0.376    |
| No log        | 0.96  | 300  | 208.2900        | 0.39     |
| No log        | 1.28  | 400  | 191.1668        | 0.382    |
| 139.6607      | 1.6   | 500  | 200.9705        | 0.398    |
| 139.6607      | 1.92  | 600  | 189.0896        | 0.406    |
| 139.6607      | 2.24  | 700  | 185.7183        | 0.41     |
| 139.6607      | 2.56  | 800  | 174.6011        | 0.426    |
| 139.6607      | 2.88  | 900  | 186.1249        | 0.422    |
| 79.571        | 3.19  | 1000 | 185.1113        | 0.424    |
| 79.571        | 3.51  | 1100 | 181.1421        | 0.398    |
| 79.571        | 3.83  | 1200 | 186.5035        | 0.412    |
| 79.571        | 4.15  | 1300 | 184.2203        | 0.432    |
| 79.571        | 4.47  | 1400 | 189.4636        | 0.396    |
| 56.051        | 4.79  | 1500 | 188.4894        | 0.412    |
| 56.051        | 5.11  | 1600 | 190.0390        | 0.43     |
| 56.051        | 5.43  | 1700 | 191.3842        | 0.416    |
| 56.051        | 5.75  | 1800 | 195.1680        | 0.406    |

### Framework versions

- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0