---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
  - generated_from_trainer
model-index:
  - name: code-llama-instruct-7b-text-to-sparql-axiom-prefix
    results: []
---

# code-llama-instruct-7b-text-to-sparql-axiom-prefix

This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unspecified dataset (presumably a text-to-SPARQL corpus, per the model name). It achieves the following results on the evaluation set:

- Loss: 0.0988
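
Since the card does not document a usage recipe, below is a minimal loading/inference sketch using the 🤗 Transformers API. The repository id, the example question, and the `[INST] ... [/INST]` prompt template (inherited from the CodeLlama-Instruct base model's convention) are all assumptions; adjust them to match the actual fine-tuning format.

```python
# Minimal inference sketch. Assumptions: the repository id below, the
# [INST]...[/INST] template from CodeLlama-Instruct, and the example
# question itself; none of these are documented in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexandrualexandru/code-llama-instruct-7b-text-to-sparql-axiom-prefix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on one ~16 GB GPU
    device_map="auto",
)

prompt = "[INST] Translate to SPARQL: Which rivers flow through Berlin? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```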

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP
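
As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below. The `output_dir` value is an assumption, and the dataset loading, tokenization, and any PEFT/quantization setup are omitted because they are not documented in this card.

```python
# Sketch mapping the listed hyperparameters onto TrainingArguments.
# output_dir is an assumption; the other values mirror the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="code-llama-instruct-7b-text-to-sparql-axiom-prefix",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=400,                  # "training_steps" above
    fp16=True,                      # native AMP mixed precision
    # The Trainer's default optimizer (AdamW) uses betas=(0.9, 0.999)
    # and epsilon=1e-8, matching the values reported above.
)
```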

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.138         | 0.0710 | 20   | 1.0843          |
| 0.6257        | 0.1421 | 40   | 0.3315          |
| 0.1388        | 0.2131 | 60   | 0.1390          |
| 0.1293        | 0.2842 | 80   | 0.1269          |
| 0.1174        | 0.3552 | 100  | 0.1205          |
| 0.1097        | 0.4263 | 120  | 0.1176          |
| 0.1102        | 0.4973 | 140  | 0.1131          |
| 0.1073        | 0.5684 | 160  | 0.1083          |
| 0.1064        | 0.6394 | 180  | 0.1064          |
| 0.1079        | 0.7105 | 200  | 0.1053          |
| 0.1025        | 0.7815 | 220  | 0.1042          |
| 0.1038        | 0.8526 | 240  | 0.1029          |
| 0.0962        | 0.9236 | 260  | 0.1023          |
| 0.1021        | 0.9947 | 280  | 0.1013          |
| 0.098         | 1.0657 | 300  | 0.1008          |
| 0.0964        | 1.1368 | 320  | 0.1003          |
| 0.0961        | 1.2078 | 340  | 0.0997          |
| 0.0948        | 1.2789 | 360  | 0.0994          |
| 0.0955        | 1.3499 | 380  | 0.0989          |
| 0.0988        | 1.4210 | 400  | 0.0988          |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.10.1
- Tokenizers 0.19.1