# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 1.6759
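Since the framework versions below list PEFT, this repository presumably hosts a parameter-efficient adapter that is loaded on top of the base model rather than a full set of weights. A minimal usage sketch under that assumption (the instruction prompt format shown is also an assumption; the card does not document one):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "ericrisco/llama2_instruct_generation"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
# Attach the fine-tuned adapter to the base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Prompt template is an assumption; adjust it to match the training data.
prompt = "### Instruction:\nSummarize what instruction tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```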
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 500
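The card does not say which trainer was used, but these values map directly onto `transformers.TrainingArguments`. A minimal sketch; the `output_dir` is assumed, and the 20-step eval/logging cadence is inferred from the results table below rather than stated on the card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama2_instruct_generation",  # assumed name
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    max_steps=500,
    adam_beta1=0.9,         # Adam betas=(0.9, 0.999) from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,      # epsilon=1e-08 from the card
    eval_strategy="steps",  # results table reports validation loss every 20 steps
    eval_steps=20,
    logging_steps=20,
)
```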
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9468        | 0.0027 | 20   | 1.8141          |
| 1.8734        | 0.0054 | 40   | 1.7849          |
| 1.8769        | 0.0081 | 60   | 1.7718          |
| 1.8633        | 0.0108 | 80   | 1.7599          |
| 1.8583        | 0.0135 | 100  | 1.7472          |
| 1.8264        | 0.0163 | 120  | 1.7176          |
| 1.8714        | 0.0190 | 140  | 1.7053          |
| 1.831         | 0.0217 | 160  | 1.7012          |
| 1.7957        | 0.0244 | 180  | 1.6947          |
| 1.8613        | 0.0271 | 200  | 1.6934          |
| 1.81          | 0.0298 | 220  | 1.6915          |
| 1.7995        | 0.0325 | 240  | 1.6893          |
| 1.9067        | 0.0352 | 260  | 1.6872          |
| 1.8261        | 0.0379 | 280  | 1.6860          |
| 1.8609        | 0.0406 | 300  | 1.6843          |
| 1.7725        | 0.0433 | 320  | 1.6835          |
| 1.8061        | 0.0461 | 340  | 1.6819          |
| 1.8842        | 0.0488 | 360  | 1.6804          |
| 1.7648        | 0.0515 | 380  | 1.6799          |
| 1.8121        | 0.0542 | 400  | 1.6796          |
| 1.8056        | 0.0569 | 420  | 1.6777          |
| 1.7423        | 0.0596 | 440  | 1.6780          |
| 1.8971        | 0.0623 | 460  | 1.6782          |
| 1.8234        | 0.0650 | 480  | 1.6771          |
| 1.8978        | 0.0677 | 500  | 1.6759          |
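Validation loss falls quickly over the first ~120 steps and is essentially flat from about step 400 onward (changing by less than 0.005), so the 500-step budget, roughly 0.068 of an epoch, appears to have been enough for this configuration to converge.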
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
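To approximate this environment, pinning the listed versions should work; installing the CUDA 12.1 build of PyTorch from the PyTorch wheel index is an assumption about how `torch 2.4.1+cu121` was obtained:

```bash
pip install peft==0.13.0 transformers==4.45.0 datasets==3.0.1 tokenizers==0.20.0
pip install torch==2.4.1 --index-url https://download.pytorch.org/whl/cu121
```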