This is a fine-tuned version of TinyLlama-1.1B-intermediate-step-240k-503b using the sam-mosaic/orca-gpt4-chatml dataset.
## Training
- Method: QLoRA
- Quantization: fp16
- Time: 20h on an RTX 4090 (rented from runpod.io)
- Cost: about $15
- Based on: https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g
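Since the training data (sam-mosaic/orca-gpt4-chatml) uses the ChatML format, prompts at inference time should follow the same template. Below is a minimal sketch of building such a prompt; the helper function name is an illustrative assumption, not part of this model card.

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt string.

    ChatML wraps each turn in <|im_start|>ROLE ... <|im_end|> markers;
    the trailing assistant header cues the model to generate its reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful assistant.",
    "Explain QLoRA in one sentence.",
)
print(prompt)
```

Pass the resulting string to the tokenizer and generate until the model emits `<|im_end|>` (or use it as a stop sequence).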