---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- sam-mosaic/orca-gpt4-chatml
language:
- en
---
This model is a fine-tuned version of TinyLlama-1.1B-intermediate-step-240k-503b, trained on the sam-mosaic/orca-gpt4-chatml dataset.
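The orca-gpt4-chatml dataset stores conversations in the ChatML format, where each turn is wrapped in `<|im_start|>` / `<|im_end|>` markers with a role name. A minimal sketch of that formatting (the helper function and example messages below are illustrative, not part of this repo):

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML string."""
    return "\n".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    )

example = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is QLoRA?"},
]
print(to_chatml(example))
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# What is QLoRA?<|im_end|>
```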
## Training
- Method: QLoRA
- Quantization: fp16
- Time: 20 h on an RTX 4090 (rented from runpod.io)
- Cost: About $15
- Based on: https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g
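A QLoRA setup along these lines can be sketched with `transformers`, `peft`, and `bitsandbytes`. This is a hedged illustration, not the exact configuration used for this model: the LoRA rank, alpha, dropout, and target modules below are assumed values.

```python
# Sketch only: hyperparameters are illustrative assumptions, not this model's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b"

# QLoRA: load the frozen base model in 4-bit NF4, compute in fp16.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb)
tokenizer = AutoTokenizer.from_pretrained(base)

# Attach small trainable LoRA adapters on top of the quantized weights.
lora = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    lora_dropout=0.05,                     # assumed dropout
    target_modules=["q_proj", "v_proj"],   # assumed target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```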