Built with Axolotl

Base model:

PY007/TinyLlama-1.1B-intermediate-step-480k-1T

Dataset:

Fine-tuned on the OpenOrca GPT-4 subset for 1 epoch, using the ChatML prompt format. A minimal inference sketch follows below.
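
Since training used ChatML-formatted conversations, inference prompts should use the same tags. The sketch below uses the standard transformers API; the system prompt, generation settings, and the assumption that the ChatML tags tokenize cleanly with this checkpoint's tokenizer are illustrative, not taken from the card.

```python
# Minimal ChatML inference sketch (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeff31415/TinyLlama-1.1B-1T-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a ChatML prompt; the system message here is a placeholder.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain what a transformer is in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```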

Model License:

Apache 2.0, inherited from the TinyLlama base model.

Quantisation:

Hardware and training details:

Hardware: 1× RTX A5000; ~16 hours to complete 1 epoch. The GPU was rented from autodl.com, and the full fine-tune cost around $3. See https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details.
