---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

#### Base model:
https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints/tree/step-720k-token-1510B

This fine-tune was done on the "early" version of TinyLlama-1.5T, which suffers from a bug in dataset processing; see https://github.com/jzhang38/TinyLlama/issues/67.
Although the base model suffers from this glitch, its performance does not appear to have been damaged and it still shows improvement (metrics needed).

#### Dataset:
Fine-tuned on the OpenOrca GPT-4 subset for 1 epoch, using the ChatML format.
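
Below is a minimal sketch of prompting the model in the ChatML layout it was fine-tuned on, using Hugging Face `transformers`. The repo ID is an assumption inferred from this card and the GGUF repo name; adjust it if it differs.

```python
# Sketch only: repo ID below is assumed from this card, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ChatML wraps each turn in <|im_start|>{role} ... <|im_end|> tags.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "What is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```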

#### Model License:
Apache 2.0, following the TinyLlama base model.

#### Quantisation:
GGUF format: https://huggingface.co/s3nh/jeff31415-TinyLlama-1.1B-1.5T-OpenOrca-Alpha-GGUF
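
For CPU inference, the GGUF files can be run with `llama-cpp-python`. A hedged sketch follows; the exact `.gguf` filename inside the linked repo is an assumption, so check the repo's file listing first.

```python
# Sketch only: the .gguf filename is hypothetical; use the actual file
# from the linked GGUF repo.
from llama_cpp import Llama

llm = Llama(model_path="jeff31415-TinyLlama-1.1B-1.5T-OpenOrca-Alpha.Q4_0.gguf")
output = llm(
    "<|im_start|>user\nExplain what a GGUF file is.<|im_end|>\n"
    "<|im_start|>assistant\n",
    max_tokens=128,
    stop=["<|im_end|>"],  # stop at the end of the assistant turn
)
print(output["choices"][0]["text"])
```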

#### Hardware and training details:
Hardware: 1× RTX A5000, ~16 hours to complete 1 epoch. The GPU was rented from autodl.com; this fine-tune cost around $3.
See https://wandb.ai/jeff200402/TinyLlama-1.5T-alpha-Orca?workspace= for more details.