---
base_model: habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
datasets:
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: habanoz
model_name: TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF

Quantized GGUF model files for [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1) from [habanoz](https://huggingface.co/habanoz).


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.fp16.gguf) | fp16 | 2.20 GB  |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q2_k.gguf) | q2_k | 482.14 MB  |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q3_k_m.gguf) | q3_k_m | 549.85 MB  |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q4_k_m.gguf) | q4_k_m | 667.81 MB  |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q5_k_m.gguf) | q5_k_m | 782.04 MB  |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q6_k.gguf) | q6_k | 903.41 MB  |
| [tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q8_0.gguf) | q8_0 | 1.17 GB  |
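
Any of these files can be fetched individually. As a minimal sketch, assuming the `huggingface_hub` package is installed (it provides the `huggingface-cli` tool), the q4_k_m file could be downloaded like this:

```bash
# Download a single quant from this repo; any filename from the table above works.
pip install huggingface_hub
huggingface-cli download \
  afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-4epochs-oasst1-top1-instruct-V1-GGUF \
  tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q4_k_m.gguf \
  --local-dir .
```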



## Original Model Card:
TinyLlama-1.1B-intermediate-step-715k-1.5T fine-tuned on the OpenAssistant/oasst_top1_2023-08-25 dataset.

QLoRA was used for fine-tuning, and the resulting adapter was merged back into the base model.

SFT code:
https://github.com/habanoz/qlora.git
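
The training command below references several environment variables that are not defined in the original card; the values here are only plausible examples:

```bash
# Hypothetical setup for the variables used below (assumed, not from the original card)
export BASE_DIR=$HOME/work
export BASE_MODEL=TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T  # assumed base checkpoint
export OUTPUT_NAME=tinyllama-1.1b-oasst1-top1-instruct-v1
mkdir -p $BASE_DIR
git clone https://github.com/habanoz/qlora.git $BASE_DIR/qlora
```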

Command used:
```bash
accelerate launch $BASE_DIR/qlora/train.py \
  --model_name_or_path $BASE_MODEL \
  --working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
  --output_dir $BASE_DIR/$OUTPUT_NAME-peft \
  --merged_output_dir $BASE_DIR/$OUTPUT_NAME \
  --final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
  --num_train_epochs 4 \
  --logging_steps 1 \
  --save_strategy steps \
  --save_steps 75 \
  --save_total_limit 2 \
  --data_seed 11422 \
  --evaluation_strategy steps \
  --per_device_eval_batch_size 4 \
  --eval_dataset_size 0.01 \
  --eval_steps 75 \
  --max_new_tokens 1024 \
  --dataloader_num_workers 3 \
  --logging_strategy steps \
  --do_train \
  --do_eval \
  --lora_r 64 \
  --lora_alpha 16 \
  --lora_modules all \
  --bits 4 \
  --double_quant \
  --quant_type nf4 \
  --lr_scheduler_type constant \
  --dataset oasst1-top1 \
  --dataset_format oasst1 \
  --model_max_len 1024 \
  --per_device_train_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --learning_rate 1e-5 \
  --adam_beta2 0.999 \
  --max_grad_norm 0.3 \
  --lora_dropout 0.0 \
  --weight_decay 0.0 \
  --seed 11422 \
  --gradient_checkpointing \
  --use_flash_attention_2 \
  --ddp_find_unused_parameters False
```
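
Once one of the GGUF files above is downloaded, a minimal sketch of local inference with llama.cpp's example CLI (assuming llama.cpp is already built; the `### Human:` / `### Assistant:` prompt format follows the qlora repo's oasst1 dataset format and is an assumption — verify it against the model before relying on it):

```bash
# Run the q4_k_m quant with llama.cpp's example binary (./main in 2023-era builds).
./main \
  -m tinyllama-1.1b-intermediate-step-715k-1.5t-lr-5-4epochs-oasst1-top1-instruct-v1.q4_k_m.gguf \
  -p "### Human: Explain the GGUF file format in one paragraph.### Assistant:" \
  -n 256 --temp 0.7
```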