---
base_model: Tensoic/Tiny-Llama-openhermes-1.1B-step-715k-1.5T
inference: false
license: apache-2.0
model-index:
- name: out
results: []
model_creator: Tensoic
model_name: Tiny-Llama-openhermes-1.1B-step-715k-1.5T
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
datasets:
- teknium/openhermes
---
# Tensoic/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF
Quantized GGUF model files for [Tiny-Llama-openhermes-1.1B-step-715k-1.5T](https://huggingface.co/Tensoic/Tiny-Llama-openhermes-1.1B-step-715k-1.5T) from [Tensoic](https://huggingface.co/Tensoic).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tiny-llama-openhermes-1.1b-step-715k-1.5t.q2_k.gguf](https://huggingface.co/afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF/resolve/main/tiny-llama-openhermes-1.1b-step-715k-1.5t.q2_k.gguf) | q2_k | 482.14 MB |
| [tiny-llama-openhermes-1.1b-step-715k-1.5t.q3_k_m.gguf](https://huggingface.co/afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF/resolve/main/tiny-llama-openhermes-1.1b-step-715k-1.5t.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tiny-llama-openhermes-1.1b-step-715k-1.5t.q4_k_m.gguf](https://huggingface.co/afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF/resolve/main/tiny-llama-openhermes-1.1b-step-715k-1.5t.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tiny-llama-openhermes-1.1b-step-715k-1.5t.q5_k_m.gguf](https://huggingface.co/afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF/resolve/main/tiny-llama-openhermes-1.1b-step-715k-1.5t.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tiny-llama-openhermes-1.1b-step-715k-1.5t.q6_k.gguf](https://huggingface.co/afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF/resolve/main/tiny-llama-openhermes-1.1b-step-715k-1.5t.q6_k.gguf) | q6_k | 903.41 MB |
| [tiny-llama-openhermes-1.1b-step-715k-1.5t.q8_0.gguf](https://huggingface.co/afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF/resolve/main/tiny-llama-openhermes-1.1b-step-715k-1.5t.q8_0.gguf) | q8_0 | 1.17 GB |
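
To fetch one of these files programmatically, a minimal sketch using the `huggingface_hub` client (a tooling assumption, not something this card prescribes) could look like the following, taking the q4_k_m file as an example:

```python
from huggingface_hub import hf_hub_download

# Download the q4_k_m quant from this repo; any filename from the
# table above works the same way.
model_path = hf_hub_download(
    repo_id="afrideva/Tiny-Llama-openhermes-1.1B-step-715k-1.5T-GGUF",
    filename="tiny-llama-openhermes-1.1b-step-715k-1.5t.q4_k_m.gguf",
)
print(model_path)  # local cache path of the downloaded GGUF file
```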
# Prompt Format: Alpaca
```
### Instruction:
{Instruction/Question}
### Response:
```
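
One way to use this template is with `llama-cpp-python`, a common runtime for GGUF files (its use here is an assumption, not something specified by the original card). A minimal sketch, reusing the file downloaded above:

```python
from llama_cpp import Llama

# Assumes the q4_k_m file from the table above is present locally.
llm = Llama(
    model_path="tiny-llama-openhermes-1.1b-step-715k-1.5t.q4_k_m.gguf",
    n_ctx=2048,
)

# Fill the Alpaca template exactly as shown above.
prompt = (
    "### Instruction:\n"
    "Explain what a GGUF file is in one sentence.\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"].strip())
```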
## Original Model Card:
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T) on the [openhermes](https://huggingface.co/datasets/teknium/openhermes) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2355
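
For intuition, that loss corresponds to a perplexity of roughly exp(1.2355) ≈ 3.44 on the evaluation set (a derived figure, not one reported in the original card):

```python
import math

eval_loss = 1.2355
print(math.exp(eval_loss))  # ≈ 3.44, the perplexity implied by the loss
```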
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
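
Note that the total batch size of 8 is simply the per-device batch size of 1 multiplied across the 8 GPUs. As a rough reconstruction, these settings map onto Hugging Face `TrainingArguments` as sketched below; the actual training script is not included in this card, so treat this as an illustration rather than the original configuration:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers
# defaults, so they need no explicit arguments.
args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-4,
    per_device_train_batch_size=1,  # x 8 devices -> total train batch size 8
    per_device_eval_batch_size=1,   # x 8 devices -> total eval batch size 8
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=1,
)
```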
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4654 | 0.0 | 1 | 3.5326 |
| 1.2162 | 0.05 | 1503 | 1.9335 |
| 1.1918 | 0.1 | 3006 | 1.7391 |
| 1.4188 | 0.15 | 4509 | 1.7574 |
| 1.8281 | 0.2 | 6012 | 1.6704 |
| 0.8639 | 0.25 | 7515 | 1.7459 |
| 1.3764 | 0.3 | 9018 | 1.6832 |
| 2.1172 | 0.35 | 10521 | 1.6398 |
| 1.1855 | 0.4 | 12024 | 1.6007 |
| 1.5604 | 0.45 | 13527 | 1.5256 |
| 1.0224 | 0.5 | 15030 | 1.4891 |
| 1.5582 | 0.55 | 16533 | 1.4903 |
| 0.9489 | 0.6 | 18036 | 1.4179 |
| 1.67 | 0.65 | 19539 | 1.4585 |
| 0.8542 | 0.7 | 21042 | 1.3810 |
| 1.5301 | 0.75 | 22545 | 1.3645 |
| 0.951 | 0.8 | 24048 | 1.3087 |
| 1.1791 | 0.85 | 25551 | 1.3018 |
| 1.3342 | 0.9 | 27054 | 1.2595 |
| 1.1221 | 0.95 | 28557 | 1.2355 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1 |