|
--- |
|
inference: false |
|
license: mit |
|
language: |
|
- en |
|
library_name: transformers |
|
datasets: |
|
- psmathur/alpaca_orca |
|
- psmathur/dolly-v2_orca |
|
- psmathur/WizardLM_Orca |
|
--- |
|
|
|
<!-- header start --> |
|
<div style="width: 100%;"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<!-- header end --> |
|
|
|
# Pankaj Mathur's Orca Mini 13B GGML |
|
|
|
These files are GGML format model files for [Pankaj Mathur's Orca Mini 13B](https://huggingface.co/psmathur/orca_mini_13b). |
|
|
|
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: |
|
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui) |
|
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) |
|
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) |
|
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) |
|
* [ctransformers](https://github.com/marella/ctransformers) |
|
|
|
## Repositories available |
|
|
|
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/orca_mini_13B-GPTQ) |
|
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_13B-GGML) |
|
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_13b) |
|
|
|
## Prompt template: |
|
|
|
``` |
|
### System: |
|
You are an AI assistant that follows instruction extremely well. Help as much as you can. |
|
|
|
### User: |
|
prompt |
|
|
|
### Response: |
|
``` |
|
or |
|
``` |
|
### System: |
|
You are an AI assistant that follows instruction extremely well. Help as much as you can. |
|
|
|
### User: |
|
prompt |
|
|
|
### Input: |
|
input |
|
|
|
### Response: |
|
``` |
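In code, the two templates above can be assembled with a small helper (a minimal sketch; the `build_prompt` function name is mine, but the strings follow the templates exactly, and the optional `input` parameter mirrors the model card's own example code further down):

```python
def build_prompt(system, instruction, input=None):
    # Matches the two templates above: the "### Input:" section is only
    # included when an input is supplied.
    if input:
        return (f"### System:\n{system}\n\n### User:\n{instruction}\n\n"
                f"### Input:\n{input}\n\n### Response:\n")
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
print(build_prompt(system, "Write a story about llamas"))
```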
|
|
|
<!-- compatibility_ggml start --> |
|
## Compatibility |
|
|
|
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` |
|
|
|
I quantised the files for these 'original' methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
|
|
|
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
|
|
|
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` |
|
|
|
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. |
|
|
|
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt. |
|
|
|
## Explanation of the new k-quant methods |
|
|
|
The new methods available are: |
|
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
|
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
|
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. |
|
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw |
|
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw |
|
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. |
|
|
|
Refer to the Provided Files table below to see what files use which methods, and how. |
|
<!-- compatibility_ggml end --> |
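The bits-per-weight figures quoted above can be reproduced with a little arithmetic over a 256-weight super-block: the raw weight bits plus the per-block scale (and, for "type-1" formats, minimum) metadata, amortised across the super-block. A back-of-envelope sketch (my own helper, reproducing the figures stated above rather than the exact on-disk layouts):

```python
def kquant_bpw(weight_bits, blocks, scale_bits, min_bits=0, super_bits=16):
    """Effective bits per weight for a 256-weight k-quant super-block.

    Each of `blocks` blocks carries a `scale_bits` scale (plus a
    `min_bits` minimum for "type-1" formats); `super_bits` of fp16
    super-block scale data is amortised over all 256 weights.
    """
    SUPER_BLOCK_WEIGHTS = 256
    metadata_bits = blocks * (scale_bits + min_bits) + super_bits
    return weight_bits + metadata_bits / SUPER_BLOCK_WEIGHTS

print(kquant_bpw(2, 16, 4, min_bits=4))                 # Q2_K -> 2.5625 bpw
print(kquant_bpw(3, 16, 6))                             # Q3_K -> 3.4375 bpw
print(kquant_bpw(4, 8, 6, min_bits=6, super_bits=32))   # Q4_K -> 4.5 bpw
print(kquant_bpw(6, 16, 8))                             # Q6_K -> 6.5625 bpw
```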
|
|
|
## Provided files |
|
| Name | Quant method | Bits | Size | Max RAM required | Use case | |
|
| ---- | ---- | ---- | ---- | ---- | ----- | |
|
| orca-mini-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
|
| orca-mini-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | |
|
| orca-mini-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | |
|
| orca-mini-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | |
|
| orca-mini-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | |
|
| orca-mini-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | |
|
| orca-mini-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | |
|
| orca-mini-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | |
|
| orca-mini-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | |
|
| orca-mini-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | |
|
| orca-mini-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | |
|
| orca-mini-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | |
|
| orca-mini-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
|
| orca-mini-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | |
|
|
|
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. |
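Reading down the table, each "Max RAM required" figure is simply the file size plus roughly 2.5 GB of overhead, so requirements for other files can be estimated the same way (a rule of thumb read off the table above, not an official formula):

```python
def estimated_max_ram_gb(file_size_gb, overhead_gb=2.5):
    # Rule of thumb from the Provided Files table: max RAM ~= file size + 2.5 GB.
    # Offloading layers to the GPU (-ngl) shifts part of this into VRAM instead.
    return round(file_size_gb + overhead_gb, 2)

print(estimated_max_ram_gb(5.51))   # q2_K -> 8.01 GB
print(estimated_max_ram_gb(13.83))  # q8_0 -> 16.33 GB
```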
|
|
|
## How to run in `llama.cpp` |
|
|
|
I use the following command line; adjust for your tastes and needs: |
|
|
|
``` |
|
./main -t 10 -ngl 32 -m orca-mini-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are a story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
|
``` |
|
If you're able to use full GPU offloading, you should use `-t 1` to get best performance. |
|
|
|
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance. |
|
|
|
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. |
|
|
|
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` |
|
|
|
## How to run in `text-generation-webui` |
|
|
|
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). |
|
|
|
<!-- footer start --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/theblokeai) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. |
|
|
|
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. |
|
|
|
Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. |
|
|
|
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. |
|
|
|
Thank you to all my generous patrons and donators!
|
|
|
<!-- footer end --> |
|
|
|
# Original model card: Pankaj Mathur's Orca Mini 13B |
|
|
|
# orca_mini_13b |
|
An [OpenLLaMA-13B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca and Dolly-V2 datasets and applying the dataset construction approaches of the Orca Research Paper.
|
|
|
|
|
# Dataset |
|
|
|
We built explain-tuned versions of the [WizardLM dataset (~70K)](https://github.com/nlpxucan/WizardLM), [Alpaca dataset (~52K)](https://crfm.stanford.edu/2023/03/13/alpaca.html) and [Dolly-V2 dataset (~15K)](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
|
|
|
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
|
|
|
This helps the student model (i.e. this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
|
|
|
Please see the example usage below for how the **System** prompt is added before each **instruction**.
|
|
|
# Training |
|
|
|
The training configurations are provided in the table below. |
|
|
|
Training ran on 8x A100 (80 GB) GPUs and took around 15 hours, at a cost of $180, using [Lambda Labs](https://lambdalabs.com).
|
|
|
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).
|
|
|
Here are some of the parameters used during training:
|
|
|
||| |
|
|:-------------:|:-------------:| |
|
|*batch_size*|16| |
|
|*train_micro_batch_size_per_gpu*|2| |
|
|*gradient_accumulation_steps*|1| |
|
|*Learning rate*|2e-5| |
|
|*Max length*|1024| |
|
|*Epochs*|3| |
|
|*Optimizer*|AdamW| |
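For illustration, the table above could be expressed as a DeepSpeed-style config (a hypothetical sketch; the authors' actual config files are not published in this card, and the fp16 setting is my assumption):

```python
# Hypothetical DeepSpeed config reflecting the hyperparameter table above.
ds_config = {
    "train_batch_size": 16,
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 1,
    "zero_optimization": {"stage": 3},   # fully sharded data parallelism (ZeRO-3)
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
    "fp16": {"enabled": True},           # assumption: mixed-precision training
}

# Sanity check: global batch = micro batch * grad accumulation * 8 GPUs.
assert ds_config["train_batch_size"] == 2 * 1 * 8
```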
|
|
|
|
|
|
|
# Example Usage |
|
|
|
Below is an example of how to use this model:
|
|
|
```python |
|
import torch |
|
from transformers import LlamaForCausalLM, LlamaTokenizer |
|
|
|
# Hugging Face model_path |
|
model_path = 'psmathur/orca_mini_13b' |
|
tokenizer = LlamaTokenizer.from_pretrained(model_path) |
|
model = LlamaForCausalLM.from_pretrained( |
|
model_path, torch_dtype=torch.float16, device_map='auto', |
|
) |
|
|
|
|
|
# Generate text from a system message and an instruction, with optional input
|
def generate_text(system, instruction, input=None): |
|
|
|
if input: |
|
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n" |
|
else: |
|
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n" |
|
|
|
tokens = tokenizer.encode(prompt) |
|
tokens = torch.LongTensor(tokens).unsqueeze(0) |
|
tokens = tokens.to('cuda') |
|
|
|
instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50} |
|
|
|
length = len(tokens[0]) |
|
with torch.no_grad(): |
|
rest = model.generate( |
|
input_ids=tokens, |
|
max_length=length+instance['generate_len'], |
|
use_cache=True, |
|
do_sample=True, |
|
top_p=instance['top_p'], |
|
temperature=instance['temperature'], |
|
top_k=instance['top_k'] |
|
) |
|
output = rest[0][length:] |
|
string = tokenizer.decode(output, skip_special_tokens=True) |
|
return f'[!] Response: {string}' |
|
|
|
# Sample test instruction used by YouTuber Sam Witteveen https://www.youtube.com/@samwitteveenai
|
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.' |
|
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project' |
|
print(generate_text(system, instruction)) |
|
|
|
``` |
|
|
|
``` |
|
|
|
[!] Response: |
|
Dear Sam Altman, |
|
|
|
I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way. |
|
|
|
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools. |
|
|
|
Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly. |
|
|
|
I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future. |
|
|
|
Thank you for your consideration. |
|
|
|
Sincerely, |
|
|
|
[Your Name] |
|
|
|
``` |
|
|
|
**P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at psmathur.public@gmail.com** |
|
|
|
Next Goals: |
|
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
|
2) Provide more options for text generation UIs (maybe https://github.com/oobabooga/text-generation-webui)
|
3) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)
|
|
|
|
|
|
|
|
|
# Limitations & Biases
|
|
|
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. |
|
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. |
|
|
|
# Disclaimer
|
|
|
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. |
|
Please consult an attorney before using this model for commercial purposes.
|
|
|
|
|
# Citation
|
|
|
If you found wizardlm_alpaca_dolly_orca_open_llama_13b useful in your research or applications, please kindly cite using the following BibTeX: |
|
|
|
``` |
|
@misc{wizardlm_alpaca_dolly_orca_open_llama_13b, |
|
author = {Pankaj Mathur}, |
|
title = {wizardlm_alpaca_dolly_orca_open_llama_13b: An explain tuned OpenLLaMA-13b model on custom wizardlm, alpaca, & dolly datasets}, |
|
year = {2023}, |
|
publisher = {GitHub, HuggingFace}, |
|
journal = {GitHub repository, HuggingFace repository}, |
|
howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_13b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b}},
|
} |
|
``` |
|
``` |
|
@software{openlm2023openllama, |
|
author = {Xinyang Geng and Hao Liu}, |
|
title = {OpenLLaMA: An Open Reproduction of LLaMA}, |
|
month = May, |
|
year = 2023, |
|
url = {https://github.com/openlm-research/open_llama} |
|
} |
|
``` |
|
``` |
|
@misc{openalpaca, |
|
author = {Yixuan Su and Tian Lan and Deng Cai}, |
|
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA}, |
|
year = {2023}, |
|
publisher = {GitHub}, |
|
journal = {GitHub repository}, |
|
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}}, |
|
} |
|
``` |
|
``` |
|
@misc{alpaca, |
|
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, |
|
title = {Stanford Alpaca: An Instruction-following LLaMA model}, |
|
year = {2023}, |
|
publisher = {GitHub}, |
|
journal = {GitHub repository}, |
|
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, |
|
} |
|
``` |
|
|