|
---
license: other
inference: false
---
|
<!-- header start --> |
|
<div style="width: 100%;"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<!-- header end --> |
|
|
|
# gpt4-x-vicuna-13B-GGML |
|
|
|
These files are GGML format model files of [NousResearch's gpt4-x-vicuna-13b](https://huggingface.co/NousResearch/gpt4-x-vicuna-13b). |
|
|
|
GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp). |
|
|
|
## Repositories available |
|
|
|
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ). |
|
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML). |
|
* [float16 HF model for unquantised and 8bit GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF). |
|
|
|
## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023, commit 2d5db48)!
|
|
|
llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508
|
|
|
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them. |
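If you need to build a compatible llama.cpp from source, a minimal sketch (assuming a Linux or macOS system with `git` and `make` installed):

```
# Clone llama.cpp and build the ./main binary used in the examples below
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # optional: pin to the first GGMLv3-compatible commit
make
```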
|
|
|
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`. |
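For example, one way to fetch that older branch is a direct git clone (a sketch; assumes `git-lfs` is installed, since the model files are stored with Git LFS):

```
# Clone only the GGMLv2-era branch of this repo
git lfs install
git clone --branch previous_llama_ggmlv2 https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML
```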
|
|
|
## Provided files |
|
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `gpt4-x-vicuna-13B.ggmlv3.q4_0.bin` | q4_0 | 4-bit | 8.14GB | 10GB | 4-bit. Smallest file size of the provided options. |
| `gpt4-x-vicuna-13B.ggmlv3.q4_1.bin` | q4_1 | 4-bit | 8.95GB | 10GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| `gpt4-x-vicuna-13B.ggmlv3.q5_0.bin` | q5_0 | 5-bit | 8.95GB | 11GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `gpt4-x-vicuna-13B.ggmlv3.q5_1.bin` | q5_1 | 5-bit | 9.76GB | 12GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| `gpt4-x-vicuna-13B.ggmlv3.q8_0.bin` | q8_0 | 8-bit | 16GB | 18GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
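If you only want one of the files above, you can fetch it directly rather than cloning the whole repo; for example (a sketch, using the q5_0 file):

```
# Download a single quantised model file from the main branch of this repo
wget https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML/resolve/main/gpt4-x-vicuna-13B.ggmlv3.q5_0.bin
```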
|
|
|
## How to run in `llama.cpp` |
|
|
|
I use the following command line; adjust for your tastes and needs: |
|
|
|
```
./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Write a story about llamas

### Response:"
```
|
Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
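If you're not sure of your physical core count, on Linux something like this will show it (a sketch; on macOS, `sysctl -n hw.physicalcpu` reports the same information):

```
# Physical cores = Socket(s) x Core(s) per socket; nproc counts logical threads
lscpu | grep -E '^(Socket|Core)'
nproc
```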
|
|
|
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as sketched below.
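For example, an interactive session with the q4_0 file might be started like this (a sketch; the sampling flags are the same as in the one-shot example above):

```
# Start an interactive, instruction-following chat session
./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```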
|
|
|
## How to run in `text-generation-webui` |
|
|
|
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). |
|
|
|
<!-- footer start --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
|
|
|
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
|
|
|
Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. |
|
|
|
Thank you to all my generous patrons and donators!
|
<!-- footer end --> |
|
# Original model card |
|
|
|
The base model used was https://huggingface.co/eachadea/vicuna-13b-1.1
|
|
|
Fine-tuned on Teknium's GPTeacher dataset, the unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset.
|
|
|
Approximately 180k instructions, all from GPT-4 and all cleaned of any OpenAI censorship ("As an AI Language Model", etc.).
|
|
|
The base model still has OpenAI censorship. A new version will soon be released with a cleaned Vicuna base from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
|
|
|
Trained on 8x A100 80GB GPUs for 5 epochs, following the Alpaca DeepSpeed training code.
|
|
|
Nous Research Instruct Dataset will be released soon. |
|
|
|
* GPTeacher and Roleplay v2 by https://huggingface.co/teknium
* WizardLM by https://github.com/nlpxucan
* Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin
|
|
|
Compute provided by our project sponsor https://redmond.ai/ |
|
|