---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# VicUnlocked-30B-LoRA GGML

This repo contains GGML format quantised 4-bit, 5-bit and 8-bit models of [Neko Institute of Science's VicUnLocked 30B LoRA](https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA).

The files in this repo are the result of merging the above LoRA with the original LLaMA 30B, then converting to GGML for CPU (+ CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
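For reference, the merge step looks roughly like this with `transformers` and `peft` (a minimal sketch, not the exact script used here; the base-model repo ID and output path are assumptions):

```
# Rough sketch of merging the LoRA into LLaMA 30B before GGML conversion.
# Assumes the transformers and peft packages; repo IDs and paths are illustrative.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-30B-HF", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(
    base, "Neko-Institute-of-Science/VicUnLocked-30b-LoRA"
)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.save_pretrained("VicUnlocked-30B-LoRA-HF")
LlamaTokenizer.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-30B-HF"
).save_pretrained("VicUnlocked-30B-LoRA-HF")

# The merged float16 model can then be converted with llama.cpp's convert.py
# and quantised with its ./quantize tool.
```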

## Repositories available

* [4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GGML).
* [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GPTQ).
* [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF).

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `VicUnlocked-30B-LoRA.ggmlv3.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4-bit. |
| `VicUnlocked-30B-LoRA.ggmlv3.q4_1.bin` | q4_1 | 4bit | 24.4GB | 27GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `VicUnlocked-30B-LoRA.ggmlv3.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `VicUnlocked-30B-LoRA.ggmlv3.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| `VicUnlocked-30B-LoRA.ggmlv3.q8_0.bin` | q8_0 | 8bit | 36.6GB | 39GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m VicUnlocked-30B-LoRA.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.
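If you're not sure how many physical cores you have, this quick check works (a minimal sketch, assuming the third-party `psutil` package is installed):

```
# Report the physical core count to use for llama.cpp's -t flag.
# Assumes psutil (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)  # None if it can't be detected
print(f"Suggested -t value: {physical or psutil.cpu_count()}")
```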

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
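You can also drive these GGML files from Python via `llama-cpp-python`, the same binding text-generation-webui uses (a minimal sketch; the file name and sampling parameters simply mirror the command line above, and you'll need a version of the binding that supports GGML v3 files):

```
# Minimal sketch of loading a GGML model with llama-cpp-python.
# Assumes pip install llama-cpp-python (a GGML v3-compatible version).
from llama_cpp import Llama

llm = Llama(
    model_path="VicUnlocked-30B-LoRA.ggmlv3.q5_0.bin",
    n_ctx=2048,    # context length, matching -c 2048 above
    n_threads=8,   # physical CPU cores, matching -t 8 above
)
out = llm(
    "### Instruction: write a story about llamas ### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```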

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card

# Convert tools
https://github.com/practicaldreamer/vicuna_to_alpaca

# Training tool
https://github.com/oobabooga/text-generation-webui

At the moment I'm using the 2023.05.04v0 version of the dataset and training at full context.

# Notes:
So I will only be training for 1 epoch, as full-context 30b takes so long to train.
This 1 epoch will take me 8 days lol, but luckily this LoRA feels fully functional at epoch 1, as my 13b one showed.
I will also be uploading checkpoints almost every day, and I could train another epoch if there's enough demand for it.

Update: since I will not be training beyond 1 epoch, @Aeala is training for the full 3: https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA. Note it's half context, if you care about that. Also, @Aeala's just about done.

Update: training finished at epoch 1. These 8 days sure felt long. I only have one A6000, lads; there's only so much I can do. Also, RIP gozfarb, I don't know what happened to him.

# How to test?
1. Download LLaMA-30B-HF if you have not: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
2. Make a folder called VicUnLocked-30b-LoRA in the loras folder.
3. Download adapter_config.json and adapter_model.bin into VicUnLocked-30b-LoRA.
4. Load ooba: `python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora VicUnLocked-30b-LoRA`
5. Select instruct mode and choose the Vicuna-v1.1 template. (For a non-webui alternative, see the sketch below.)
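If you'd rather test outside the webui, the same load can be sketched directly with `transformers` and `peft` (illustrative only; assumes `bitsandbytes` for 8-bit loading and that the LoRA files sit in `loras/VicUnLocked-30b-LoRA` as in step 2):

```
# Hedged alternative to steps 4-5: load the base model plus LoRA without the webui.
# Assumes transformers, peft and bitsandbytes; paths mirror the steps above.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "Neko-Institute-of-Science/LLaMA-30B-HF"
tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = LlamaForCausalLM.from_pretrained(
    base_id, load_in_8bit=True, device_map="auto"  # matches --load-in-8bit
)
model = PeftModel.from_pretrained(model, "loras/VicUnLocked-30b-LoRA")

# Vicuna-v1.1-style prompt, matching the template chosen in step 5.
prompt = "USER: Write a story about llamas.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```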


# Training Log
https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7