---
license: other
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
- gpt4
---
# GPT4 Alpaca Lora 30B - GPTQ 4bit 128g

This is a 4-bit GPTQ version of the [Chansung GPT4 Alpaca 30B LoRA model](https://huggingface.co/chansung/gpt4-alpaca-lora-30b).

It was created by merging the deltas provided in the above repo with the original Llama 30B model.

It was then quantized to 4bit, groupsize 128g, using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
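
For reference, the merge step can be reproduced with the Hugging Face `transformers` and `peft` libraries. The snippet below is a minimal sketch of that process, not the exact script used to build this repo: the base checkpoint is the one named in the original training command at the bottom of this card, and the output directory matches the name used in the GPTQ command under Provided files.
```python
# Minimal sketch of merging the GPT4-Alpaca LoRA deltas into Llama 30B.
# Not the exact script used for this repo; adjust names and paths as needed.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "decapoda-research/llama-30b-hf"  # base checkpoint named in the original training command
LORA = "chansung/gpt4-alpaca-lora-30b"   # the LoRA repo this model was merged from
OUT = "gpt4-alpaca-lora-30B-HF"          # directory name used by the GPTQ command below

base_model = LlamaForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
merged = PeftModel.from_pretrained(base_model, LORA)
merged = merged.merge_and_unload()  # fold the LoRA weights into the base weights

merged.save_pretrained(OUT)
LlamaTokenizer.from_pretrained(BASE).save_pretrained(OUT)
```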

In my testing this model uses 19-21GB of VRAM for inference and therefore should run on any 24GB VRAM card.

RAM and VRAM usage at the end of a 2000-token response in `text-generation-webui`: **5.2GB RAM, 20.7GB VRAM**
![Screenshot of RAM and VRAM Usage](https://i.imgur.com/Sl8SmBH.png)

## Provided files

Currently one model file is provided, in `safetensors` format. This file requires the latest GPTQ-for-LLaMa code to run inside [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Tomorrow I will try to add another file that does not use `--act-order`, and which can therefore be run in text-generation-webui without updating GPTQ-for-LLaMa (at the cost of possibly slightly lower inference quality).

Details of the files provided:
* `gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors`
  * `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
  * Command to create:
    * `python3 llama.py gpt4-alpaca-lora-30B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors`
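
Because the file is in `safetensors` format, its contents can be listed without loading the weights. As a small illustrative check (not a required step), assuming the `safetensors` Python package is installed:
```python
# Optional: peek at the tensor names stored in the quantised checkpoint.
# safe_open only reads the file's JSON header, so this is fast and memory-light.
from safetensors import safe_open

with safe_open("gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors", framework="pt") as f:
    for name in list(f.keys())[:8]:  # first few entries, e.g. qweight/qzeros/scales tensors
        print(name)
```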

## How to run in `text-generation-webui`

The `safetensors` model file was created with the latest GPTQ code, and uses `--act-order` to give the maximum possible quantisation quality. This means it requires that the latest GPTQ-for-LLaMa is used inside the UI.

Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa  # absolute path, so the symlink resolves from inside repositories/
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model gpt4-alpaca-lora-30B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
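
They also assume the model files are already present in `text-generation-webui/models`. If they are not, one way to fetch them is with the `huggingface_hub` library. This is only a sketch, and the repo id is an assumption based on this model card's name:
```python
# Sketch: download this repo's files into text-generation-webui's models folder.
# Assumes a recent huggingface_hub; the repo id is inferred from this model card
# and should be adjusted if it differs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g",
    local_dir="text-generation-webui/models/gpt4-alpaca-lora-30B-GPTQ-4bit-128g",
    local_dir_use_symlinks=False,  # copy real files rather than cache symlinks
)
```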

If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install --force
```

Then link (or copy) that CUDA-branch GPTQ-for-LLaMa folder into `text-generation-webui/repositories`, as described in the Triton instructions above.

# Original GPT4 Alpaca Lora model card

This repository comes with a LoRA checkpoint to make LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings on an 8xA100 (40G) DGX system.
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
- Training command:
```shell
python finetune.py \
    --base_model='decapoda-research/llama-30b-hf' \
    --data_path='alpaca_data_gpt4.json' \
    --num_epochs=10 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./gpt4-alpaca-lora-30b' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --batch_size=... \
    --micro_batch_size=...
```
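
The published adapter can also be used directly at inference time, without merging, by loading it on top of the base model with `peft`. The snippet below is a minimal illustration: the model names are those referenced above, while the prompt (the standard Alpaca instruction template applied by the Alpaca-LoRA training script) and the generation settings are illustrative, not from the original card.
```python
# Minimal sketch: apply the LoRA adapter at runtime and generate a response.
# Illustrative only; the prompt and generation settings are not from the original card.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-30b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "chansung/gpt4-alpaca-lora-30b")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-30b-hf")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```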

You can see how the training went in the W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/w3syd157?workspace=user-chansung18).