
TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)


# GPT4 Alpaca LoRA 30B - GPTQ 4bit 128g

This is a 4-bit GPTQ version of the Chansung GPT4 Alpaca 30B LoRA model.

It was created by merging the LoRA provided in that repo with the original Llama 30B model, producing the unquantised model GPT4-Alpaca-LoRA-30B-HF.

It was then quantized to 4bit, groupsize 128g, using GPTQ-for-LLaMa.
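Merging a LoRA folds the low-rank update W' = W + (alpha / r) · B·A into each targeted weight matrix of the base model. Here is a minimal pure-Python sketch of that arithmetic on toy matrices; the shapes and the `alpha` and `r` values are illustrative only, not the actual model's:

```python
# Toy illustration of a LoRA merge: W' = W + (alpha / r) * (B @ A).
# Shapes and hyperparameters here are illustrative, not the real model's.

def matmul(B, A):
    """Multiply a (d x r) matrix by an (r x k) matrix, as nested lists."""
    d, r, k = len(B), len(A), len(A[0])
    return [[sum(B[i][t] * A[t][j] for t in range(r)) for j in range(k)]
            for i in range(d)]

def merge_lora(W, A, B, alpha, r):
    """Fold the scaled low-rank update into the base weight matrix."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight, rank-1 adapter (r=1), alpha=2 -> scale of 2.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]            # r x k
B = [[0.5], [0.25]]         # d x r
merged = merge_lora(W, A, B, alpha=2, r=1)
print(merged)  # [[2.0, 1.0], [0.5, 1.5]]
```

After the merge the adapter matrices are no longer needed at inference time, which is why the merged HF model can then be quantised as a single ordinary checkpoint.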

VRAM usage depends on the number of tokens generated. Up to roughly 1,000 generated tokens it stays under 24GB of VRAM, but beyond that it will exceed the capacity of a 24GB card.

RAM and VRAM usage in text-generation-webui:

  • At the end of a 670-token response: 5.2GB RAM, 20.7GB VRAM
  • After about 1,500 tokens: 5.2GB RAM, 30.0GB VRAM
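The VRAM growth with generated tokens is dominated by the KV cache, which grows linearly: one K and one V tensor per layer, each `hidden_size` wide, in fp16. A back-of-the-envelope estimate for Llama-30B (60 layers, hidden size 6656 — the published Llama-30B dimensions). Note this is only a lower bound: the GPTQ inference kernels allocate additional working buffers, which is why the observed numbers above grow faster than this:

```python
# Rough lower-bound estimate of KV-cache VRAM growth for Llama-30B.
# 60 layers / hidden size 6656 are the published Llama-30B dimensions;
# real usage is higher because inference kernels allocate extra buffers.

LAYERS = 60
HIDDEN = 6656
BYTES_FP16 = 2
KV_TENSORS = 2  # one K and one V cache entry per layer

bytes_per_token = KV_TENSORS * LAYERS * HIDDEN * BYTES_FP16

def kv_cache_gib(tokens):
    """KV-cache size in GiB after `tokens` tokens of context."""
    return tokens * bytes_per_token / 1024**3

print(f"{bytes_per_token} bytes/token")           # ~1.5 MiB per token
print(f"{kv_cache_gib(1000):.2f} GiB at 1000 tokens")
```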

If you want a model that should always stay under 24GB of VRAM, use the one provided by MetalX instead: GPT4 Alpaca Lora 30B GPTQ 4bit without groupsize.

## Provided files

Currently one model file is provided: a safetensors file. It requires the latest GPTQ-for-LLaMa code to run inside oobabooga's text-generation-webui.

Tomorrow I will try to add another file that does not use --act-order and can therefore be run in text-generation-webui without updating GPTQ-for-LLaMa (at the cost of possibly slightly lower inference quality).

Details of the files provided:

  • gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors
    • safetensors format, with improved file security, created with the latest GPTQ-for-LLaMa code.
    • Command used to create it:

```
python3 llama.py gpt4-alpaca-lora-30B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors
```
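Groupsize 128 means each run of 128 weights shares its own quantisation scale and zero-point, rather than one scale per whole row, which reduces quantisation error at a small VRAM cost. A toy pure-Python sketch of the grouping idea, using simple asymmetric min/max 4-bit quantisation (the real GPTQ algorithm additionally compensates for quantisation error to minimise layer output error, which this does not attempt):

```python
# Toy group-wise 4-bit quantisation: each group of `groupsize` weights
# gets its own scale and zero-point. Real GPTQ also compensates errors
# column by column; this only illustrates the grouping idea.
import random

def quantise_group(ws):
    """Map a group of floats onto the 16 levels of a 4-bit code."""
    lo, hi = min(ws), max(ws)
    scale = (hi - lo) / 15 or 1.0   # 4 bits -> 16 levels; avoid div-by-0
    q = [round((w - lo) / scale) for w in ws]
    return q, scale, lo

def dequantise_group(q, scale, lo):
    return [v * scale + lo for v in q]

def quantise_row(row, groupsize=128):
    """Quantise then dequantise a weight row, group by group."""
    out = []
    for i in range(0, len(row), groupsize):
        q, scale, lo = quantise_group(row[i:i + groupsize])
        out.extend(dequantise_group(q, scale, lo))
    return out

random.seed(0)
row = [random.gauss(0, 1) for _ in range(1024)]
recon = quantise_row(row, groupsize=128)
max_err = max(abs(a - b) for a, b in zip(row, recon))
print(f"max reconstruction error: {max_err:.4f}")
```

Smaller groups give smaller per-group ranges, hence smaller scales and lower error; groupsize 128 is the common middle ground between that and the extra storage for per-group scales.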

## How to run in text-generation-webui

The safetensors model file was created with the GPTQ-for-LLaMa code as of April 13th, and uses --act-order to give the maximum possible quantisation quality. This means it requires that this same version of GPTQ-for-LLaMa is used inside the UI.

Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:

```
# Since April 14th we can't clone the latest GPTQ-for-LLaMa as it's in the middle of a refactoring
git clone -n https://github.com/qwopqwop200/GPTQ-for-LLaMa gptq-working
cd gptq-working && git checkout 58c8ab4c7aaccc50f507fd08cce941976affe5e0 # Later commits are currently broken due to ongoing refactoring

git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s gptq-working text-generation-webui/repositories/GPTQ-for-LLaMa
```

Then install this model into text-generation-webui/models and launch the UI as follows:

```
cd text-generation-webui
python server.py --model gpt4-alpaca-lora-30B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead try the CUDA branch:

```
pip uninstall -y quant_cuda
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install --force
```

Then link that into text-generation-webui/repositories as described above.

## Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

## Thanks, and how to contribute

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

## Original GPT4 Alpaca Lora model card

This repository comes with a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings, on an 8× A100 (40GB) DGX system.

  • Training script: borrowed from the official Alpaca-LoRA implementation
  • Training command:

```
python finetune.py \
    --base_model='decapoda-research/llama-30b-hf' \
    --data_path='alpaca_data_gpt4.json' \
    --num_epochs=10 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./gpt4-alpaca-lora-30b' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --batch_size=... \
    --micro_batch_size=...
```
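In the Alpaca-LoRA script, `batch_size` is the effective batch and `micro_batch_size` is the per-step batch that must fit in GPU memory; the script derives the number of gradient-accumulation steps from their ratio. A sketch with illustrative numbers (the actual values used for this checkpoint are elided above):

```python
# How batch_size and micro_batch_size relate in the Alpaca-LoRA script:
# gradients are accumulated until the effective batch size is reached.
# The numbers below are illustrative; the card elides the real values.

def grad_accum_steps(batch_size, micro_batch_size):
    assert batch_size % micro_batch_size == 0
    return batch_size // micro_batch_size

# e.g. an effective batch of 128, run 4 samples at a time:
print(grad_accum_steps(128, 4))  # 32
```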

Details of how the training went are available in the W&B report.
