Wizard-Vicuna-13B-GPTQ-8bit-128g

This repository contains 8-bit quantized models, in GPTQ format, of TheBloke's wizard-vicuna-13B FP16 HF model.

These models are the result of quantization to 8-bit using GPTQ-for-LLaMa.

While most metrics suggest that 8-bit is only marginally better than 4-bit, I have found that the 8-bit model follows instructions significantly better. The responses from the 8-bit model feel very close to the quality of GPT-3, whereas the 4-bit model lacks some "intelligence".

With this quantized model, I can replace GPT-3 for most of my work. The drawback is that it requires approximately 15GB of VRAM (13B parameters at 8 bits is roughly 13GB before activations and context overhead), so you need a GPU with at least 16GB of VRAM.

The content below is copied directly from TheBloke's README, with the 4-bit references changed to 8-bit and pointed at this model.

How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

  1. Click the Model tab.
  2. Under Download custom model or LoRA, enter deetungsten/wizard-vicuna-13B-GPTQ-8bit-128g.
  3. Click Download.
  4. Wait until it says it's finished downloading.
  5. Click the Refresh icon next to Model in the top left.
  6. In the Model drop-down, choose the model you just downloaded: wizard-vicuna-13B-GPTQ-8bit-128g.
  7. If you see an error in the bottom right, ignore it - it's temporary.
  8. Fill out the GPTQ parameters on the right: Bits = 8, Groupsize = 128, model_type = Llama
  9. Click Save settings for this model in the top right.
  10. Click Reload the Model in the top right.
  11. Once it says it's loaded, click the Text Generation tab and enter a prompt!
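
The steps above cover only the webui. If you would rather load the model from Python, a minimal sketch using the AutoGPTQ library is below. AutoGPTQ is my suggestion, not the author's documented method; the explicit BaseQuantizeConfig is only needed if the repo ships no quantize_config.json, and the USER:/ASSISTANT: prompt format follows the Vicuna v1.1 convention.

    # Minimal sketch: loading this 8-bit GPTQ model with AutoGPTQ.
    # AutoGPTQ is an assumption here; the original README covers only the webui.
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

    repo = "deetungsten/wizard-vicuna-13B-GPTQ-8bit-128g"
    tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
    model = AutoGPTQForCausalLM.from_quantized(
        repo,
        model_basename="wizard-vicuna-13B-GPTQ-8bit-128g.no-act-order",
        use_safetensors=True,
        # Matches the card: 8 bits, groupsize 128, no act-order. Only needed
        # if the repo does not include a quantize_config.json.
        quantize_config=BaseQuantizeConfig(bits=8, group_size=128, desc_act=False),
        device="cuda:0",
    )

    # Vicuna v1.1-style prompt (assumed; adjust if outputs look malformed).
    prompt = "USER: Explain GPTQ quantization in two sentences. ASSISTANT:"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))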

Provided files

Compatible file - wizard-vicuna-13B-GPTQ-8bit-128g.no-act-order.safetensors

In the main branch - the default one - you will find wizard-vicuna-13B-GPTQ-8bit-128g.no-act-order.safetensors

This will work with all versions of GPTQ-for-LLaMa, giving it maximum compatibility.

It was created without the --act-order parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.

  • wizard-vicuna-13B-GPTQ-8bit-128g.no-act-order.safetensors
    • Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
    • Works with text-generation-webui one-click-installers
    • Parameters: Groupsize = 128. No act-order.
    • Command used to create the GPTQ:
      CUDA_VISIBLE_DEVICES=0 python3 llama.py wizard-vicuna-13B-HF c4 --wbits 8 --true-sequential --groupsize 128 --save_safetensors wizard-vicuna-13B-GPTQ-8bit.compat.no-act-order.safetensors
      

Original WizardVicuna-13B model card

Github page: https://github.com/melodysdreamj/WizardVicunaLM

WizardVicunaLM

Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method

I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage.

Benchmark

Approximately 7% performance improvement over VicunaLM

Detail

The questions presented here are not from rigorous tests; rather, I asked a few questions and had GPT-4 score the answers. The models compared are ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.

       gpt3.5   wizard-vicuna-13b   vicuna-13b   wizard-7b   link
Q1     95       90                  85           88          link
Q2     95       97                  90           89          link
Q3     85       90                  80           65          link
Q4     90       85                  80           75          link
Q5     90       85                  80           75          link
Q6     92       85                  87           88          link
Q7     95       90                  85           92          link
Q8     90       85                  75           70          link
Q9     92       85                  70           60          link
Q10    90       80                  75           85          link
Q11    90       85                  75           65          link
Q12    85       90                  80           88          link
Q13    90       95                  88           85          link
Q14    94       89                  90           91          link
Q15    90       85                  88           87          link
Avg    91       88                  82           80
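
The grading method is not published beyond the description above, but the idea (GPT-4 as judge) is straightforward. A minimal sketch using the pre-1.0 openai Python package follows; the prompt wording is my own assumption, not the project's.

    # Illustrative GPT-4-as-judge scoring. The project's actual prompts are
    # not published; this wording is an assumption.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def score_answer(question: str, answer: str) -> str:
        judge_prompt = (
            f"Question: {question}\n"
            f"Answer: {answer}\n"
            "Rate the quality of the answer from 0 to 100. "
            "Reply with the number only."
        )
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": judge_prompt}],
        )
        return response["choices"][0]["message"]["content"]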

Principle

We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.

Turning a single command into a rich conversation is what we've done here.

After creating the training data, I trained the model according to the Vicuna v1.1 training method.
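
As a concrete illustration of "turning a single command into a rich conversation", here is a hedged sketch of what one expanded training record might look like. The ShareGPT-style schema shown is the one commonly consumed by Vicuna's training code; the project's actual schema and contents may differ.

    # Hypothetical example: one WizardLM-style instruction expanded into a
    # multi-turn record (ShareGPT-style schema; the actual schema may differ).
    single_instruction = "Explain how photosynthesis works."

    expanded_record = {
        "id": "wizard_vicuna_0001",
        "conversations": [
            {"from": "human", "value": single_instruction},
            {"from": "gpt", "value": "Photosynthesis converts light energy ..."},
            # Follow-up turns broaden the same topic within one conversation:
            {"from": "human", "value": "How does light intensity affect the rate?"},
            {"from": "gpt", "value": "The rate rises with intensity until ..."},
        ],
    }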

Detailed Method

First, we explore and expand various areas within the same topic using the 7K conversations created by WizardLM. However, we used a continuous conversation format instead of the instruction format: each conversation starts with a WizardLM instruction and then expands into various areas within one conversation using ChatGPT 3.5.

After that, we fine-tuned the model on this data using Vicuna's fine-tuning format.
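
The expansion loop itself is described only in prose. As a rough illustration, a sketch of such a loop using the pre-1.0 openai package is below; the prompts and the number of rounds are my own guesses, not the project's published pipeline.

    # Hedged sketch: expanding one WizardLM seed instruction into a longer
    # conversation with ChatGPT 3.5. Prompts and loop structure are assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    messages = [{"role": "user", "content": "Explain how photosynthesis works."}]

    for _ in range(3):  # number of expansion rounds is a guess
        answer = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": answer})

        # Ask the model to propose a follow-up exploring a related area.
        follow_up = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages + [{
                "role": "user",
                "content": "Suggest one follow-up question that explores a "
                           "related aspect of this topic. Reply with the "
                           "question only.",
            }],
        )["choices"][0]["message"]["content"]
        messages.append({"role": "user", "content": follow_up})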

Training Process

Trained with 8 A100 GPUs for 35 hours.

Weights

The dataset we used for training and the 13B model are available on Hugging Face.

Conclusion

If we extend the conversations using GPT-4 with a 32K context, we can expect a dramatic improvement, since we could generate roughly 8x more content as well as more accurate and richer conversations.

License

The model is subject to the LLaMA model license, and the dataset is subject to OpenAI's terms of use because it was generated with ChatGPT. Everything else is free.

Author

JUNE LEE - active in the Songdo Artificial Intelligence Study group and GDG Songdo.
