---
license: other
library_name: transformers
pipeline_tag: text-generation
datasets:
  - RyokoAI/ShareGPT52K
  - Hello-SimpleAI/HC3
tags:
  - koala
  - ShareGPT
  - llama
  - gptq
inference: false
---
# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original LLaMA 13B model.

This version has then been quantized to 4-bit using GPTQ-for-LLaMa.
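For readers unfamiliar with what the quantization options further below mean, here is a minimal, illustrative Python sketch of plain round-to-nearest 4-bit group-wise quantization with groupsize 128. The helper name `quantize_groupwise` is my own; this is not the GPTQ algorithm itself (GPTQ additionally uses second-order information to choose the quantized values), it only shows what the per-group scales and zero-points stored in these files represent:

```python
# Minimal sketch of 4-bit group-wise quantization (groupsize 128).
# Simple round-to-nearest per group, NOT the GPTQ algorithm itself.
import torch

def quantize_groupwise(w: torch.Tensor, bits: int = 4, groupsize: int = 128):
    """Quantize a 1-D weight row in groups, each with its own scale and zero-point."""
    qmax = 2 ** bits - 1                      # 15 for 4-bit
    w = w.reshape(-1, groupsize)              # one row per group of 128 weights
    wmin = w.min(dim=1, keepdim=True).values
    wmax = w.max(dim=1, keepdim=True).values
    scale = (wmax - wmin).clamp(min=1e-8) / qmax
    zero = torch.round(-wmin / scale)
    q = torch.clamp(torch.round(w / scale) + zero, 0, qmax)  # 4-bit integer codes
    dequant = (q - zero) * scale              # what the kernel reconstructs at runtime
    return q.to(torch.uint8), scale, zero, dequant.reshape(-1)

row = torch.randn(4096)                       # stands in for one row of a weight matrix
q, scale, zero, approx = quantize_groupwise(row)
print("max abs error:", (row - approx).abs().max().item())
```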

## My Koala repos

I have the following Koala model repositories available:

13B models:

7B models:

## Provided files

Three model files are provided. You don't need all three; choose the one that suits your needs best. (A short sketch for inspecting whichever file you download follows this list.)

Details of the files provided:

* `koala-13B-4bit-128g.pt`
  * `pt` format file, created with the latest GPTQ-for-LLaMa code.
  * Command to create:
    * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save koala-13B-4bit-128g.pt`
* `koala-13B-4bit-128g.safetensors`
  * newer `safetensors` format, with improved file security, created with the latest GPTQ-for-LLaMa code.
  * Command to create:
    * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-13B-4bit-128g.safetensors`
* `koala-13B-4bit-128g.no-act-order.ooba.pt`
  * `pt` format file, created with oobabooga's older CUDA fork of GPTQ-for-LLaMa.
  * This file is included primarily for Windows users, as it can be used without needing to compile the latest GPTQ-for-LLaMa code.
  * It should therefore work with the one-click installers on Windows, which bundle the older GPTQ-for-LLaMa code.
  * The older GPTQ code does not support all the latest features, so the quality may be fractionally lower.
  * Command to create:
    * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save koala-13B-4bit-128g.no-act-order.ooba.pt`
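As mentioned above, here is a quick sketch for inspecting whichever file you download and confirming it contains the expected packed GPTQ tensors. It assumes only `torch` and the `safetensors` package; exact tensor names vary with the GPTQ-for-LLaMa version used to create the file:

```python
# Minimal sketch for inspecting a downloaded checkpoint. Adjust the filename to the
# file you chose; only one of the two loading paths below is needed.
import torch
from safetensors.torch import load_file

# .pt files are ordinary torch checkpoints (pickled state dicts):
state_dict = torch.load("koala-13B-4bit-128g.pt", map_location="cpu")

# .safetensors files are loaded with the safetensors library instead of torch.load,
# which avoids pickle execution and is what gives the format its security advantage:
# state_dict = load_file("koala-13B-4bit-128g.safetensors")

# GPTQ checkpoints store packed integer weights plus per-group scales and zero-points:
for name, tensor in list(state_dict.items())[:8]:
    print(name, tuple(tensor.shape), tensor.dtype)
```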

## How to run in text-generation-webui

The file `koala-13B-4bit-128g.no-act-order.ooba.pt` can be loaded the same as any other GPTQ file, without requiring any updates to oobabooga's text-generation-webui.

The other two model files were created with the latest GPTQ code, and require that the latest GPTQ-for-LLaMa is used inside the UI.

Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:

```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
# use an absolute path so the symlink still resolves from inside repositories/
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:

```
cd text-generation-webui
python server.py --model koala-13B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama  # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:

```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```
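A quick way to confirm the CUDA kernel built correctly is to try importing the extension module. I believe `setup_cuda.py` installs it under the name `quant_cuda`; treat that name as an assumption and check `setup_cuda.py` if the import fails for another reason:

```python
# Sanity check after building the CUDA branch: the extension module is assumed
# to be named quant_cuda; an ImportError means the build/install did not succeed.
import quant_cuda  # noqa: F401
print("quant_cuda extension imported successfully")
```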

Then link that into text-generation-webui/repositories as described above.

Or just use `koala-13B-4bit-128g.no-act-order.ooba.pt` as mentioned above.

## How the Koala delta weights were merged

The Koala delta weights were originally merged using the following commands, producing koala-13B-HF:

```
git clone https://github.com/young-geng/EasyLM

git clone https://huggingface.co/TheBloke/llama-13b

mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2

cd ../EasyLM

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_torch_to_easylm \
--checkpoint_dir=/content/llama-13b \
--output_file=/content/llama-13b-LM \
--streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.scripts.diff_checkpoint --recover_diff=True \
--load_base_checkpoint='params::/content/llama-13b-LM' \
--load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \
--output_file=/content/koala_13b.diff.weights \
--streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \
--output_dir=/content/koala-13B-HF \
--load_checkpoint='params::/content/koala_13b.diff.weights' \
--tokenizer_path=/content/llama-13b/tokenizer.model
```
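For clarity, here is a small conceptual PyTorch sketch of what the `--recover_diff=True` step does: the published Koala checkpoint stores per-parameter differences from the base LLaMA weights, and the recover step adds them back. This is an illustration only, not EasyLM's actual streaming, JAX-based implementation:

```python
# Conceptual sketch of delta-weight recovery: base + diff == target.
import torch

base = {"w": torch.randn(4, 4)}                       # stands in for original LLaMA-13B weights
target = {"w": base["w"] + 0.01 * torch.randn(4, 4)}  # stands in for the full Koala weights

# What young-geng/koala distributes: the element-wise difference from the base model
diff = {name: target[name] - base[name] for name in base}

# What the recover step reconstructs: adding the diff back to the base weights
recovered = {name: base[name] + diff[name] for name in base}
assert torch.allclose(recovered["w"], target["w"])
```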

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it will be gratefully received and will help me to keep providing models and working on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

## Further info

Check out the following links to learn more about the Berkeley Koala model:

* Koala blog post: https://bair.berkeley.edu/blog/2023/04/03/koala/
* Koala delta weights: https://huggingface.co/young-geng/koala
* EasyLM (the framework used to train Koala and merge the deltas): https://github.com/young-geng/EasyLM

## License

The model weights are intended for academic research only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.