TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
SauerkrautLM 3B v1 - GPTQ
- Model creator: VAGO solutions
- Original model: SauerkrautLM 3B v1
Description
This repo contains GPTQ model files for VAGO solutions' SauerkrautLM 3B v1.
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
Repositories available
- GPTQ models for GPU inference, with multiple quantisation parameter options.
- 2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference
- VAGO solutions' original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions
Prompt template: Sauerkraut
```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```
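For reference, here is a minimal Python sketch that fills this template for a single turn (the `build_prompt` helper is illustrative, not part of the repo):

```python
# Minimal sketch: assemble the Sauerkraut prompt template for one user turn.
SYSTEM = ("Ein Chat zwischen einem Benutzer und einem KI-Assistenten. "
          "Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.")

def build_prompt(user_message: str) -> str:
    # System line, then the user turn, then the cue for the model's reply.
    return f"{SYSTEM}\nUser: {user_message}\nAssistant:"

print(build_prompt("Tell me about AI"))
```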
Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
Explanation of GPTQ parameters
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" (no grouping) is the lowest possible value.
- Act Order: True or False. Also known as desc_act. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16K+), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
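These parameters are recorded in each quant branch's quantize_config.json, so you can also check them programmatically. A minimal sketch using huggingface_hub's hf_hub_download (the branch name is just an example from the table below):

```python
# Sketch: inspect the GPTQ parameters recorded in a quant branch.
import json
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download(
    repo_id="TheBloke/SauerkrautLM-3B-v1-GPTQ",
    filename="quantize_config.json",
    revision="gptq-8bit-128g-actorder_True",  # any branch from the table below
)
with open(cfg_path) as f:
    cfg = json.load(f)

# Expect keys such as bits, group_size, desc_act (Act Order) and damp_percent.
print(cfg)
```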
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| main | 4 | 32 | Yes | 0.1 | German Quad | 2048 | 2.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | German Quad | 2048 | 3.64 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | German Quad | 2048 | 3.71 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| gptq-8bit-32g-actorder_True | 8 | 32 | Yes | 0.1 | German Quad | 2048 | 3.94 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| gptq-4bit-64g-actorder_True | 4 | 64 | Yes | 0.1 | German Quad | 2048 | 2.15 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
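To enumerate the available branches programmatically rather than reading the table, a small sketch using huggingface_hub's list_repo_refs:

```python
# Sketch: list the quantisation branches of this repo.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("TheBloke/SauerkrautLM-3B-v1-GPTQ")
for branch in refs.branches:
    print(branch.name)  # e.g. main, gptq-8bit--1g-actorder_True, ...
```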
How to download, including from branches
In text-generation-webui
To download from the `main` branch, enter `TheBloke/SauerkrautLM-3B-v1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-3B-v1-GPTQ:gptq-8bit--1g-actorder_True`.
From the command line
I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `SauerkrautLM-3B-v1-GPTQ`:
```shell
mkdir SauerkrautLM-3B-v1-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GPTQ --local-dir SauerkrautLM-3B-v1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:

```shell
mkdir SauerkrautLM-3B-v1-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GPTQ --revision gptq-8bit--1g-actorder_True --local-dir SauerkrautLM-3B-v1-GPTQ --local-dir-use-symlinks False
```
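The same download can also be driven from Python with snapshot_download, mirroring the CLI flags above (a sketch; not required for any later step):

```python
# Sketch: Python equivalent of the huggingface-cli command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/SauerkrautLM-3B-v1-GPTQ",
    revision="gptq-8bit--1g-actorder_True",
    local_dir="SauerkrautLM-3B-v1-GPTQ",
    local_dir_use_symlinks=False,  # copy real files, as with the CLI flag
)
```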
More advanced huggingface-cli download usage
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason I don't list this as the default option, is that the files are then hidden away in the cache folder, making it harder to see where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```
And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir SauerkrautLM-3B-v1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GPTQ --local-dir SauerkrautLM-3B-v1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
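If you drive downloads from Python rather than the CLI, the same switch can be set with os.environ; it must be set before huggingface_hub is imported, since the library reads it at import time (a sketch under that assumption):

```python
# Sketch: enable hf_transfer for Python-side downloads.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # set before importing huggingface_hub

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/SauerkrautLM-3B-v1-GPTQ",
    local_dir="SauerkrautLM-3B-v1-GPTQ",
    local_dir_use_symlinks=False,
)
```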
With git (not recommended)
To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-8bit--1g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, since it has to store the model files twice (every byte is stored both in the intended target folder and again, as a blob, in the `.git` folder).
How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
- Click the Model tab.
- Under Download custom model or LoRA, enter `TheBloke/SauerkrautLM-3B-v1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/SauerkrautLM-3B-v1-GPTQ:gptq-8bit--1g-actorder_True` - see Provided Files above for the list of branches for each option.
- Click Download.
- The model will start downloading. Once it's finished it will say "Done".
- In the top left, click the refresh icon next to Model.
- In the Model dropdown, choose the model you just downloaded: `SauerkrautLM-3B-v1-GPTQ`.
- The model will automatically load, and is now ready for use!
- If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
- Note that you do not need to, and should not, set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
- Once you're ready, click the Text Generation tab and enter a prompt to get started!
Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SauerkrautLM-3B-v1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,  # send the fully formatted prompt, not just the bare user message
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)

print(f"Model output: {response}")
```
How to use this GPTQ model from Python code
Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
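To confirm the version requirements listed above are met, a quick sketch using the standard library's importlib.metadata (the names are the pip distribution names from the install commands):

```python
# Sketch: check the installed versions against the stated minimums.
from importlib.metadata import version

print("transformers:", version("transformers"))  # needs >= 4.33.0
print("optimum:", version("optimum"))            # needs >= 1.12.0
print("auto-gptq:", version("auto-gptq"))        # needs >= 0.4.2
```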
You can then use the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/SauerkrautLM-3B-v1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-8bit--1g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template = f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
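Note that model.generate returns the prompt tokens followed by the completion, so if you want only the new text, slice off the input length first (a short sketch reusing the variables from the block above):

```python
# Sketch: decode only the newly generated tokens, skipping the prompt echo.
new_tokens = output[0][input_ids.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```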
Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with Occ4m's GPTQ-for-LLaMa fork.
ExLlama is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
Huggingface Text Generation Inference (TGI) is compatible with all GPTQ models.
Discord
For further support, and discussions on these models and AI in general, join us on TheBloke AI's Discord server.
Thanks, and how to contribute
Thanks to the chirper.ai team!
Thanks to Clay from gpus.llm-utils.org!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donators!
And thank you again to a16z for their generous grant.
Original model card: VAGO solutions' SauerkrautLM 3B v1
VAGO solutions SauerkrautLM
Introducing SauerkrautLM-v1 - Your German Language Powerhouse!
We are thrilled to unveil our very first release, SauerkrautLM-v1. This release marks a significant milestone, as it is specifically tailored for the German-speaking community. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void. What sets SauerkrautLM-v1 apart is its versatility: whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the LLAMA 2 License, giving you the freedom to explore its potential in both private and commercial applications.

Performance is at the heart of SauerkrautLM-v1. We put it to the test using a customized version of MT-Bench for the German language, and the results speak volumes: it currently stands as the most robust German language model on Hugging Face (based on German MT-Bench results). Best of all, it comes in three different sizes (3B, 7B, 13B) to address your individual needs.

Our model's journey began with meticulous training on an augmented dataset using the QLoRA approach. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future.
Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before.
All Models
| Model | HF | GPTQ | GGUF |
| ----- | -- | ---- | ---- |
| SauerkrautLM-3b-v1 | Link | soon | soon |
| SauerkrautLM-7b-v1 | Link | soon | soon |
| SauerkrautLM-7b-v1-mistral | Link | soon | soon |
| SauerkrautLM-13b-v1 | Link | soon | soon |
Model Details
SauerkrautLM-3b-v1
Training Dataset:
SauerkrautLM was trained with a mix of German data augmentation and translated data. We found that a simple translation of training data alone can lead to unnatural German phrasing, so data augmentation techniques were used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.
Training Procedure:
SauerkrautLM-3b-v1 was fine-tuned using QLoRA on a single RTX 4090 24GB with Axolotl.
- Trained by: VAGO solutions
- Model Type: SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture
- Language(s): German, English
- License: LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Contact: Website David Golchinfar
Prompt Template:
```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```
Evaluation
MT-Bench-TrueGerman Response Example
SauerkrautLM-3b-v1
Prompt (turn 1): Stellen Sie sich vor, Sie verfassen einen Blogbeitrag, in dem Sie zwei beliebte Smartphone-Modelle vergleichen. Entwickeln Sie eine Gliederung für den Blogbeitrag, einschließlich der Hauptpunkte und Unterkapitel, um die Merkmale, die Leistung und die Benutzererfahrung der beiden Modelle effektiv zu vergleichen und gegenüberzustellen. Bitte antworten Sie in weniger als 200 Wörtern.
Prompt (turn 2): Nehmen Sie Ihre vorherige Antwort und formulieren Sie sie als Limerick um.
SauerkrautLM-3b-v1 answer (turn 1):
Klar, ich kann Ihnen gerne helfen! Hier sind die Hauptpunkte und Unterkapitel für Ihren Blogbeitrag zu den beiden beliebten Smartphone-Modellen:
1. Hauptpunkte:
- Untersuchen Sie die Leistung unterschiedlicher Smartphone-Modelle und erklären, welche sie überlegen machen.
- Betrachten Sie die Benutzererfahrung: Welche Funktionen und App-Erweiterungen verwendet der Benutzer des jeweiligen Smartphones?
- Überprüfen Sie die Batterieleistung: Welches Smartphone Model sorgfältig auf Batterieleistung achten kann?
- Betrachten Sie die Verfügbarkeit für zusätzliche Ladegeräte: Welches Smartphone Model die schnellere Ladezeit ermöglicht?
2. Unterkapitel:
- Leistungsfaktor: Untersuchen Sie die Leistungsfähigkeit der Smartphones, indem Sie verschiedene Tests durchführen, um festzustellen, welche Modelle überlegen.
- Benutzererfahrung: Betrachten Sie die Leistungsfähigkeit der Smartphones, indem Sie mit Benutzern aus verschiedenen Hintergrund-Nutzungsszenarien interagieren.
- Batterielaufzeit: Untersuchen Sie die Batterielaufzeit der Smartphones, um herauszufinden, welches Modell die längere Lebensdauer bietet.
- Zusätzliche Ladegeräte: Untersuchen Sie die Verfügbarkeit für zusätzliche Ladegeräte, indem Sie verschiedene Modelle testen, um festzustellen, welches Smartphone Modell die schnellere Ladezeit bietet.

SauerkrautLM-3b-v1 answer (turn 2):
Über den kühlen, weichen, süßen Gefühl, der während des Morgens herrscht, mögen wir uns nicht verwirren, während wir uns für den Tag entscheiden.
Prompt (turn 1): Sie sehen ein schönes rotes Haus auf Ihrer linken Seite und ein faszinierendes Gewächshaus auf Ihrer rechten Seite, einen attraktiven beheizten rosa Ort vor sich. Also, wo ist das Weiße Haus?
Prompt (turn 2): Enthält die ursprüngliche Frage Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen?
Reference answers: "Die Antwort ist 'Washington, DC'."; "Nein."
SauerkrautLM-3b-v1 answer (turn 1):
Das Weiße Haus ist auf der linken Seite des rechten Häuschens, wo sich der Gewächshaus befindet.
Hinweis: Der Gewächshaus ist nicht beheizt.

SauerkrautLM-3b-v1 answer (turn 2):
Ja, die ursprüngliche Frage vermittelt den Standort des Weißen Hauses eindeutig.
Prompt (turn 1): Betrachten Sie einen Satelliten, der sich in einer kreisförmigen Umlaufbahn um die Erde befindet. Die Geschwindigkeit des Satelliten nimmt ab. Was wird mit dem orbitalen Radius des Satelliten und seiner Umlaufdauer passieren? Bitte rechtfertigen Sie Ihre Antwort unter Verwendung physikalischer Prinzipien.
Prompt (turn 2): Was sind einige Sonderfälle oder Randfälle in Ihrer Lösung? Wie gehen Sie mit ihnen um?
Reference answer: Der orbitale Radius wird zunehmen und die Umlaufdauer wird zunehmen.
SauerkrautLM-3b-v1 answer (turn 1):
Wenn der Satellit sich in einer kreisförmigen Umlaufbahn um die Erde befindet, wird sein orbitaler Radius abnehmen. Die Umlaufdauer des Satelliten kann je nach seiner Geschwindigkeit und der Schwerkraft konstant bleiben, wenn er sich nicht dauerhaft in einer kreisförmigen Umlaufbahn befindet. Wenn der Satellit sich in einer kreisförmigen Umlaufbahn befindet, wird sein orbitaler Radius nach dem Start unterschiedlich lang dauern, da er sich mit der Schwerkraft anzieht. Je nach seiner Geschwindigkeit und der Schwerkraft wird der Satellit sein orbitaler Radius ungefähr 1,5-2,2 Mal seiner Geschwindigkeit abnehmen, wenn er sich in einer kreisförmigen Umlaufbahn befindet.

SauerkrautLM-3b-v1 answer (turn 2):
Einige Sonderfälle in Ihrer Lösung sind:
1. Zwischen zwei Schwerkraftwellen gibt es ein Schwerkraftschwank.
2. Ein Kernschaltungskern könnte ein Kernschaltungskern sein, der sich mit einer anderen Kernschaltung kreuzt.
3. Wenn der Kernschaltungskern sich in einem Kernschaltungskern befindet, könnte er seine eigenen Kernschaltungskern entstehen.
4. Zwei Kernschaltungskern könnten sich in einem Kernschaltungskern befinden und die Schwerkraft auf sich verteilen, was zu einer Schwerkraftwellen führt.
5. Ein Kernschaltungskern könnte aus mehreren Kernschaltungskernn bestehen.
6. Wenn ein Kernschaltungskern sich in einer anderen Kernschaltungskern befindet, könnte er die Schwerkraft auf sich verteilen, was zu einer Schwerkraftwellen führt.
7. Ein Kernschaltungskern könnte eine Schwerkraftwellen aus einer anderen Schwerkraftwellen befreien.
8. Ein Kernschaltungskern könnte eine Schwerkraftwellen von einer anderen Schwerkraftwellen verursachen.
Die Sonderfälle und Randfälle in Ihrer Lösung können je nach den spezifischen Konzepten und der Frage gelöst werden. Ebenso können Sie Sonderfälle oder Randfälle in Ihrer Lösung mit Ihren wissenschaftlichen Kenntnissen und Erfahrungen angehen.
Disclaimer
Our models have been meticulously trained on extensive datasets. While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.

Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the original Llama2 license remains applicable and is included with the model files.
Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website, or contact Dr. Daryoush Vaziri. We are also grateful for your feedback and suggestions.
Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.