TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
Fire Balloon's Baichuan Llama 7B GPTQ
These files are GPTQ 4bit model files for Fire Balloon's Baichuan Llama 7B.
They are the result of quantising to 4bit using GPTQ-for-LLaMa.
This model is a Llama conversion of [Baichuan Inc's Baichuan 7B](https://huggingface.co/baichuan-inc/baichuan-7B). It contains the same data, but rewritten by Fire Balloon into the familiar Llama format.
Repositories available
- 4-bit GPTQ models for GPU inference
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference
- Unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template
A general prompt template is unknown at this point.
The example given in the README is a 1-shot categorisation:
Hamlet->Shakespeare\nOne Hundred Years of Solitude->
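The same prompt can be assembled programmatically. A minimal sketch (the one_shot_prompt helper is illustrative, not from the original card):

# Build the 1-shot work->author prompt shown above; the trailing "->"
# leaves the answer for the model to complete.
def one_shot_prompt(example_work, example_author, query_work):
    return f"{example_work}->{example_author}\n{query_work}->"

prompt = one_shot_prompt("Hamlet", "Shakespeare", "One Hundred Years of Solitude")
assert prompt == "Hamlet->Shakespeare\nOne Hundred Years of Solitude->"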
How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
- Click the Model tab.
- Under Download custom model or LoRA, enter TheBloke/baichuan-llama-7B-GPTQ (a command-line alternative is sketched after these steps).
- Click Download.
- The model will start downloading. Once it's finished it will say "Done".
- In the top left, click the refresh icon next to Model.
- In the Model dropdown, choose the model you just downloaded: baichuan-llama-7B-GPTQ.
- The model will automatically load and is now ready for use!
- If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
- Note that you no longer need to, and should not, set manual GPTQ parameters. These are set automatically from the file quantize_config.json.
- Once you're ready, click the Text Generation tab and enter a prompt to get started!
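If you would rather fetch the model files from the command line than use the webui downloader, a minimal sketch using huggingface_hub (assuming it is installed; the local directory name is illustrative):

from huggingface_hub import snapshot_download

# Download every file in the GPTQ repo to a local folder.
snapshot_download(
    repo_id="TheBloke/baichuan-llama-7B-GPTQ",
    local_dir="baichuan-llama-7B-GPTQ",
)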
How to use this GPTQ model from Python code
First make sure you have AutoGPTQ installed:
pip install auto-gptq
Then try the following example code:
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/baichuan-llama-7B-GPTQ"
model_basename = "baichuan-llama-7b-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# quantize_config=None: the parameters are read from quantize_config.json in the repo.
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)
# Note: the correct prompt template for this model is unknown (see "Prompt template"
# above); the USER/ASSISTANT format below is only a placeholder.
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
# do_sample=True is needed for temperature to take effect.
output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
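Note that by default the pipeline's generated_text contains the prompt followed by the completion; pass return_full_text=False to the pipeline call if you only want the newly generated text.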
Provided files
baichuan-llama-7b-GPTQ-4bit-128g.no-act.order.safetensors
This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.
baichuan-llama-7b-GPTQ-4bit-128g.no-act.order.safetensors
- Works with AutoGPTQ in CUDA or Triton modes.
- LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama), which usually provides much higher performance, and uses less VRAM, than AutoGPTQ.
- Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
- Works with text-generation-webui, including one-click-installers.
- Parameters: Groupsize = 128. Act Order / desc_act = False. (A quick way to verify these from the shipped quantize_config.json is sketched below.)
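Since the GPTQ parameters are read from quantize_config.json, you can sanity-check them against the list above. A minimal sketch (the local path assumes you downloaded the repo as described earlier):

import json

# Inspect the quantisation parameters shipped with the repo.
with open("baichuan-llama-7B-GPTQ/quantize_config.json") as f:
    cfg = json.load(f)

# Expect values consistent with the card: bits = 4, group_size = 128, desc_act = False.
print(cfg)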
Discord
For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)
Thanks, and how to contribute.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
Original model card: Fire Balloon's Baichuan Llama 7B
baichuan-llama-7B
baichuan-7B model saved in the format of the LLaMA model. You can directly use LlamaForCausalLM and LlamaTokenizer to load the model.
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")
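Because the checkpoint is saved in the Llama format, the plain Llama classes work as well, as the card states. A minimal sketch:

from transformers import LlamaForCausalLM, LlamaTokenizer

# The converted checkpoint loads directly with the Llama classes.
tokenizer = LlamaTokenizer.from_pretrained("fireballoon/baichuan-llama-7b")
model = LlamaForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")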
The following is from the original repo baichuan-7B.
baichuan-7B
baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
If you wish to use baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library baichuan-7B.
Why use baichuan-7B
- Among models of the same size, baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the MMLU metrics below.
- baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, baichuan-7B employs a more permissive open-source license that allows commercial use.
How to Get Started with the Model
The following is a 1-shot inference task using baichuan-7B, in which the author's name is produced given the work; the correct output is "夜雨寄北->李商隐" ("Night Rain, a Letter Sent North" -> Li Shangyin).
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
The following is the same 1-shot inference task in English, in which the author's name is produced given the work; the correct output is "One Hundred Years of Solitude->Gabriel Garcia Marquez".
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-llama-7b", device_map="auto")
inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
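Only the text after the final "->" is the model's answer. One way to isolate it is to decode only the newly generated tokens (an illustrative convenience, not part of the original card):

# Skip the prompt tokens and decode just the continuation.
new_tokens = pred.cpu()[0][inputs['input_ids'].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))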
Model Details
Model Description
- Developed by: 百川智能(Baichuan Intelligent Technology)
- Email: opensource@baichuan-inc.com
- Language(s) (NLP): Chinese/English
- License: baichuan-7B License
Model Sources
The overall model is based on the standard Transformer structure, and we have adopted the same model design as LLaMA:
- Position Embedding: We use rotary-embedding, which is the position encoding scheme adopted by most models at this stage, and it has excellent extrapolation capabilities.
- Feedforward Layer: We use SwiGLU. The feedforward size is (8/3) times the hidden size, i.e. 11008 (a sketch of this block follows the table below).
- Layer Normalization: Pre-Normalization based on RMSNorm.
The specific parameters are as follows:
Hyperparameter | Value |
---|---|
n_parameters | 7000559616 |
n_layers | 32 |
n_heads | 32 |
d_model | 4096 |
vocab size | 64000 |
sequence length | 4096 |
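As a concrete illustration of the feedforward design listed above, here is a minimal PyTorch sketch of a LLaMA-style SwiGLU block with these dimensions (an illustration of the stated design, not the model's actual code; note that 11008 is (8/3) × 4096 ≈ 10923 rounded up to a multiple of 256):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """LLaMA-style SwiGLU feedforward: down(silu(gate(x)) * up(x))."""

    def __init__(self, d_model=4096, d_ff=11008):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x):
        # The SiLU-gated branch modulates the linear "up" branch;
        # the product is projected back to d_model.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))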
Uses
Downstream Use
We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to baichuan-7B.
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Training Details
For specific training settings, please refer to baichuan-7B.
Evaluation
Chinese Evaluation
C-Eval
C-Eval is a comprehensive Chinese evaluation dataset for foundation models, covering 52 subjects and four difficulty levels. We used the dev split of the dataset as the few-shot source and ran a 5-shot test on the test split.
Model (5-shot) | Average | Avg (Hard) | STEM | Social Sciences | Humanities | Others |
---|---|---|---|---|---|---|
GPT-4 | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8 |
ChatGPT | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6 |
Claude-v1.3 | 54.2 | 39.0 | 51.9 | 61.7 | 52.1 | 53.7 |
Claude-instant-v1.0 | 45.9 | 35.5 | 43.1 | 53.8 | 44.2 | 45.4 |
moss-moon-003-base (16B) | 27.4 | 24.5 | 27.0 | 29.1 | 27.2 | 26.9 |
Ziya-LLaMA-13B-pretrain | 30.2 | 22.7 | 27.7 | 34.4 | 32.0 | 28.9 |
LLaMA-7B-hf | 27.1 | 25.9 | 27.1 | 26.8 | 27.9 | 26.3 |
ChatGLM-6B | 34.5 | 23.1 | 30.4 | 39.6 | 37.4 | 34.5 |
Falcon-7B | 25.8 | 24.3 | 25.8 | 26.0 | 25.8 | 25.6 |
Open-LLaMA-v2-pretrain (7B) | 24.0 | 22.5 | 23.1 | 25.3 | 25.2 | 23.2 |
TigerBot-7B-base | 25.7 | 27.0 | 27.3 | 24.7 | 23.4 | 26.1 |
Aquila-7B* | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
baichuan-7B | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
Gaokao
Gaokao is a dataset of questions from China's college entrance examination (the Gaokao), used to evaluate large language models' language ability and logical reasoning. We retained only the single-answer multiple-choice questions and ran a unified 5-shot test on all models.
The results are as follows.
Model | Average |
---|---|
Open-LLaMA-v2-pretrain | 21.41 |
Ziya-LLaMA-13B-pretrain | 23.17 |
Falcon-7B | 23.98 |
TigerBot-7B-base | 25.94 |
LLaMA-7B | 27.81 |
ChatGLM-6B | 21.41 |
BLOOM-7B | 26.96 |
BLOOMZ-7B | 28.72 |
Aquila-7B* | 24.39 |
baichuan-7B | 36.24 |
AGIEval
AGIEval is designed to evaluate a model's general abilities on cognition- and problem-solving-related tasks. We retained only the four-option single-answer multiple-choice questions and, after a random split, ran a unified 5-shot test on all models.
Model | Average |
---|---|
Open-LLaMA-v2-pretrain | 23.49 |
Ziya-LLaMA-13B-pretrain | 27.64 |
Falcon-7B | 27.18 |
TigerBot-7B-base | 25.19 |
LLaMA-7B | 28.17 |
ChatGLM-6B | 23.49 |
BLOOM-7B | 26.55 |
BLOOMZ-7B | 30.27 |
Aquila-7B* | 25.58 |
baichuan-7B | 34.44 |
*Results for the Aquila model are taken from the official BAAI website and are for reference only.
English Leaderboard
In addition to Chinese, we also tested the model's performance in English.
MMLU
MMLU is an English evaluation dataset that includes 57 multiple-choice tasks, covering elementary mathematics, American history, computer science, law, etc. The difficulty ranges from high school level to expert level, making it a mainstream LLM evaluation dataset.
We adopted the open-source evaluation scheme, and the final 5-shot results are as follows:
Model | Humanities | Social Sciences | STEM | Other | Average |
---|---|---|---|---|---|
LLaMA-7B² | 34.0 | 38.3 | 30.5 | 38.1 | 35.1 |
Falcon-7B¹ | - | - | - | - | 35.0 |
mpt-7B¹ | - | - | - | - | 35.6 |
ChatGLM-6B⁰ | 35.4 | 41.0 | 31.3 | 40.5 | 36.9 |
BLOOM 7B⁰ | 25.0 | 24.4 | 26.5 | 26.4 | 25.5 |
BLOOMZ 7B⁰ | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
moss-moon-003-base (16B)⁰ | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
moss-moon-003-sft (16B)⁰ | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
baichuan-7B⁰ | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
The superscript in the Model column indicates the source of the results.
⁰: reimplemented
¹: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
²: https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu