Original model: https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b

LoRA: https://huggingface.co/ziqingyang/chinese-llama-plus-lora-7b
https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-7b

pygmalion-7b was merged with chinese-llama-plus-lora-7b and chinese-alpaca-plus-lora-7b to strengthen its Chinese ability, although its Chinese output can still read like translated text. A sketch of the merge command appears after the project links below.

Projects used: https://github.com/ymcui/Chinese-LLaMA-Alpaca

https://github.com/qwopqwop200/GPTQ-for-LLaMa
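
For reference, a merge along these lines can be reproduced with the merge script documented in the Chinese-LLaMA-Alpaca repository. This is a sketch: the paths are placeholders, and the script name and flags follow that repo's documentation at the time of writing, so check its wiki for the version you have.

```bash
# Apply both Plus LoRAs (comma-separated, the LLaMA LoRA first)
# on top of the HF-format base model
python scripts/merge_llama_with_chinese_lora.py \
    --base_model path/to/pygmalion-7b-hf \
    --lora_model ziqingyang/chinese-llama-plus-lora-7b,ziqingyang/chinese-alpaca-plus-lora-7b \
    --output_type huggingface \
    --output_dir path/to/merged-model
```

The Plus LoRAs also extend the tokenizer with Chinese tokens, which is why the repo's script (rather than a plain peft adapter merge) is the documented route: it resizes the embeddings before applying the adapters.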

Compatible with both AutoGPTQ and GPTQ-for-LLaMa.
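
With AutoGPTQ, loading looks roughly like the sketch below; `model_dir` is a placeholder for wherever you downloaded this repo, and `use_safetensors` should match the checkpoint format actually present.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "path/to/this-model"  # placeholder: local download of this repo

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    use_safetensors=True,  # set False if the checkpoint is a .pt/.bin file
)

prompt = "你好,请用中文介绍一下你自己。"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```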
If you load the model with GPTQ-for-LLaMa, set Wbits=4, groupsize=128, model_type=llama.
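
Those three settings correspond to fields in text-generation-webui's model loader. In 2023-era versions of the webui the equivalent command-line flags were as follows (a sketch: the model directory name is a placeholder, and newer webui versions may have renamed or removed these flags):

```bash
python server.py --model pygmalion-7b-chinese --wbits 4 --groupsize 128 --model_type llama
```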

Text-generation-webui one-click package (Chinese guide): https://www.bilibili.com/read/cv23495183


