
Original model: https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF

LoRA adapters:
https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b
https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-13b

Wizard-Vicuna-13B-Uncensored-HF was merged with chinese-llama-plus-lora-13b and chinese-alpaca-plus-lora-13b to strengthen the model's Chinese ability, although the output can still read like translated English ("translationese"). A rough sketch of the merge is shown below.
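For reference, the general shape of such a merge with the PEFT library looks like the following minimal sketch. It is illustrative only: the actual merge was done with the Chinese-LLaMA-Alpaca project's merge script, which also handles the extended Chinese tokenizer in detail, and the output path here is hypothetical.

```python
# Minimal PEFT merge sketch (illustrative; the real merge used the
# Chinese-LLaMA-Alpaca project's merge script).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "TheBloke/Wizard-Vicuna-13B-Uncensored-HF",
    torch_dtype=torch.float16,
)

# The Chinese LoRAs ship an extended tokenizer, so the embedding matrix has to
# be resized to the larger vocabulary before the adapter weights can be applied.
tokenizer = LlamaTokenizer.from_pretrained("ziqingyang/chinese-llama-plus-lora-13b")
base.resize_token_embeddings(len(tokenizer))

# Apply and fold in each LoRA in turn: llama-plus first, then alpaca-plus.
for lora_id in ("ziqingyang/chinese-llama-plus-lora-13b",
                "ziqingyang/chinese-alpaca-plus-lora-13b"):
    base = PeftModel.from_pretrained(base, lora_id)
    base = base.merge_and_unload()

base.save_pretrained("wizard-vicuna-13b-chinese-merged")   # hypothetical output path
tokenizer.save_pretrained("wizard-vicuna-13b-chinese-merged")
```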

Projects used: https://github.com/ymcui/Chinese-LLaMA-Alpaca

https://github.com/qwopqwop200/GPTQ-for-LLaMa

Compatible with both AutoGPTQ and GPTQ-for-LLaMa.
If loading with GPTQ-for-LLaMa, set Wbits=4, groupsize=128, model_type=llama (see the loading sketch below).
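For AutoGPTQ users, loading looks roughly like the sketch below. The repository id is a placeholder for this model's repo, and `use_safetensors` should be adjusted to match the checkpoint files actually present.

```python
# Rough AutoGPTQ loading sketch; assumes a 4-bit, group-size-128 GPTQ checkpoint.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "path/to/this-gptq-repo"  # placeholder: use this repository's id

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,  # set to match the checkpoint format
)

prompt = "你好，请用中文介绍一下你自己。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```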

Text-generation-webui one-click bundle (in Chinese): https://www.bilibili.com/read/cv23495183


