**Model upgraded and fine-tuned starting from the LLaMA model. I hope everyone creates models starting from this open-source project.**
GPTQ conversion command (on the CUDA branch): `CUDA_VISIBLE_DEVICES=0 python llama.py ../capibara-17b-4bit c4 --wbits 4 --true-sequential --groupsize 128 --save capibara-17b-4bit-128g.pt`
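
If you want to load the resulting 4-bit checkpoint from Python, the sketch below uses the AutoGPTQ library as one possible route. It is not part of this repo: the `auto_gptq` dependency, the local paths, and the prompt are assumptions, and an old-style `.pt` checkpoint may need extra compatibility handling.

```python
# A minimal loading sketch, assuming the auto-gptq and transformers packages
# and that the .pt file produced above sits inside the model directory.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import LlamaTokenizer

model_dir = "capibara-17b-4bit"        # assumed local path
basename = "capibara-17b-4bit-128g"    # matches the --save name above, without .pt

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename=basename,
    use_safetensors=False,  # the checkpoint above is a plain .pt file
    device="cuda:0",
    # No quantize_config.json ships with this kind of checkpoint, so describe it here.
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128, desc_act=False),
)

prompt = "Capybaras are"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```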
Added 1 token to the tokenizer model: `python llama-tools/add_tokens.py capibara-17b/tokenizer.model /content/tokenizer.model llama-tools/test_list.txt`
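
The `add_tokens.py` script itself is not reproduced here; as a rough, hypothetical equivalent, the same effect can be sketched with the transformers API. The token string below is a placeholder, since the real token comes from `llama-tools/test_list.txt`.

```python
# A hypothetical sketch of adding one token with transformers; "<new_token>"
# stands in for whatever llama-tools/test_list.txt actually contains.
from transformers import LlamaTokenizer, LlamaForCausalLM

model_dir = "capibara-17b"  # assumed local path holding tokenizer.model and weights

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = LlamaForCausalLM.from_pretrained(model_dir)

num_added = tokenizer.add_tokens(["<new_token>"])
if num_added > 0:
    # Grow the embedding matrix so the new token id has a corresponding row.
    model.resize_token_embeddings(len(tokenizer))

tokenizer.save_pretrained(model_dir)
model.save_pretrained(model_dir)
```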
Enjoy