KARAKURI LM 70B Chat v0.1 - AWQ

Description

This repo contains AWQ model files for KARAKURI LM 70B Chat v0.1.

How to get the AWQ model

I created the AWQ model files using autoawq==0.2.3.

pip install autoawq==0.2.3

This is the Python code used to create the AWQ model.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "karakuri-ai/karakuri-lm-70b-chat-v0.1"

# 4-bit weights, group size 128, GEMM kernels
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize with a Japanese Wikipedia calibration dataset
model.quantize(tokenizer, quant_config=quant_config, calib_data="mmnga/wikipedia-ja-20230720-1k")

# Save the quantized model and tokenizer
quant_path = "karakuri-lm-70b-v0.1-AWQ"
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
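
To sanity-check the exported files, the quantized model can be loaded back with AutoAWQ itself. A minimal sketch, not part of the original workflow; the prompt and generation settings are illustrative:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "karakuri-lm-70b-v0.1-AWQ"

# Load the quantized weights; fuse_layers enables AutoAWQ's fused inference kernels
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# Illustrative smoke test (requires a GPU)
tokens = tokenizer("User Prompt", return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))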

Usage

from vllm import LLM, SamplingParams

# Greedy decoding, up to 100 new tokens
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="masao1211/karakuri-lm-70b-chat-v0.1-AWQ", max_model_len=4096)

system_prompt = "System prompt"

messages = [{"role": "system", "content": system_prompt}]
messages.append({"role": "user", "content": "User Prompt"})
# Render the chat template to a plain string; vLLM tokenizes it internally
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
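
Since llm.generate accepts a list of prompts, several independent conversations can be rendered and decoded in one batch. A brief sketch building on the objects defined above (llm, sampling_params, system_prompt); the user prompts are placeholders:

# Batch several chats in a single generate call
conversations = [
    [{"role": "system", "content": system_prompt},
     {"role": "user", "content": "User Prompt 1"}],
    [{"role": "system", "content": system_prompt},
     {"role": "user", "content": "User Prompt 2"}],
]
tokenizer = llm.llm_engine.tokenizer.tokenizer
prompts = [
    tokenizer.apply_chat_template(conversation=c, add_generation_prompt=True, tokenize=False)
    for c in conversations
]
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)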