
4-bit GEMM AWQ Quantizations of L3-8B-Stheno-v3.2

Quantized using AutoAWQ release v0.2.5.

Original model: https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2

Prompt format

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
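
If you build prompts with transformers, the tokenizer's bundled chat template should reproduce this format for you. A quick check, as a sketch (assumes the tokenizer shipped in this repo):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bartowski/L3-8B-Stheno-v3.2-AWQ")
text = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "{system_prompt}"},
        {"role": "user", "content": "{prompt}"},
    ],
    tokenize=False,               # return the raw prompt string
    add_generation_prompt=True,   # append the trailing assistant header
)
print(text)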

AWQ Parameters

  • q_group_size: 128
  • w_bit: 4
  • zero_point: True
  • version: GEMM
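
These map directly to the quant_config dict AutoAWQ takes at quantization time. A minimal sketch of that step (illustrative, not the exact script used for this release; the paths are placeholders):

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Sao10K/L3-8B-Stheno-v3.2"   # original fp16 model
quant_path = "L3-8B-Stheno-v3.2-AWQ"      # output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model, quantize with the parameters above, and save
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)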

How to run

Adapted from the examples in the AutoAWQ repo: https://github.com/casper-hansen/AutoAWQ

First, install the autoawq PyPI package:

pip install autoawq

Then run the following:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer


quant_path = "bartowski/L3-8B-Stheno-v3.2-AWQ"  # this repo on the Hub (or a local copy)

# Load the quantized model with fused layers for faster inference
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "You're standing on the surface of the Earth. " \
    "You walk one mile south, one mile west and one mile north. " \
    "You end up exactly where you started. Where are you?"

chat = [
    {"role": "system", "content": "You are a concise assistant that helps answer questions."},
    {"role": "user", "content": prompt},
]

# Llama 3 models end each turn with <|eot_id|>, so include it as a stop token
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

tokens = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,  # append the assistant header so the model starts its reply
    return_tensors="pt"
).cuda()

# Generate output
generation_output = model.generate(
    tokens, 
    streamer=streamer,
    max_new_tokens=64,
    eos_token_id=terminators
)
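
The TextStreamer prints tokens to stdout as they are generated. If you also want the full text afterwards, decode the returned ids, which include the prompt:

print(tokenizer.decode(generation_output[0], skip_special_tokens=True))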

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
