
A LoRA SFT fine-tuned version of [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat).

Load the base model, attach the LoRA adapter, and run a chat-formatted generation:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the base model and wrap it with the LoRA adapter
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B-Chat")
model = PeftModel.from_pretrained(model, "eren23/finetune_test_qwen15-1-8b-sft")
model = model.to(device)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B-Chat")

# Build the prompt with the model's chat template
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    input_ids=model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512,
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
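For deployment without the PEFT wrapper, the adapter weights can be folded into the base model. A minimal sketch using PEFT's `merge_and_unload` (the output directory name is a placeholder, not part of this card):

```python
# Merge the LoRA weights into the base model so it can be used as a
# plain transformers model (no peft dependency at inference time).
merged = model.merge_and_unload()

# "qwen15-1.8b-sft-merged" is a hypothetical local path
merged.save_pretrained("qwen15-1.8b-sft-merged")
tokenizer.save_pretrained("qwen15-1.8b-sft-merged")
```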

## Framework versions

- PEFT 0.8.2
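If loading the adapter misbehaves, it is worth confirming the installed PEFT version against the one above; a quick check:

```python
import peft

# The adapter was saved with PEFT 0.8.2; newer releases can usually
# load it, but pinning the same version is the safest reproduction path.
print(peft.__version__)
```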

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 43.27 |
| AI2 Reasoning Challenge (25-Shot) | 36.18 |
| HellaSwag (10-Shot)               | 57.77 |
| MMLU (5-Shot)                     | 44.96 |
| TruthfulQA (0-shot)               | 38.00 |
| Winogrande (5-shot)               | 61.17 |
| GSM8k (5-shot)                    | 21.53 |
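The leaderboard computes these scores with EleutherAI's lm-evaluation-harness. A hedged sketch of re-running the ARC number locally (harness version and any leaderboard settings beyond the shot counts are assumptions):

```python
# Sketch: re-run one leaderboard task with lm-evaluation-harness
# (pip install lm-eval). Uses the harness's `peft=` model arg to load
# the adapter on top of the base model.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=Qwen/Qwen1.5-1.8B-Chat,"
        "peft=eren23/finetune_test_qwen15-1-8b-sft"
    ),
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # 25-shot, matching the table above
)
print(results["results"]["arc_challenge"])
```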