---
language:
  - en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
  - unsloth
  - LoRA
datasets:
  - TIGER-Lab/MathInstruct
base_model:
  - Qwen/Qwen2.5-7B-Instruct
---

These are the LoRA adapters for [Komodo-7B-Instruct](https://huggingface.co/suayptalha/Komodo-7B-Instruct), trained on top of Qwen/Qwen2.5-7B-Instruct with the TIGER-Lab/MathInstruct dataset.
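
If you just want to try the adapter without setting up quantization yourself, `peft` can resolve the base model from the adapter config and load both in one call. A minimal sketch (assumes a CUDA GPU with enough memory for the fp16 base model); the full 4-bit setup is shown below:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads Qwen/Qwen2.5-7B-Instruct (read from the adapter config) and applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(
    "suayptalha/Komodo-LoRA",
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```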

Suggested Usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

model_name = "Qwen/Qwen2.5-7B-Instruct"

# 4-bit NF4 quantization so the 7B base model fits in limited GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16
)

# Load the quantized base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=bnb_config
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach the Komodo LoRA adapters to the base model
adapter_path = "suayptalha/Komodo-LoRA"
model = PeftModel.from_pretrained(model, adapter_path)

example_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        example_prompt.format(
            "",  # Your question here
            "",  # Additional input/context here (leave empty if none)
            "",  # Leave empty for generation; only filled when building training examples
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
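
To serve or share the fine-tuned model without a `peft` dependency, the adapter can be merged into the base weights. Merging into a 4-bit quantized model is lossy, so this sketch (assumption: enough memory for the full fp16 model) loads the base model unquantized first:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in fp16 (not 4-bit) so the LoRA weights can be merged cleanly
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(base, "suayptalha/Komodo-LoRA").merge_and_unload()

# The result is a plain Qwen2.5 checkpoint that no longer needs peft to load
merged.save_pretrained("komodo-7b-instruct-merged")
```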