Uploaded model

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

How to run elyza-tasks-100-TV benchmark

This section explains how to run the Japanese LLM benchmark elyza-tasks-100-TV with this model. We assume a free Google Colaboratory GPU instance (T4) as the execution environment, but any GPU environment with equivalent or greater VRAM and an equivalent Python environment will work.

First, copy the benchmark task file "elyza-tasks-100-TV_0.jsonl" to your working directory (the script below expects this filename). Then execute all the scripts below in order. All Hugging Face models used here are public, so you do not need to provide an HF_TOKEN.
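Entries in the task file may be pretty-printed across multiple lines, so the loader in the script below accumulates stripped lines until the buffer ends with a closing brace before parsing each JSON object. A minimal self-contained sketch of that approach, using a hypothetical inline sample instead of the real file:

```python
import json

def load_multiline_jsonl(text):
    """Parse JSONL where a record may span several lines: accumulate
    stripped lines until the buffer ends with '}', then parse it."""
    records, buf = [], ""
    for line in text.splitlines():
        buf += line.strip()
        if buf.endswith("}"):
            records.append(json.loads(buf))
            buf = ""
    return records

# Hypothetical sample: one record spans two lines, one fits on a single line
sample = '{"task_id": 0,\n "input": "task A"}\n{"task_id": 1, "input": "task B"}'
records = load_multiline_jsonl(sample)
print(len(records))  # 2
```

Note that this heuristic assumes no string value inside a record ends a line with "}"; it is sufficient for the benchmark file but is not a general JSONL parser.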

%%capture
# Install Unsloth (latest Colab build from GitHub) plus up-to-date torch and peft
!pip install unsloth
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install -U torch
!pip install -U peft

from unsloth import FastLanguageModel
from peft import PeftModel
import torch
import json
from tqdm import tqdm
import re

# Base model and LoRA adapter (both public on Hugging Face)
model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "kibuna/llm-jp-3-13b-it-elyza100-ichikara003001_lora"
HF_TOKEN = "" #@param {type:"string"}

dtype = None  # auto-detect
load_in_4bit = True  # 4-bit quantization to fit in T4 VRAM

# Load the base model in 4-bit, then attach the LoRA adapter
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
# Load the task file; a record may span multiple lines, so accumulate
# stripped lines until the buffer ends with '}' before parsing
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        item += line.strip()
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Switch Unsloth to inference mode for faster generation
FastLanguageModel.for_inference(model)

# Generate an answer for each task with greedy decoding
results = []
for dt in tqdm(datasets):
    task_input = dt["input"]

    prompt = f"""### 指示\n{task_input}\n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True, do_sample=False, repetition_penalty=1.2)
    # Keep only the text after the final "### 回答" marker
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]

    results.append({"task_id": dt["task_id"], "input": task_input, "output": prediction})

# Save the results as JSONL, named after the adapter repository
json_file_id = re.sub(".*/", "", adapter_id)
with open(f"/content/{json_file_id}_output.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')

The outputs are saved to the JSONL file "llm-jp-3-13b-it-elyza100-ichikara003001_lora_output.jsonl" under /content.
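The decoded text contains the prompt echo followed by the model's answer; the script keeps only the text after the last "### 回答" marker. A minimal illustration of that extraction on a hypothetical decoded string:

```python
# Hypothetical decoded output: prompt echo followed by the model's answer
decoded = "### 指示\n日本で一番高い山は?\n### 回答\n富士山です。"

# Split on the marker and keep the last segment, as the script above does
prediction = decoded.split('\n### 回答')[-1]
print(prediction.strip())  # 富士山です。
```

Using [-1] rather than [1] means the extraction still works if the marker happens to appear inside the task input itself.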

