---
library_name: transformers
datasets:
  - kinokokoro/ichikara-instruction-003
language:
  - ja
base_model:
  - llm-jp/llm-jp-3-13b
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** Japanese
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** llm-jp/llm-jp-3-13b

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## How to Generate Outputs

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
from peft import PeftModel
import torch
from tqdm import tqdm
import json
import re

HF_TOKEN = "Hugging Face Token"  # set your Hugging Face access token here

model_id = "KazutoHaruguchi/llm-jp-3-13b-finetune"

# 4-bit quantization configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # nf4 is more accurate than plain INT4 and better matches the weight distribution of neural networks
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Read the evaluation tasks; each JSON object may be wrapped across several lines
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Generate an answer for every task
results = []
for data in tqdm(datasets):
    input = data["input"]

    prompt = f"""### 指示
{input}
### 回答
"""

    tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    attention_mask = torch.ones_like(tokenized_input)

    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            attention_mask=attention_mask,
            max_new_tokens=100,
            do_sample=False,
            repetition_penalty=1.2,
            pad_token_id=tokenizer.eos_token_id
        )[0]
    # Decode only the newly generated tokens, excluding the prompt
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)

    results.append({"task_id": data["task_id"], "input": input, "output": output})

# Write the results as JSONL, named after the model ID
jsonl_id = re.sub(".*/", "", model_id)
with open(f"./{jsonl_id}-outputs.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)  # ensure_ascii=False for handling non-ASCII characters
        f.write('\n')
```
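As a quick sanity check of the results, the following is a minimal sketch for reading back the generated output file (the file name `llm-jp-3-13b-finetune-outputs.jsonl` is an assumption that follows from `model_id` in the script above; only the standard library is used):

```python
import json

# Path produced by the script above: "<model name>-outputs.jsonl"
output_path = "./llm-jp-3-13b-finetune-outputs.jsonl"

# Each line is one JSON record with "task_id", "input", and "output" keys
with open(output_path, "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

# Print the first few answers for inspection
for record in records[:3]:
    print(record["task_id"], record["output"][:50])
```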