Text Generation Β· Transformers Β· Safetensors Β· English Β· Japanese
Taka008 committed Β· Commit 8894718 Β· 1 parent: 1a4e580

Update README.md

Files changed (1): README.md (+4 βˆ’2)
README.md CHANGED
@@ -64,9 +64,11 @@ Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models
 
 ```python
 import torch
-from transformers import AutoTokenizer, AutoModelForCausalLM
+from transformers import AutoTokenizer
+from peft import AutoPeftModelForCausalLM
+
 tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1")
-model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1", device_map="auto", torch_dtype=torch.float16)
+model = AutoPeftModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1", device_map="auto", torch_dtype=torch.float16)
 text = "δ»₯下は、タスクをθͺ¬ζ˜Žγ™γ‚‹ζŒ‡η€Ίγ§γ™γ€‚θ¦ζ±‚γ‚’ι©εˆ‡γ«ζΊ€γŸγ™εΏœη­”γ‚’ζ›Έγγͺさい。\n\n### ζŒ‡η€Ί:\n{instruction}\n\n### εΏœη­”:\n".format(instruction="θ‡ͺ焢言θͺžε‡¦η†γ¨γ―何か")
 tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
 with torch.no_grad():
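
The change swaps `transformers.AutoModelForCausalLM` for `peft`'s `AutoPeftModelForCausalLM`, which reads the adapter config in this repository, loads the referenced base model, and applies the LoRA weights on top of it. For reference, a minimal runnable sketch of the updated example follows; the hunk above cuts off at `with torch.no_grad():`, so the `generate` call and its sampling parameters (`max_new_tokens`, `do_sample`, `top_p`, `temperature`) are illustrative assumptions, not part of this commit.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model_id = "llm-jp/llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AutoPeftModelForCausalLM loads the base model named in the adapter config
# and applies the LoRA adapter weights from this repository on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
# Prompt template (Japanese): "Below is an instruction that describes a task.
# Write a response that appropriately fulfills the request." The sample
# instruction asks "What is natural language processing?".
text = "δ»₯下は、タスクをθͺ¬ζ˜Žγ™γ‚‹ζŒ‡η€Ίγ§γ™γ€‚θ¦ζ±‚γ‚’ι©εˆ‡γ«ζΊ€γŸγ™εΏœη­”γ‚’ζ›Έγγͺさい。\n\n### ζŒ‡η€Ί:\n{instruction}\n\n### εΏœη­”:\n".format(instruction="θ‡ͺ焢言θͺžε‡¦η†γ¨γ―何か")
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Sampling settings below are assumptions for illustration only.
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
    )[0]
print(tokenizer.decode(output))
```

Note that this snippet requires the `peft` package to be installed alongside `transformers`.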