---
library_name: peft
base_model: tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1
language:
- ja
license: apache-2.0
tags:
- text-generation-inference
- transformers
- trl
- mixtral
datasets:
- kunishou/amenokaku-code-instruct
license_name: mixtral
---

# Uploaded model

- **Developed by:** taoki
- **License:** apache-2.0
- **Finetuned from model:** tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1

# Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_name = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"

# Load the tokenizer for the base model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model in 4-bit and apply the QLoRA adapter on top
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "taoki/Swallow-MX-8x7b-NVE-v0.1-qlora-amenokaku-code-adapter")

# Prompt: "Output the writing styles of Murasaki Shikibu and Sei Shonagon as JSON."
prompt = """### Instruction:
紫式部と清少納言の作風をjsonで出力してください。

### Response:
"""

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=1024,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```

# Output

````
### Instruction:
紫式部と清少納言の作風をjsonで出力してください。

### Response:
```json
{
  "紫式部": "貴人に会って、その人が話していることを思い出しながら奏でると、これにまさる楽器はありません。」,
  "清少納言": "人によってあげくはなく、おのずからかなしくゆくほどに、かなしみは深くなりゆきなさるなり。」
}
```
````

# Framework versions

- PEFT 0.9.0
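# Streaming the output

For interactive use you may prefer to see tokens as they are produced rather than waiting for the full completion. Below is a minimal sketch using `transformers.TextStreamer`, reusing `model`, `tokenizer`, and `input_ids` from the Usage snippet above; the streamer is not part of the original example.

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated;
# skip_prompt=True suppresses echoing the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=1024,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
    streamer=streamer,
)
```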
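# Merging the adapter

If you want a standalone checkpoint with no runtime `peft` dependency, the adapter can be folded into the base weights with `merge_and_unload()`. A minimal sketch, assuming the base model is reloaded in bfloat16 (merging into 4-bit-quantized weights is not supported) and that enough memory is available for the full-precision model; `./merged` is a placeholder output path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"
adapter_id = "taoki/Swallow-MX-8x7b-NVE-v0.1-qlora-amenokaku-code-adapter"

# Reload the base model in full bfloat16 so the LoRA deltas can be merged
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Fold the adapter weights into the base model and drop the PEFT wrappers
merged = model.merge_and_unload()

# "./merged" is a placeholder path; the result loads with plain transformers
merged.save_pretrained("./merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./merged")
```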