# QuantFactory/Mistral-Nemo-Japanese-Instruct-2408-GGUF

This is a quantized GGUF version of [cyberagent/Mistral-Nemo-Japanese-Instruct-2408](https://huggingface.co/cyberagent/Mistral-Nemo-Japanese-Instruct-2408), created using llama.cpp.
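GGUF files from this repository can be run directly with the llama.cpp CLI. A minimal sketch of a conversation-mode invocation, assuming a recent llama.cpp build; the quant filename below is hypothetical, so check this repository's file list for the exact name:

```shell
# Download a quant from this repo first (filename is an assumption --
# pick the actual .gguf file that matches your quality/size trade-off).
llama-cli -m Mistral-Nemo-Japanese-Instruct-2408.Q4_K_M.gguf \
  -cnv --chat-template chatml \
  -p "あなたは親切なAIアシスタントです。"  # system prompt: "You are a helpful AI assistant."
```

`--chat-template chatml` matches the prompt format documented below; `-cnv` starts an interactive chat session.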
## Original Model Card: Mistral-Nemo-Japanese-Instruct-2408

### Model Description

This is a Japanese continually pre-trained model based on [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
### Usage

Make sure your `transformers` installation is up to date: `pip install --upgrade transformers`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/Mistral-Nemo-Japanese-Instruct-2408", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/Mistral-Nemo-Japanese-Instruct-2408")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "system", "content": "あなたは親切なAIアシスタントです。"},  # "You are a helpful AI assistant."
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"},  # "How will AI change our lives?"
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,  # required for temperature to take effect
    temperature=0.5,
    streamer=streamer,
)
```
### Prompt Format

ChatML format:

```
<s><|im_start|>system
あなたは親切なAIアシスタントです。<|im_end|>
<|im_start|>user
AIによって私たちの暮らしはどのように変わりますか?<|im_end|>
<|im_start|>assistant
```
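The tokenizer's `apply_chat_template` produces this format automatically; for clients that cannot use it, the prompt string can be assembled by hand. A minimal sketch (`build_chatml_prompt` is a hypothetical helper; the leading `<s>` BOS token is normally added by the tokenizer, so it is omitted here):

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML prompt from {"role": ..., "content": ...} dicts,
    ending with the assistant header so the model continues from there."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "あなたは親切なAIアシスタントです。"},  # "You are a helpful AI assistant."
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"},  # "How will AI change our lives?"
]
print(build_chatml_prompt(messages))
```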
### License

Apache-2.0

### Author

Ryosuke Ishigami

### How to Cite
```bibtex
@misc{cyberagent-mistral-nemo-japanese-instruct-2408,
    title={Mistral-Nemo-Japanese-Instruct-2408},
    url={https://huggingface.co/cyberagent/Mistral-Nemo-Japanese-Instruct-2408},
    author={Ryosuke Ishigami},
    year={2024},
}
```
### Base Model

mistralai/Mistral-Nemo-Base-2407