---
language:
- en
- ja
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# PLaMo-13B-Instruct

## Model Description
PLaMo-13B-Instruct is an instruction-tuned model built on the 8192-context-length version of the PLaMo-13B text generation model. It was fine-tuned on multiple publicly available Japanese datasets. This model is released under the Apache License 2.0.
PLaMo-13B-Instruct Release blog (Japanese)
## Usage
Install the required libraries as follows:
```sh
python -m pip install numpy sentencepiece torch transformers accelerate
```
Then execute the following Python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model. trust_remote_code is required because
# PLaMo ships custom modeling code with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained(
    "pfnet/plamo-13b-instruct",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "pfnet/plamo-13b-instruct",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)


def completion(prompt: str, max_new_tokens: int = 128) -> str:
    # Tokenize the prompt and sample a continuation from the model.
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    generated_ids = model.generate(
        inputs.input_ids,
        eos_token_id=2,
        pad_token_id=3,
        max_new_tokens=max_new_tokens,
        temperature=1.0,
        top_p=0.95,
        top_k=50,
        do_sample=True,
    )
    return tokenizer.decode(
        generated_ids[0],
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )


def generate_prompt(messages: list) -> str:
    # Build an Alpaca-style prompt. The Japanese preamble reads:
    # "Below is an instruction that describes a task, paired with an input
    #  that provides context. Write a response that appropriately completes
    #  the request."
    sep = "\n\n### "
    prompt = [
        "以下はタスクを説明する指示で、文脈を説明した入力とペアになっています。",
        "要求を適切に補完するよう応答を書いてください。",
    ]
    roles = {"instruction": "指示", "response": "応答", "input": "入力"}
    for msg in messages:
        prompt.append(sep + roles[msg["role"]] + ":\n" + msg["content"])
    # End with an empty response header so the model continues from there.
    prompt.append(sep + roles["response"] + ":\n")
    return "".join(prompt)


prompt = generate_prompt([
    {"role": "instruction", "content": "日本の首都はどこですか?"},  # "What is the capital of Japan?"
    # {"role": "input", "content": "..."}  # An extra input (optional)
])
print(completion(prompt, max_new_tokens=128))
```
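For reference, `generate_prompt` renders the example messages above into the following Alpaca-style template (this is a direct trace of the function, not model output); the trailing 応答 ("response") header cues the model to generate its answer:

```
以下はタスクを説明する指示で、文脈を説明した入力とペアになっています。要求を適切に補完するよう応答を書いてください。

### 指示:
日本の首都はどこですか?

### 応答:
```

An optional `input` message adds a 入力 ("input") section before the response header, as hinted by the commented-out line in the snippet. A minimal sketch with hypothetical message contents:

```python
# Hypothetical example: an instruction ("Summarize the following text.")
# plus an input section carrying the passage to summarize.
prompt = generate_prompt([
    {"role": "instruction", "content": "次の文章を要約してください。"},
    {"role": "input", "content": "..."},  # the passage goes here
])
print(completion(prompt, max_new_tokens=128))
```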
## Model Details
- Model size: 13B (a sanity-check sketch follows this list)
- Trained tokens: 1.5T tokens (English: 1.32T tokens, Japanese: 0.18T tokens)
- Tokenizer: SentencePiece tokenizer trained on a subset of the pretraining datasets
- Context length: 8192
- Developed by: Preferred Networks, Inc.
- Model type: Causal decoder-only
- Language(s): Japanese and English
- License: Apache License 2.0
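The sizes above can be checked against the loaded artifacts. A minimal sketch, assuming `model` and `tokenizer` from the Usage section are already in scope:

```python
# Sanity-check the listed sizes (assumes `model` and `tokenizer`
# from the Usage section are loaded).
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e9:.1f}B")  # roughly 13B
print(f"vocabulary size: {len(tokenizer)}")    # SentencePiece vocabulary
```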
## Training Dataset
- databricks-dolly-15k (Japanese translation)
- Anthropic HH-RLHF (Japanese translation, subset)
- OpenAssistant Conversations Dataset (Japanese translation, oasst1)
- Wikinews subset of Izumi-lab llm-japanese-dataset
For details on the pretrained base model, see PLaMo-13B.
## Bias, Risks, and Limitations
PLaMo-13B-Instruct is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, PLaMo-13B-Instruct's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any application of PLaMo-13B-Instruct, developers should perform safety testing and tuning tailored to their specific use of the model.
## How to cite
```bibtex
@online{PLaMoInstruct2023Introducing,
    author  = {{Preferred Networks, Inc.}},
    title   = {PLaMo-13B-Instruct},
    year    = {2023},
    url     = {https://huggingface.co/pfnet/plamo-13b-instruct},
    urldate = {2023-10-26}
}
```