---
library_name: transformers
tags: []
---


# How to use

We recommend running this model on at least four A100 cards.

## Hugging Face

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("lightblue/ao-karasu-72B")
# device_map="auto" shards the 72B model across all visible GPUs;
# loading in bfloat16 keeps the weights within 4 x A100 memory
model = AutoModelForCausalLM.from_pretrained(
    "lightblue/ao-karasu-72B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})

# Render the chat messages into a single prompt string
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)

# do_sample=False selects greedy decoding, so no temperature is needed
pipe(prompt, max_new_tokens=100, do_sample=False, return_full_text=False)
```
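If four A100s are not available, quantized loading may let the model fit on less hardware. The snippet below is a minimal sketch, not something this card itself recommends: it assumes `bitsandbytes` is installed and uses 4-bit NF4 quantization via `BitsAndBytesConfig`, which trades some output quality for memory.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization (assumes `pip install bitsandbytes`); roughly
# quarters the memory footprint of the 72B weights
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("lightblue/ao-karasu-72B")
model = AutoModelForCausalLM.from_pretrained(
    "lightblue/ao-karasu-72B",
    quantization_config=quant_config,
    device_map="auto",
)
```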

## vLLM

```python
from vllm import LLM, SamplingParams

# Greedy decoding, capped at 100 new tokens
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
# Shard the 72B model across 4 GPUs
llm = LLM(model="lightblue/ao-karasu-72B", tensor_parallel_size=4)

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
# get_tokenizer() returns the underlying Hugging Face tokenizer
prompt = llm.get_tokenizer().apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

# Training details

- English dev blog

- Japanese dev blog

## Training data

Roughly 20 million characters sampled from a dataset of more than 1.1 billion characters, which was made up of the following (a rough share breakdown is sketched in code after the list):

- ~450 million characters from Wikipedia-based QA (same as Qarasu)

- ~200 million characters from technical blogs (new)

- ~200 million characters from Japanese QA site answers (new)

- ~100 million characters from LLM generated prompts and responses (same as Qarasu)

- ~70 million characters from news articles (new)
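For orientation, a back-of-the-envelope sketch of each source's share of the mix, computed from the approximate character counts listed above (the counts are this card's approximations, so the percentages are equally rough):

```python
# Approximate character counts (in millions) from the list above
sources = {
    "Wikipedia-based QA": 450,
    "Technical blogs": 200,
    "Japanese QA site answers": 200,
    "LLM generated prompts and responses": 100,
    "News articles": 70,
}

# Sums to ~1,020M; the listed sources cover most of the ~1.1B-character dataset
total = sum(sources.values())
for name, chars in sources.items():
    print(f"{name}: {chars / total:.1%}")
```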

## Training schedule

Trained for ~1 day on an A100 (80GB) GPU.