
Model Card for LLaMA 2 Spell Generation

Model Description

This model is a fine-tuned version of llama-2-7b for generating Dungeons & Dragons 5th edition spells.

Prompt Format

The following prompt format, based on the one used by the Alpaca model, was used for fine-tuning:

"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n" \
f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

It is recommended to use the same prompt format at inference time to obtain the best results.
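
For convenience, the prompt can be built with a small helper function. This is a minimal sketch; the build_prompt name is illustrative and not part of this repository:

def build_prompt(instruction: str, response: str = "") -> str:
    # Illustrative helper reproducing the Alpaca-style template above.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n{response}"
    )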

Output Format

The output format for a generated spell should be the following:

Name: 
Level: 
School: 
Classes: 
Casting time: 
Range: 
Duration:
Components: [If no components are required, then this field has a None value]
Material cost: [If there is no "M" character in the Components field, then this field is skipped]
Description:

Example:

Name: The Shadow
Level: 1
School: Evocation
Classes: Bard, Cleric, Druid, Ranger, Sorcerer, Warlock, Wizard
Casting time: 1 Action
Range: Self
Duration: Concentration, Up To 1 Minute
Components: V, S, M
Material cost: a small piece of cloth
Description: You touch a creature within range. The target must make a Dexterity saving throw. On a failed save, the target takes 2d6 psychic damage and is charmed by you. On a successful save, the target takes half as much damage.
At Higher Levels. When you cast this spell using a spell slot of 4th level or higher, the damage increases by 1d6 for each slot level above 1st.
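
To use generated spells programmatically, the text can be parsed back into its fields. The sketch below assumes the output follows the format above; the parse_spell name is illustrative and not part of this repository:

def parse_spell(text: str) -> dict:
    # Field names from the output format above, in order of appearance.
    field_names = ("Name", "Level", "School", "Classes", "Casting time",
                   "Range", "Duration", "Components", "Material cost",
                   "Description")
    spell = {}
    current = None
    for line in text.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in field_names:
            current = key.strip()
            spell[current] = value.strip()
        elif current is not None:
            # Lines without a known "Field:" prefix (e.g. the
            # "At Higher Levels." paragraph) continue the previous field.
            spell[current] += "\n" + line
    return spell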

Example Use

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-elio/spell_generation_llama-2-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

instruction = "Write a spell for the 5th edition of the Dungeons & Dragons game."

# Same Alpaca-style prompt used for fine-tuning, with the response left empty
prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n" \
         f"### Instruction:\n{instruction}\n\n### Response:\n"

tokenized_input = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**tokenized_input, max_length=512)

# Strip the prompt tokens before decoding, so only the generated spell is printed
print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, tokenized_input.input_ids.shape[1]:], skip_special_tokens=True)[0])
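
Greedy decoding (the default above) always produces the same spell for a given prompt. For more varied outputs, sampling parameters can be passed to generate; the values below are illustrative and not tuned for this model:

outputs = model.generate(
    **tokenized_input,
    max_length=512,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.8,   # < 1 sharpens, > 1 flattens the token distribution
    top_p=0.95,        # nucleus sampling: keep the smallest token set with 95% probability mass
)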