---
license: mit
language:
- en
---

# NPC Model

This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3-128k**, using LoRA.

This model parses a text description of a game scene, and outputs commands like:

* `say <character> "Hello Adventurer, care to join me on a quest?"`
* `greet <character>`
* `attack <character>`
* Any other `<action> <parameters>` you add to the prompt! (We call these "skills"!)

⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance. ⚠️

## Usage

**Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)**

* Instantiating the model using outlines:

```py
from outlines import models
from gigax.step import NPCStepper
from llama_cpp import Llama

# Download the model from the Gigax Hugging Face Hub before running this code
llm = Llama(
    model_path="./path/to/model/npc-llm-3_8B-128k.gguf",
    # n_gpu_layers=-1, # Uncomment to use GPU acceleration
)

# Our stepper takes in an Outlines model to enable guided generation
# This forces the model to follow our output format
model = models.LlamaCpp(llm)

# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```

* Calling the model on your game's data:

```py
from gigax.parse import CharacterAction
from gigax.scene import (
    Character,
    Item,
    Location,
    ProtagonistCharacter,
    Skill,
    ParameterType,
)

# Use sample data
context = "A vast open world full of mystery and adventure."
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location]
NPCs = [
    Character(
        name="John the Brave",
        description="A fearless warrior",
        current_location=current_location,
    )
]
protagonist = ProtagonistCharacter(
    name="Aldren",
    description="Brave and curious",
    current_location=current_location,
    memories=["Saved the village", "Lost a friend"],
    quests=["Find the ancient artifact", "Defeat the evil warlock"],
    skills=[
        Skill(
            name="Attack",
            description="Deliver a powerful blow",
            parameter_types=[ParameterType.character],
        )
    ],
    psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
    CharacterAction(
        command="Say",
        protagonist=protagonist,
        parameters=[items[0], "What a fine sword!"],
    )
]

action = stepper.get_action(
    context=context,
    locations=locations,
    NPCs=NPCs,
    protagonist=protagonist,
    items=items,
    events=events,
)
```

## Input prompt

Here's a sample input prompt, showing you the format on which the model has been trained:

```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow

Aldren:
```

### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗

## Model info

- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model:** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!

## How to Cite

```bibtex
@misc{NPC-LLM-3_8B,
  url={https://huggingface.co/Gigax/NPC-LLM-3_8B},
  title={NPC-LLM-3_8B},
  author={Gigax team}
}
```
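
## Prompting the model without the client library

If you'd rather not use the client library, you can feed the prompt format above to the GGUF directly with `llama-cpp-python`. Below is a minimal sketch, assuming the same local model path as in the usage example and that a single newline-terminated command is the desired completion; note that `NPCStepper` additionally applies guided generation and output parsing, so raw completions may occasionally drift from the command format.

```py
from llama_cpp import Llama

# Assumed local path; download the GGUF from the Gigax Hugging Face Hub first
llm = Llama(model_path="./path/to/model/npc-llm-3_8B-128k.gguf")

# The sample prompt from the "Input prompt" section above
prompt = """- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow

Aldren:"""

# Generate a single command and stop at the first newline
output = llm(prompt, max_tokens=32, stop=["\n"], temperature=0.0)
print(output["choices"][0]["text"].strip())  # e.g. a command using one of the allowed actions
```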