system_prompt is not defined
The model card code depends on a mystery string, system_prompt. Please provide its definition.
I used:
system_prompt = """
You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the functions can be used, point it out. If the given question lacks the parameters required by the function, also point it out.
You should only return the function call in the tool call sections.
Your responses should be formatted according to the following roles and structures:
Roles:
- system: Sets the context for the interaction. This includes rules or guidelines for how you should respond.
- user: Represents the input from the human user, including questions and commands.
- assistant: Your responses based on the context provided by the system and user roles.
- ipython: Used for tool calls and outputs from tools.
Prompt Structure:
- Use the special tokens to denote the start and end of messages:
  - <|begin_of_text|>: Start of the prompt.
  - <|end_of_text|>: Indicates the model should stop generating.
  - <|eom_id|>: End of message, used when a tool call is expected.
  - <|eot_id|>: End of turn, indicating the end of interaction.
  - <|python_tag|>: Used when Tool Calling and generating Python code.
Tool Calling:
When a tool call is needed, respond with a structured output in JSON format, including the tool name, description, and parameters.
Ensure that you include the appropriate tags to indicate the type of response (e.g., <|python_tag|> for code).
- Example:
User: "Find me the sales growth rate for company XYZ for the last 3 years."
Assistant: <|python_tag|> sales_growth.calculate(company="XYZ", years=3) <|eom_id|>
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.\n{functions}
""""""
The prompt is based on the Llama 3.1 docs, but the model routinely (though not always) leaves off <|python_tag|>.
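Because of that, I parse the output defensively instead of keying on the tag. Something along these lines (a sketch; the regex is a heuristic, not anything from the repo):

```python
import re

SPECIAL_TOKENS = ("<|python_tag|>", "<|eom_id|>", "<|eot_id|>", "<|end_of_text|>")

def extract_tool_call(raw: str) -> str | None:
    """Pull a function-call string out of the model output, tolerating a
    missing <|python_tag|> prefix. Assumes calls look like name(arg=...)."""
    text = raw
    for token in SPECIAL_TOKENS:
        text = text.replace(token, "")
    # Match a dotted identifier followed by a parenthesized argument list,
    # e.g. sales_growth.calculate(company="XYZ", years=3)
    match = re.search(r"[A-Za-z_][\w.]*\s*\(.*\)", text.strip(), re.DOTALL)
    return match.group(0) if match else None
```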
Question
@DougWare: In the system prompt you have instructed the model to respond in JSON format, but in the example you provide in the system prompt, you have the assistant output a raw Python call tagged with the <|python_tag|> token. Why the discrepancy? Just curious, as I am also trying to deploy this model for a tool-calling application. Thank you.
If I recall correctly, I was looking at Llama because this model is a finetune of it, and because the contents of https://huggingface.co/watt-ai/watt-tool-8B/raw/main/tokenizer.json showed this was the token watt was using. My intent was just to capture what I was doing in case @christlurker chose to revisit the repo here.
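If it helps, a quick way to verify that is to load the tokenizer and look the special tokens up (sketch; it just confirms the IDs exist in this finetune):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("watt-ai/watt-tool-8B")
# If these resolve to real IDs rather than the unknown-token ID, the
# finetune still carries the Llama 3.1 special tokens.
for token in ("<|python_tag|>", "<|eom_id|>", "<|eot_id|>"):
    print(token, tok.convert_tokens_to_ids(token))
```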
I have my doubts about their rank on the leaderboard. I can't find any complete public example of working code.
One of my colleagues ran this model as a plugin in NVIDIA Agent Studio, with tools defined in the system prompt and with tool calls and feedback handled by Agent Studio's infrastructure. His subjective impression was that it worked well.