Add tool use template

#13
by Rocketknight1 - opened
Files changed (2)
  1. README.md +59 -37
  2. tokenizer_config.json +10 -1
README.md CHANGED
@@ -101,58 +101,80 @@ To utilize the prompt format without a system prompt, simply leave the line out.
 
 ## Prompt Format for Function Calling
 
- Our model was trained on specific system prompts and structures for Function Calling.
-
- You should use the system role with this message, followed by a function signature json as this example shows here.
- ```
- <|im_start|>system
- You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\n\n Args:\n symbol (str): The stock symbol.\n\n Returns:\n dict: A dictionary containing fundamental data.\n Keys:\n - 'symbol': The stock symbol.\n - 'company_name': The long name of the company.\n - 'sector': The sector to which the company belongs.\n - 'industry': The industry to which the company belongs.\n - 'market_cap': The market capitalization of the company.\n - 'pe_ratio': The forward price-to-earnings ratio.\n - 'pb_ratio': The price-to-book ratio.\n - 'dividend_yield': The dividend yield.\n - 'eps': The trailing earnings per share.\n - 'beta': The beta value of the stock.\n - '52_week_high': The 52-week high price of the stock.\n - '52_week_low': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
- <tool_call>
- {"arguments": <args-dict>, "name": <function-name>}
- </tool_call><|im_end|>
 ```
 
- To complete the function call, create a user prompt that follows the above system prompt, like so:
- ```
- <|im_start|>user
- Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
 ```
 
 The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
 ```
- <|im_start|>assistant
 <tool_call>
- {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
 </tool_call><|im_end|>
 ```
 
- Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
 ```
- <|im_start|>tool
- <tool_response>
- {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
- </tool_response>
- <|im_end|>
 ```
 
- The assistant will then read in that data from the function's response, and generate a natural language response:
 ```
- <|im_start|>assistant
- The stock fundamentals data for Tesla (TSLA) are as follows:
- - **Symbol**: TSLA
- - **Company Name**: Tesla, Inc.
- - **Sector**: Consumer Cyclical
- - **Industry**: Auto Manufacturers
- - **Market Capitalization**: $566,160,130,480
- - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- - **Price-to-Book Ratio (PB Ratio)**: 9.04
- - **Dividend Yield**: N/A
- - **Trailing Earnings Per Share (EPS)**: $4.3
- - **Beta Value of the Stock**: 2.42
- - **52-Week High Price of the Stock**: $299.29
- - **52-Week Low Price of the Stock**: $152.37
-
- This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
 ```
 
 ## Prompt Format for JSON Mode / Structured Outputs
 
 
 ## Prompt Format for Function Calling
 
+ Our model was trained on specific system prompts and structures for Function Calling. These are handled by the `tool_use` chat template. To use this template,
+ first define a list of tool functions. It's okay if these are dummy functions - what matters is their name, type hints, and docstring, as these will be
+ extracted and made available to the model:
 
+ ```python
+ def get_current_temperature(location: str, unit: str) -> float:
+     """
+     Get the current temperature at a location.
+
+     Args:
+         location: The location to get the temperature for, in the format "City, Country"
+         unit: The unit to return the temperature in. (choices: ["celsius", "fahrenheit"])
+     Returns:
+         The current temperature at the specified location in the specified units, as a float.
+     """
+     return 22.  # A real function should probably actually get the temperature!
+
+ def get_current_wind_speed(location: str) -> float:
+     """
+     Get the current wind speed in km/h at a given location.
+
+     Args:
+         location: The location to get the wind speed for, in the format "City, Country"
+     Returns:
+         The current wind speed at the given location in km/h, as a float.
+     """
+     return 6.  # A real function should probably actually get the wind speed!
+
+ tools = [get_current_temperature, get_current_wind_speed]
 ```
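
For a peek at what actually gets extracted from these functions, a minimal sketch using `transformers.utils.get_json_schema`, which (in recent `transformers` versions; the exact version floor is an assumption) is the utility `apply_chat_template` uses when it is passed callables:

```python
# Sketch: inspect the JSON schema built from a tool function's type hints and
# docstring. Uses get_current_temperature as defined above; assumes a recent
# transformers release where get_json_schema is available.
from transformers.utils import get_json_schema

schema = get_json_schema(get_current_temperature)
print(schema["function"]["name"])        # "get_current_temperature"
print(schema["function"]["parameters"])  # JSON schema derived from the hints and Args section
```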
 
+ Now, prepare a chat and apply the chat template, then generate the model's response:
+
+ ```python
+ messages = [
+     {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
+ ]
+
+ inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
+ inputs = {k: v.to(model.device) for k, v in inputs.items()}
+ out = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
 ```
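
The snippet above assumes `tokenizer` and `model` are already loaded; a minimal setup sketch (the checkpoint name here is an assumption, substitute the repo this PR targets):

```python
# Sketch: load the tokenizer and model used in the snippets above.
# The checkpoint id is assumed, not taken from this PR.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "NousResearch/Hermes-2-Pro-Llama-3-8B"  # assumption
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```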
 
 The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
+
 ```
 <tool_call>
+ {"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}
 </tool_call><|im_end|>
 ```
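
One way to recover the call from the decoded text, as a sketch (the Hermes-Function-Calling repo linked above has the reference parsing code):

```python
# Sketch: pull the JSON payload out of the <tool_call> block in the decoded output.
# Illustrative only - not the PR's parser.
import json
import re

decoded = tokenizer.decode(out[0][len(inputs["input_ids"][0]):])
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", decoded, re.DOTALL)
parsed = json.loads(match.group(1)) if match else None
# parsed == {"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}
```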
 
+ Once you parse the tool call, add it to the chat as an `assistant` response, using the `tool_calls` key, then append the tool output
+ as a response with the `tool` role:
+
+ ```python
+ tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France", "unit": "celsius"}}
+ messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
+ messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
 ```
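
If you parsed the call programmatically, as in the earlier sketch, the same two appends can be driven by the parsed dict rather than hardcoded values (a sketch; `parsed` comes from the parsing snippet above):

```python
# Sketch: dispatch the parsed call to the matching Python tool and append both turns.
# Assumes `parsed` from the parsing sketch; the result is stringified for the chat.
registry = {f.__name__: f for f in tools}
result = registry[parsed["name"]](**parsed["arguments"])
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": parsed}]})
messages.append({"role": "tool", "name": parsed["name"], "content": str(result)})
```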
+
+ Now you can apply the chat template again to format the conversation, and generate a response from the model:
+
+ ```python
+ inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
+ inputs = {k: v.to(model.device) for k, v in inputs.items()}
+ out = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
 ```
 
+ and we get:
+
 ```
+ The current temperature in Paris, France is 22.0 degrees Celsius.<|im_end|>
 ```
 
 ## Prompt Format for JSON Mode / Structured Outputs
tokenizer_config.json CHANGED
@@ -2306,7 +2306,16 @@
     }
   },
   "bos_token": "<|begin_of_text|>",
-  "chat_template": "{{bos_token}}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "clean_up_tokenization_spaces": true,
   "eos_token": "<|im_end|>",
   "model_input_names": [
     "input_ids",
 
     }
   },
   "bos_token": "<|begin_of_text|>",
+  "chat_template": [
+    {
+      "name": "default",
+      "template": "{{bos_token}}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
+    },
+    {
+      "name": "tool_use",
+      "template": "{%- macro json_to_python_type(json_spec) %}\n{%- set basic_type_map = {\n \"string\": \"str\",\n \"number\": \"float\",\n \"integer\": \"int\",\n \"boolean\": \"bool\"\n} %}\n\n{%- if basic_type_map[json_spec.type] is defined %}\n {{- basic_type_map[json_spec.type] }}\n{%- elif json_spec.type == \"array\" %}\n {{- \"list[\" + json_to_python_type(json_spec|items) + \"]\"}}\n{%- elif json_spec.type == \"object\" %}\n {%- if json_spec.additionalProperties is defined %}\n {{- \"dict[str, \" + json_to_python_type(json_spec.additionalProperties) + ']'}}\n {%- else %}\n {{- \"dict\" }}\n {%- endif %}\n{%- elif json_spec.type is iterable %}\n {{- \"Union[\" }}\n {%- for t in json_spec.type %}\n {{- json_to_python_type({\"type\": t}) }}\n {%- if not loop.last %}\n {{- \",\" }} \n {%- endif %}\n {%- endfor %}\n {{- \"]\" }}\n{%- else %}\n {{- \"Any\" }}\n{%- endif %}\n{%- endmacro %}\n\n\n{{- bos_token }}\n{{- \"You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> \" }}\n{%- for tool in tools %}\n {%- if tool.function is defined %}\n {%- set tool = tool.function %}\n {%- endif %}\n {{- '{\"type\": \"function\", \"function\": ' }}\n {{- '{\"name\": ' + tool.name + '\", ' }}\n {{- '\"description\": \"' + tool.name + '(' }}\n {%- for param_name, param_fields in tool.parameters.properties|items %}\n {{- param_name + \": \" + json_to_python_type(param_fields) }}\n {%- if not loop.last %}\n {{- \", \" }}\n {%- endif %}\n {%- endfor %}\n {{- \")\" }}\n {%- if tool.return is defined %}\n {{- \" -> \" + json_to_python_type(tool.return) }}\n {%- endif %}\n {{- \" - \" + tool.description + \"\\n\\n\" }}\n {%- for param_name, param_fields in tool.parameters.properties|items %}\n {%- if loop.first %}\n {{- \" Args:\\n\" }}\n {%- endif %}\n {{- \" \" + param_name + \"(\" + json_to_python_type(param_fields) + \"): \" + param_fields.description|trim }}\n {%- endfor %}\n {%- if tool.return is defined and tool.return.description is defined %}\n {{- \"\\n Returns:\\n \" + tool.return.description }}\n {%- endif %}\n {{- '\"' }}\n {{- ', \"parameters\": ' }}\n {%- if tool.parameters.properties | length == 0 %}\n {{- \"{}\" }}\n {%- else %}\n {{- tool.parameters|tojson }}\n {%- endif %}\n {{- \"}\" }}\n {%- if not loop.last %}\n {{- \"\\n\" }}\n {%- endif %}\n{%- endfor %}\n{{- \" </tools>\" }}\n{{- 'Use the following pydantic model json schema for each tool call you will make: {\"properties\": {\"arguments\": {\"title\": \"Arguments\", \"type\": \"object\"}, \"name\": {\"title\": \"Name\", \"type\": \"string\"}}, \"required\": [\"arguments\", \"name\"], \"title\": \"FunctionCall\", \"type\": \"object\"}\n' }}\n{{- \"For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:\n\" }}\n{{- \"<tool_call>\n\" }}\n{{- '{\"arguments\": <args-dict>, \"name\": <function-name>}\n' }}\n{{- '</tool_call><|im_end|>' }}\n{%- for message in messages %}\n {%- if message.role == \"user\" or message.role == \"system\" or (message.role == \"assistant\" and message.tool_calls is not defined) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role + '\\n<tool_call>\\n' }}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '{ ' }}\n {%- if tool_call.arguments is defined %}\n {{- '\"arguments\": ' }}\n {{- tool_call.arguments|tojson }}\n {{- ', '}}\n {%- endif %}\n {{- '\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\"}' }}\n {{- '\\n</tool_call> ' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if not message.name is defined %}\n {{- raise_exception(\"Tool response dicts require a 'name' key indicating the name of the called function!\") }}\n {%- endif %}\n {{- '<|im_start|>' + message.role + '\\n<tool_response>\\n' }}\n {{- '{\"name\": \"' }}\n {{- message.name }}\n {{- '\", \"content\": ' }}\n {{- message.content|tojson + '}' }}\n {{- '\\n</tool_response> <|im_end|>\\n' }} \n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n"
+    }
+  ],
   "eos_token": "<|im_end|>",
   "model_input_names": [
     "input_ids",