Rocketknight1 (HF staff) committed on
Commit bef895b
1 Parent(s): e75a97e

Update README.md

Files changed (1)
  1. README.md +59 -37
README.md CHANGED
@@ -101,58 +101,80 @@ To utilize the prompt format without a system prompt, simply leave the line out.
 
  ## Prompt Format for Function Calling
 
- Our model was trained on specific system prompts and structures for Function Calling.
 
- You should use the system role with this message, followed by a function signature JSON, as this example shows:
- ```
- <|im_start|>system
- You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
- <tool_call>
- {"arguments": <args-dict>, "name": <function-name>}
- </tool_call><|im_end|>
  ```
 
- To complete the function call, create a user prompt that follows the above system prompt, like so:
- ```
- <|im_start|>user
- Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
  ```
 
  The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
  ```
- <|im_start|>assistant
  <tool_call>
- {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
  </tool_call><|im_end|>
  ```
 
- Once you parse the tool call, call the API to get the returned values, and pass them back in as a new role, `tool`, like so:
  ```
- <|im_start|>tool
- <tool_response>
- {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
- </tool_response>
- <|im_end|>
  ```
 
- The assistant will then read in that data from the function's response, and generate a natural language response:
  ```
- <|im_start|>assistant
- The stock fundamentals data for Tesla (TSLA) are as follows:
- - **Symbol**: TSLA
- - **Company Name**: Tesla, Inc.
- - **Sector**: Consumer Cyclical
- - **Industry**: Auto Manufacturers
- - **Market Capitalization**: $566,160,130,480
- - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- - **Price-to-Book Ratio (PB Ratio)**: 9.04
- - **Dividend Yield**: N/A
- - **Trailing Earnings Per Share (EPS)**: $4.3
- - **Beta Value of the Stock**: 2.42
- - **52-Week High Price of the Stock**: $299.29
- - **52-Week Low Price of the Stock**: $152.37
-
- This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
  ```
 
  ## Prompt Format for JSON Mode / Structured Outputs

 
  ## Prompt Format for Function Calling
 
+ Our model was trained on specific system prompts and structures for Function Calling. These are handled by the `tool_use` chat template. To use this template,
+ first define a list of tool functions. It's okay if these are dummy functions - what matters is their names, type hints, and docstrings, as these will be
+ extracted and made available to the model:
 
+ ```python
+ def get_current_temperature(location: str, unit: str) -> float:
+     """
+     Get the current temperature at a location.
+
+     Args:
+         location: The location to get the temperature for, in the format "City, Country"
+         unit: The unit to return the temperature in. (choices: ["celsius", "fahrenheit"])
+     Returns:
+         The current temperature at the specified location in the specified units, as a float.
+     """
+     return 22.  # A real function should probably actually get the temperature!
+
+ def get_current_wind_speed(location: str) -> float:
+     """
+     Get the current wind speed in km/h at a given location.
+
+     Args:
+         location: The location to get the wind speed for, in the format "City, Country"
+     Returns:
+         The current wind speed at the given location in km/h, as a float.
+     """
+     return 6.  # A real function should probably actually get the wind speed!
+
+ tools = [get_current_temperature, get_current_wind_speed]
  ```
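+ 
+ The snippets below assume that `model` and `tokenizer` have already been loaded. A minimal setup sketch might look like the following; the checkpoint name is a placeholder, so substitute this repo's actual model ID:
+ 
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ # Placeholder checkpoint ID - replace with this repo's model ID
+ checkpoint = "your-org/your-hermes-checkpoint"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")
+ ```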
 
+ Now, prepare a chat and apply the chat template, then generate the model's response:
+ 
+ ```python
+ messages = [
+     {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
+ ]
+ 
+ inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
+ inputs = {k: v.to(model.device) for k, v in inputs.items()}
+ out = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
  ```
 
  The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
+ 
  ```
  <tool_call>
+ {"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}
  </tool_call><|im_end|>
  ```
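+ 
+ One way to parse this output (a sketch; the Hermes-Function-Calling repo linked above has more complete parsing code) is to extract the JSON between the `<tool_call>` tags:
+ 
+ ```python
+ import json
+ import re
+ 
+ def parse_tool_call(text: str):
+     # Pull the JSON payload out of the <tool_call>...</tool_call> tags
+     match = re.search(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL)
+     return json.loads(match.group(1)) if match else None
+ ```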
 
+ Once you parse the tool call, add it to the chat as an `assistant` response, using the `tool_calls` key, then append the tool output
+ as a response with the `tool` role:
+ 
+ ```python
+ tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France", "unit": "celsius"}}
+ messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
+ messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
  ```
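+ 
+ The `tool` message content is hard-coded above for illustration. In practice, you would dispatch the parsed call to the matching Python function, along these lines:
+ 
+ ```python
+ # Look up the requested function by name and call it with the model's arguments
+ available_functions = {fn.__name__: fn for fn in tools}
+ result = available_functions[tool_call["name"]](**tool_call["arguments"])  # 22.0
+ messages[-1]["content"] = str(result)
+ ```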
+ 
+ Now you can apply the chat template again to format the conversation, and generate a response from the model:
+ 
+ ```python
+ inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
+ inputs = {k: v.to(model.device) for k, v in inputs.items()}
+ out = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
  ```
 
+ and we get:
+ 
  ```
+ The current temperature in Paris, France is 22.0 degrees Celsius.<|im_end|>
  ```
 
  ## Prompt Format for JSON Mode / Structured Outputs