Tool response formatting

#3
by RonanMcGovern - opened

Thanks for making these models available!

My question is on this portion of the example on the model card:

<|start_header_id|>assistant<|end_header_id|>

<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call><|eot_id|><|start_header_id|>tool<|end_header_id|>

<tool_response>
{"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>

Questions:

  1. How exactly does the model respond to the user question? It would seem surprising to me if it responded with:
<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call>

if so, where is the "call_deok" coming from?

  2. What is the recommended way to pass the tool response back to the model in order for the model to give a useful answer back? Is it like this:
<|start_header_id|>tool<|end_header_id|>

<tool_response>
{"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>

Again, it seems odd to be passing in an id like that.

  3. Does the training dataset include only examples of generating the tool calls, or does it also include full conversations where there are tool calls and the model then uses the tool result to give a text answer?

Thanks.

Groq org

> It would seem surprising to me if it responded with:

Why surprising? The model just makes up an "id"; in practice it usually starts from index 0.

> Is it like this:

Correct, why odd? The ID is needed to distinguish between multiple results that belong to different tool calls.

E.g. this would be valid:

<|start_header_id|>assistant<|end_header_id|>

<tool_call>
{"id":"0","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call><tool_call>
{"id":"1","name":"get_current_weather","arguments":{"location":"Los Angeles","unit":"celsius"}}
</tool_call><|eot_id|>
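For illustration (this continuation is not from the model card), the matching tool turn would presumably carry one <tool_response> per call, keyed by the same ids so the model can match each result to its call; the temperature values here are made up:

<|start_header_id|>tool<|end_header_id|>

<tool_response>
{"id":"0","result":{"temperature":"72","unit":"celsius"}}
</tool_response><tool_response>
{"id":"1","result":{"temperature":"75","unit":"celsius"}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>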

> Does the training dataset include only examples of generating the tool calls, or does it also include full conversations where there are tool calls and the model then uses the tool result to give a text answer?

We don't disclose training data details at this time.

Thanks Rick, appreciate it.

On points 1 and 2, I just hadn't appreciated that the id is a unique identifier that helps the model link tool calls to tool responses. Makes sense now.

On point 3, fair enough. I should have said that the motivation for my question was to understand whether there is a particular way you recommend passing the tool response back so as to get the best response from the model on the next call.

RonanMcGovern changed discussion status to closed
Groq org

> a particular way you recommend passing back the tool response so as to get the best response from the model on the next call

Just the way shown in the text prompt; I'm not sure anything is still unclear about that format for the tool response.
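For anyone landing here later, here is a minimal sketch of that round trip with transformers, building the raw prompt by hand with the special tokens from the model card. It is only a sketch under a few assumptions: the model id, user question, and generation settings are placeholders, and the system turn that lists the tool definitions is omitted for brevity.

```python
# Minimal sketch (not an official snippet): feed a tool response back and
# generate the model's final text answer. The model id, user question, and
# tool result values are illustrative; the system turn with the tool
# definitions is omitted, and the tokenizer prepends <|begin_of_text|> itself.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Groq/Llama-3-Groq-8B-Tool-Use"  # adjust to the checkpoint you use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The model's earlier tool call and the result your code got back from the tool.
tool_call = {"id": "call_deok", "name": "get_current_weather",
             "arguments": {"location": "San Francisco", "unit": "celsius"}}
tool_result = {"id": "call_deok", "result": {"temperature": "72", "unit": "celsius"}}

# Rebuild the conversation in the raw format shown above, ending with an open
# assistant header so the model writes the final text answer next.
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "What is the weather like in San Francisco?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
    f"<tool_call>\n{json.dumps(tool_call)}\n</tool_call><|eot_id|>"
    "<|start_header_id|>tool<|end_header_id|>\n\n"
    f"<tool_response>\n{json.dumps(tool_result)}\n</tool_response><|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, i.e. the assistant's answer.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

From there you append the generated answer and the next user turn in the same format and continue the conversation.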

I think it's clear. Thanks
