
# Model Card for EdgeRunner-Command-Nested

EdgeRunner-Command-Nested is a large language model designed specifically for handling complex nested function calls. It is initialized from Qwen2.5-7B-Instruct, adopts the Hermes function-call template, and is further trained on a specialized dataset based on TinyAgent. This additional dataset focuses on personal-domain applications, giving the model a robust understanding of the nested function-call scenarios typical of complex user interactions.

## Data Processing Process

1. **Data Augmentation**

   - **TinyAgent Augmentation:** TinyAgent consists of 16 core functions, but each sample only requires a subset of them to work. To ensure diversity and avoid overfitting, we augment every sample with a randomly sampled superset of tools: it always contains the functions the sample actually requires and is drawn from the 16 functions available in TinyAgent. This lets the model generalize across various combinations of functions while still receiving the context each sample needs to execute correctly. A minimal sketch of this sampling appears after this list.

2. **Additional Data Samples**

   - **GPT-4o Augmented Samples:** To further enrich the dataset, we used GPT-4o to synthesize functions in the personal-assistant domain, using a pipeline that automatically generates function definitions, corresponding queries, and execution plans. Each generated sample was then verified for validity to ensure its functional outputs are correct. This pipeline produced approximately 2,000 additional samples. They follow the TinyAgent format but introduce a broader range of new and more complex functions, enriching the model's exposure to diverse scenarios and improving its ability to generalize to function calls beyond TinyAgent's original set.

3. **Data Conversion**

   - **Hermes Format Conversion:** All function-call data, including the TinyAgent data, the newly generated samples, and the function-call data we already had, was converted to the Hermes format. Following TinyAgent's convention, nested calls are expressed with `$` + number placeholders (for example, `$1` refers to the result of the first call), which keeps the representation consistent while making nested dependencies explicit and easy to interpret. Standardizing all data in the Hermes format allows the model to efficiently process and understand function calls across a variety of scenarios.
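
To make the superset augmentation in step 1 concrete, the sketch below shows one way a per-sample tool list could be drawn. It is only an illustration: the `sample_tool_superset` helper, its parameters, and the `min_extra`/`max_extra` bounds are assumptions, not the actual pipeline used to build this dataset.

```python
import random

# Minimal sketch, not the released augmentation pipeline.
# `all_tools` is assumed to be the list of all 16 TinyAgent tool schemas and
# `required_names` the names of the tools a given sample actually calls.
def sample_tool_superset(required_names, all_tools, min_extra=0, max_extra=8, seed=None):
    """Return a shuffled subset of `all_tools` that contains every required tool."""
    rng = random.Random(seed)
    needed = set(required_names)
    required = [t for t in all_tools if t["name"] in needed]
    extras = [t for t in all_tools if t["name"] not in needed]
    k = rng.randint(min_extra, min(max_extra, len(extras)))
    superset = required + rng.sample(extras, k)
    rng.shuffle(superset)  # avoid positional bias toward the required tools
    return superset
```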

## Example Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "edgerunner-ai/EdgeRunner-Command-Nested"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# You can find the full tools list here: https://jsonblob.com/1290566717196328960
tools = [{
    "name": "open_and_get_file_path",
    "description": "Opens the file and returns its path.",
    "parameters": {
        "type": "object",
        "properties": {
            "file_name": {
                "type": "str",
                "description": "The name of the file to open"
            }
        },
        "required": [
            "file_name"
        ]
    }
}, ...]

conversation = [{
    "role": "user",
    "content": "Send an email to Andrea attached with the file 'report.pdf' and ask when he is available for a meeting."
}]

# Render the tool-use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Tokenize the prompt and move it to the model's device before generating.
inputs = tokenizer(tool_use_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
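
The model emits each planned call inside `<tool_call>` tags, as shown in the examples below. One quick way to pull those calls out of the decoded text is sketched here; the `extract_tool_calls` helper is an illustrative assumption, not part of this repository.

```python
import json
import re

def extract_tool_calls(generated_text: str) -> list[dict]:
    """Parse every <tool_call>{...}</tool_call> block into a Python dict."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(block) for block in pattern.findall(generated_text)]

# Example usage on the decoded generation:
# calls = extract_tool_calls(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Arguments such as "$1" refer to the result of the first call in the returned list.
```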

## Examples

### Simple function

#### Single turn

**User:**  
How is the weather today in New York?

**Assistant:**  
I don't have direct access to real-time data like current weather conditions. However, I can help you find out by performing a web search. Let me do that now.

<tool_call>  
{"name": "web_search", "arguments": {"query": "weather today New York"}}  
</tool_call>

#### Multi turn

**User:**  
I want to search for some information about music, can you help me with that?

**Assistant:**  
Of course! What specific information about music are you looking for?

**User:**  
I'm looking for information about the history of rock music.

**Assistant:**  
<tool_call>  
{"name": "web_search", "arguments": {"query": "history of rock music"}}  
</tool_call>

**User:**  
I want to search for some information about music, can you help me with that?

**Assistant:**  
Of course! What specific information about music are you looking for?

**User:**  
I'm looking for information about the history of rock music and also information about Justin Bieber.

**Assistant:**  
<tool_call>  
{"name": "web_search", "arguments": {"query": "history of rock music"}}  
</tool_call>
<tool_call>  
{"name": "web_search", "arguments": {"query": "information about Justin Bieber"}}  
</tool_call>

### Nested functions

#### Success

**User:**  
Create a meeting with Mr.A and Mr.B to discuss Edgerunner on 27/9/2024 at 5:00 PM.

**Assistant:**  
<tool_call>  
{"name": "get_email_address", "arguments": {"name": "Mr.A"}}  
</tool_call>
<tool_call>  
{"name": "get_email_address", "arguments": {"name": "Mr.B"}}  
</tool_call>
<tool_call>  
{"name": "create_calendar_event", "arguments": {"title": "Edgerunner Discussion", "start_date": "2024-09-27 17:00:00", "end_date": "2024-09-27 18:00:00", "location": "", "invitees": ["$1", "$2"], "notes": "", "calendar": null}}  
</tool_call>

**User:**  
Generate an image with the prompt 'A beautiful landscape with mountains and a lake' and send it to the Discord server with the message 'Here is a beautiful landscape'.

**Assistant:**  
<tool_call>  
{"name": "generate_image", "arguments": {"prompt": "A beautiful landscape with mountains and a lake"}}  
</tool_call>
<tool_call>  
{"name": "send_to_discord", "arguments": {"message": "Here is a beautiful landscape.", "attachments": ["$1"]}}  
</tool_call>

**User:**  
Send the directions to the Eiffel Tower from the Louvre Museum to Mr. C and Mr. D with an SMS message.

**Assistant:**  
<tool_call>  
{"name": "get_phone_number", "arguments": {"name": "Mr.C"}}  
</tool_call>
<tool_call>  
{"name": "get_phone_number", "arguments": {"name": "Mr.D"}}  
</tool_call>
<tool_call>  
{"name": "maps_show_directions", "arguments": {"start_location": "Louvre Museum", "end_location": "Eiffel Tower", "transport": "d"}}  
</tool_call>
<tool_call>  
{"name": "send_sms", "arguments": {"recipients": ["$1", "$2"], "message": "Here are the directions to get from Louvre Museum to Eiffel Tower: $3"}}  
</tool_call>

**User:**  
Send an email to Mr. A attached with the file 'report.pdf' and ask when he is available for a meeting.

**Assistant:**  
<tool_call>  
{"name": "get_email_address", "arguments": {"name": "Mr.A"}}  
</tool_call>
<tool_call>  
{"name": "open_and_get_file_path", "arguments": {"file_name": "report.pdf"}}  
</tool_call>
<tool_call>  
{"name": "compose_new_email", "arguments": {"recipients": ["$1"], "cc": [], "subject": "Availability for Meeting", "context": "Please find the attached report. When are you available for a meeting?", "attachments": ["$2"]}}  
</tool_call>

#### Failed

**User:**  
I want to generate an image and send it to Discord. Can you help me with that?

**Assistant:**  
Of course! Could you please provide me with a description or prompt for the image you would like to generate?

**User:**  
I want to generate an image of a cat with a gray background.

**Assistant:**  
<tool_call>  
{"name": "generate_image", "arguments": {"prompt": "a cat with a gray background"}}  
</tool_call>
<tool_call>  
{"name": "send_to_discord", "arguments": {"message": "", "attachments": ["image.png"]}}  
</tool_call>
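
The plan above fails the nested dependency: the second call passes the literal file name "image.png" instead of the `$1` placeholder that refers to the output of `generate_image`. For contrast (this is not model output), a correct second call would look like:

<tool_call>  
{"name": "send_to_discord", "arguments": {"message": "Here is the image you requested.", "attachments": ["$1"]}}  
</tool_call>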

## Benchmarks

### Berkeley Function Calling Benchmark Results

| Test Name | EdgeRunner-Command-Nested | Qwen/Qwen2.5-7B-Instruct |
|---|---|---|
| multiple_function | 0.95 | 0.81 |
| parallel_multiple_function | 0.84 | 0.655 |
| parallel_function | 0.86 | 0.69 |
| simple | 0.96 | 0.9275 |
| irrelevance | 0.74 | 0.625 |