
# Trelis Tiny

Trelis Tiny is a 1.3B parameter model:

  • Capable of function calling.
  • High token generation speed whether local or hosted.

The GGUF files are NOT WORKING at present, likely owing to a quantization issue. This is being investigated as of Feb 22nd 2024; an issue is open on GitHub here.

Purchase access to this model here.

  • Purchase includes access to future improvements to the Tiny model that are pushed to this repo.

Function-calling notes:

  • The function metadata format is the same as used for OpenAI.
  • The model is suitable for commercial use.
  • GGUF models are available in 8-bit, 4-bit and 2-bit format in the GGUF branch.
  • For Tiny models, chain at most one function call at a time.

Check out other fine-tuned function calling models here.

Warning: This model is built on the DeepSeek Coder 1.3B base model and has only been subjected to supervised fine-tuning and fine-tuning for function calling. Care is required in using the model as it may generate undesirable output.

# Quick Server Setup

Runpod one-click template here. You must add a Hugging Face Hub access token (HUGGING_FACE_HUB_TOKEN) to the environment variables, as this is a gated model.

Runpod Affiliate Link (helps support the Trelis channel).

You can also run inference on the Trelis Tiny model using llama.cpp and the GGUF files in the gguf branch.

# Inference Scripts

See below for sample prompt format.

Complete inference scripts are available for purchase here:

  • Easily format prompts using tokenizer.apply_chat_template (starting from OpenAI-formatted functions and a list of messages).
  • Automate catching, handling and chaining of function calls.
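The purchasable scripts themselves are not reproduced here, but the catch-and-handle step can be sketched roughly as follows. This is a minimal illustration, assuming a local registry of Python implementations matching the function metadata; the names are illustrative, not the actual scripts:

```python
import json

# Hypothetical local implementation backing the function metadata.
def get_current_weather(city, format="celsius"):
    """Toy stand-in that returns a fixed weather report."""
    return {"temperature": "15 C", "condition": "Cloudy"}

FUNCTION_REGISTRY = {"get_current_weather": get_current_weather}

def try_handle_function_call(model_output):
    """If the model emitted a JSON function call, execute it and
    return the function_response content; otherwise return None."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return None  # plain-text answer, nothing to handle
    fn = FUNCTION_REGISTRY.get(call.get("name"))
    if fn is None:
        return None
    result = fn(**call.get("arguments", {}))
    return json.dumps(result)

response = try_handle_function_call(
    '{"name": "get_current_weather", "arguments": {"city": "London"}}'
)
```

To chain calls, the returned string would be appended as a function_response message and the model queried again; with Tiny models, as noted above, one call at a time is the safe pattern.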

# Prompt Format

```python
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "\n### Instruction:\n", "\n### Response:\n"  # DeepSeek Coder style
prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```
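For concreteness, here is the same assembly with placeholder values filled in (the functionList content is abbreviated; it would normally hold the full JSON function metadata):

```python
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "\n### Instruction:\n", "\n### Response:\n"  # DeepSeek Coder style

# Illustrative values only.
functionList = '[{"type": "function", "function": {"name": "get_current_weather"}}]'
user_prompt = "What is the current weather in London?"

# The final prompt wraps the function list and user message
# between the Instruction and Response markers.
prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```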

## Using tokenizer.apply_chat_template (HIGHLY RECOMMENDED)

For an easier application of the prompt, you can set up as follows:

Set up messages:

```json
[
    {
        "role": "function_metadata",
        "content": "FUNCTION_METADATA"
    },
    {
        "role": "user",
        "content": "What is the current weather in London?"
    },
    {
        "role": "function_call",
        "content": "{\n    \"name\": \"get_current_weather\",\n    \"arguments\": {\n        \"city\": \"London\"\n    }\n}"
    },
    {
        "role": "function_response",
        "content": "{\n    \"temperature\": \"15 C\",\n    \"condition\": \"Cloudy\"\n}"
    },
    {
        "role": "assistant",
        "content": "The current weather in London is Cloudy with a temperature of 15 Celsius"
    }
]
```

with FUNCTION_METADATA as:

```json
[
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "This function gets the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city, e.g., San Francisco"
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use."
                    }
                },
                "required": ["city"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_clothes",
            "description": "This function provides a suggestion of clothes to wear based on the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "temperature": {
                        "type": "string",
                        "description": "The temperature, e.g., 15 C or 59 F"
                    },
                    "condition": {
                        "type": "string",
                        "description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'"
                    }
                },
                "required": ["temperature", "condition"]
            }
        }
    }
]
```
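In Python, the FUNCTION_METADATA placeholder can be filled by serialising the metadata to a JSON string and placing it in the first message. A minimal sketch, using a one-function subset of the metadata above for brevity:

```python
import json

# One-function subset of the metadata above.
function_metadata = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "This function gets the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city, e.g., San Francisco",
                    }
                },
                "required": ["city"],
            },
        },
    }
]

# Serialise the metadata into the function_metadata message;
# the remaining conversation messages follow as usual.
messages = [
    {"role": "function_metadata", "content": json.dumps(function_metadata, indent=4)},
    {"role": "user", "content": "What is the current weather in London?"},
]
```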

and then apply the chat template to get a formatted prompt:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Trelis/Tiny', trust_remote_code=True)

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```

If you are using a gated model, you need to first run:

```shell
pip install huggingface_hub
huggingface-cli login
```

## Manual Prompt

```
### Instruction:
You have access to the following functions. Use them if required:

[
    {
        "type": "function",
        "function": {
            "name": "get_big_stocks",
            "description": "Get the names of the largest N stocks by market cap",
            "parameters": {
                "type": "object",
                "properties": {
                    "number": {
                        "type": "integer",
                        "description": "The number of largest stocks to get the names of, e.g. 25"
                    },
                    "region": {
                        "type": "string",
                        "description": "The region to consider, can be \"US\" or \"World\"."
                    }
                },
                "required": [
                    "number"
                ]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the stock price of an array of stocks",
            "parameters": {
                "type": "object",
                "properties": {
                    "names": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        },
                        "description": "An array of stocks"
                    }
                },
                "required": [
                    "names"
                ]
            }
        }
    }
]

Get the names of the five largest stocks in the US by market cap
### Response:

{
    "name": "get_big_stocks",
    "arguments": {
        "number": 5,
        "region": "US"
    }
}<|end▁of▁sentence|>
```
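The generated function call can then be parsed directly as JSON. A minimal sketch, assuming the raw generation ends with the DeepSeek end-of-sequence token, which is stripped before parsing:

```python
import json

# Raw generation, including the DeepSeek end-of-sequence token.
generated = (
    '{\n    "name": "get_big_stocks",\n    "arguments": {\n'
    '        "number": 5,\n        "region": "US"\n    }\n}'
    '<|end▁of▁sentence|>'
)

# Strip the stop token and surrounding whitespace, then parse.
call = json.loads(generated.split("<|end")[0].strip())
```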

# Dataset
See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function_calling_v3).

# License
This model may be used commercially for inference according to the terms of the DeepSeek license, or for further fine-tuning and inference (but not re-sale or publication of those models). Users may not re-publish or re-sell this model in the same or derivative form (including fine-tunes).

The use of DeepSeek Coder models is further subject to the DeepSeek Coder Model License. See the [DeepSeek LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.