|
--- |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
tags: |
|
- fireplace |
|
- function-calling |
|
- code |
|
- code-instruct |
|
- valiant |
|
- valiant-labs |
|
- llama |
|
- llama-2 |
|
- llama-2-chat |
|
- 13b |
|
model_type: llama |
|
license: apache-2.0 |
|
--- |
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/qg49GOlx8zogDOrMTnb89.jpeg) |
|
|
|
|
|
Fireplace-13b is a function calling model built on the Llama 2 architecture. |
|
- Built on the llama-2-13b architecture, using [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) as the base model.
|
- Emphasizes function calling and code-instruct as skills. |
|
- Version 1.1 improves output structure for a superior user experience. |
|
|
|
(If you're looking for a friendly general-purpose chat model, try ours: [Shining Valiant XS (13b)](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [Shining Valiant (70b)](https://huggingface.co/ValiantLabs/ShiningValiant))
|
|
|
## Version |
|
|
|
This is Version **1.1** of Fireplace-13b. |
|
|
|
The current version of Fireplace-13b uses [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) trained on [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2). |
|
|
|
Fireplace is the first release in our Build Tools campaign, which delivers helpful open-source capabilities for users and creators.
|
|
|
**The next model in our Build Tools series is coming soon, with an initial release at 70b parameters** - we're very excited to bring this to everyone!
|
|
|
We're also working to bring Fireplace to larger model architectures, to maximize baseline model capability and function-calling performance. |
|
|
|
## Prompting Guide |
|
Fireplace-13b specializes in function calling and code instruct/chat. |
|
|
|
See [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for the code capabilities of the base model.
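To experiment with the formats below, a minimal loading sketch with transformers might look like this (the repo id `ValiantLabs/Fireplace-13b` is assumed; adjust dtype and device settings for your hardware):

```python
# Minimal loading sketch; "ValiantLabs/Fireplace-13b" is an assumed repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Fireplace-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~26GB of weights in fp16; quantize for smaller GPUs
    device_map="auto",
)
```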
|
|
|
For function calling in this version of the model, the recommended format is to deliver the function(s) in a system message and then proceed with chat: |
|
|
|
SYSTEM: You are Fireplace, an expert code assistant with access to the following functions. Use them if required - |
|
{

"name": "function_name"

}
|
|
|
USER: Can you (do thing from function)? |
|
|
|
ASSISTANT: |
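As a rough illustration (not an exact training template), here is a hedged sketch that assembles these turns into a single prompt and generates with the model and tokenizer loaded above; the `get_weather` function definition and the user request are hypothetical:

```python
import json

# Hypothetical function definition, for illustration only; real definitions
# follow the glaive-function-calling-v2 schema.
function_spec = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

prompt = (
    "SYSTEM: You are Fireplace, an expert code assistant with access to the "
    "following functions. Use them if required -\n"
    f"{json.dumps(function_spec)}\n"
    "USER: Can you check the weather in Toronto?\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
generated_text = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False
)
print(generated_text)
```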
|
|
|
The assistant delivers its function calls between `<functioncall>` and `<|endoftext|>`:
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/rpfkQKAS0E3483Qxn1HIF.png) |
|
|
|
|
|
(Please note that `<|endoftext|>` is not an EOS/EOT token; it is used specifically to mark the end of a function call.)
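One way to pull the call back out of the generated text is to slice whatever sits between those two markers; a hedged sketch (exact whitespace and argument escaping in real outputs may vary):

```python
import json
import re

def extract_function_call(generated_text: str):
    """Return the function call found between <functioncall> and <|endoftext|>, if any."""
    match = re.search(r"<functioncall>(.*?)<\|endoftext\|>", generated_text, re.DOTALL)
    if match is None:
        return None
    payload = match.group(1).strip()
    try:
        return json.loads(payload)  # e.g. {"name": "...", "arguments": "..."}
    except json.JSONDecodeError:
        return payload  # fall back to the raw string if it isn't clean JSON

call = extract_function_call(generated_text)
```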
|
|
|
To pass the result of a function call back to the model, append "FUNCTION RESPONSE: " followed by the result to the existing chat history:
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/2bKX9Zsk6pHJxKYqEprcq.png) |
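A hedged sketch of that round trip, continuing from the earlier snippets (the function itself is executed by your own code; the weather result below is made up):

```python
# Made-up result from executing the requested function in your own code.
function_result = {"city": "Toronto", "temperature_c": -4, "conditions": "snow"}

# Append the function response to the running chat history and generate again.
chat_history = (
    prompt
    + generated_text
    + f"\nFUNCTION RESPONSE: {json.dumps(function_result)}\n"
)
inputs = tokenizer(chat_history, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```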
|
|
|
|
|
Fireplace is optimized for function calling and code capabilities rather than general chat, but it has also been trained on general instruct-chat data and can be prompted with a standard assistant format:
|
|
|
|
|
SYSTEM: You are a helpful assistant. |
|
|
|
USER: user chat input |
|
|
|
ASSISTANT: |
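The same generation call works for this plain chat format; a brief sketch reusing the model and tokenizer loaded earlier (the user question is only an example):

```python
chat_prompt = (
    "SYSTEM: You are a helpful assistant.\n"
    "USER: Explain the difference between a list and a tuple in Python.\n"
    "ASSISTANT:"
)
inputs = tokenizer(chat_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```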
|
|
|
|
|
The model may be subject to errors and limitations, including those of the base model and dataset. We offer Fireplace-13b as open source for all to use. The user is responsible for all outputs. |
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg) |
|
|
|
|
|
Fireplace is created by [Valiant Labs.](http://valiantlabs.ca/) |
|
|
|
Try our flagship chat model, [Shining Valiant!](https://huggingface.co/ValiantLabs/ShiningValiant) |
|
|
|
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs) |
|
|
|
We care about open source. |
|
For everyone to use. |
|
|
|
We encourage others to finetune further from our models. |