---
language:
- en
pipeline_tag: text-generation
tags:
- fireplace
- function-calling
- code
- code-instruct
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 13b
model_type: llama
license: apache-2.0
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/qg49GOlx8zogDOrMTnb89.jpeg)
Fireplace-13b is a function-calling model built on the Llama 2 architecture.
- Built on llama-2-13b, using [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) as the base model.
- Emphasizes function calling and code-instruct as core skills.
- Version 1.1 improves output structure for a better user experience.
(If you're looking for a friendly general-purpose chat model, try our Shining Valiant models at [13b](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [70b](https://huggingface.co/ValiantLabs/ShiningValiant).)
## Version
This is Version **1.1** of Fireplace-13b.
The current version of Fireplace-13b uses [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) trained on [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).
Fireplace is the first release in our Build Tools campaign, to deliver helpful open source capabilities for users and creators.
**The next release in our Build Tools series is coming soon, with an initial release at 70b parameters** - we're very excited to bring this to everyone!
We're also working to bring Fireplace to larger model architectures, to maximize baseline model capability and function-calling performance.
## Prompting Guide
Fireplace-13b specializes in function calling and code instruct/chat.
See [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for the code capabilities of the base model.
For function calling in this version of the model, the recommended format is to deliver the function(s) in a system message and then proceed with chat:
```
SYSTEM: You are Fireplace, an expert code assistant with access to the following functions. Use them if required -
{
  "name": "function_name"
}
USER: Can you (do thing from function)?
ASSISTANT:
```
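The prompt format above can be assembled programmatically. A minimal sketch, assuming a standard JSON-schema-style function definition (the `get_weather` function and the helper name are illustrative, not part of this model card):

```python
import json

def build_prompt(functions, user_message):
    """Assemble a Fireplace-13b prompt: function definitions go in the
    system message, followed by the user turn and an open assistant turn."""
    system = (
        "You are Fireplace, an expert code assistant with access to "
        "the following functions. Use them if required -\n"
        + "\n".join(json.dumps(f, indent=2) for f in functions)
    )
    return f"SYSTEM: {system}\nUSER: {user_message}\nASSISTANT:"

# Illustrative function definition
weather_fn = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

prompt = build_prompt([weather_fn], "What's the weather in Toronto?")
```

The full prompt string is then tokenized and sent to the model as one sequence, with a new `ASSISTANT:` turn left open for the completion.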
The assistant delivers function call responses between \<functioncall> and <|endoftext|>:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/rpfkQKAS0E3483Qxn1HIF.png)
(Please note that <|endoftext|> is not an EOS/EOT token, it is used to indicate the end of function call responses specifically.)
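Since <|endoftext|> is not an EOS/EOT token, the caller has to detect the function-call span in the raw completion. One way to sketch this, assuming the payload between the two markers is JSON (the exact payload shape is not specified in this card):

```python
import json
import re

def extract_function_call(completion):
    """Pull the JSON payload emitted between <functioncall> and
    <|endoftext|> out of the model's raw completion, if present."""
    m = re.search(r"<functioncall>\s*(.*?)\s*<\|endoftext\|>",
                  completion, re.DOTALL)
    if m is None:
        return None  # plain chat answer, no function call
    return json.loads(m.group(1))

# Illustrative raw completion
raw = '<functioncall> {"name": "get_weather", "arguments": "{\\"city\\": \\"Toronto\\"}"} <|endoftext|>'
call = extract_function_call(raw)
```

Returning `None` for plain text lets the same decoding loop serve both function-calling and ordinary chat turns.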
To return a function's result to the model, append "FUNCTION RESPONSE: " followed by the result to the existing chat history:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/2bKX9Zsk6pHJxKYqEprcq.png)
Fireplace is optimized for function-calling and code capabilities rather than general chat, but it has also been trained to handle general instruct-chat:
```
SYSTEM: You are a helpful assistant.
USER: user chat input
ASSISTANT:
```
The model may be subject to errors and limitations, including those of the base model and dataset. We offer Fireplace-13b as open source for all to use. The user is responsible for all outputs.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)
Fireplace is created by [Valiant Labs.](http://valiantlabs.ca/)
Try our flagship chat model, [Shining Valiant!](https://huggingface.co/ValiantLabs/ShiningValiant)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models. |