
Quantization made by Richard Erkhov.

Github | Discord | Request more models

Llama3-70B-Fireplace - GGUF

Original model description:

language:
- en
pipeline_tag: text-generation
tags:
- fireplace
- valiant
- valiant-labs
- llama
- llama-3
- llama-3-instruct
- llama-3-instruct-70b
- 70b
- function-calling
- conversational
- chat
- instruct
model_type: llama
license: llama3


Fireplace is a function-calling model for Llama 3 70b Instruct.

  • combines function-calling abilities with a high-performance, versatile chat model
  • function-calling utilizing the Llama 3 Instruct format

This version of Fireplace, like our previous Fireplace-13b and Fireplace-34b models, focuses on combining chat-instruct and function-calling only.

We're working now on Fireplace 2 for Llama 3, which will include function calling as one of several enhanced technical skills.

Version

This is the 2024-05-09 release of Fireplace for Llama 3 70b.

We're excited to bring additional releases for Fireplace and other models in our Build Tools lineup to Llama 3 soon!

Prompting Guide

Fireplace uses the Llama 3 Instruct prompt format:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>{{ user_msg_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{ model_answer_1 }}<|eot_id|>
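As a concrete illustration, here is a minimal Python sketch (not from the original card) that assembles a prompt in this format; the helper name and the example messages are assumptions for illustration only.

```python
# Minimal sketch: build a Llama 3 Instruct prompt string for Fireplace.
# The helper name and example messages are illustrative assumptions,
# not part of the original model card.

def build_llama3_prompt(system_prompt: str, user_msg: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are Fireplace, an expert code assistant.",
    "Write a function that reverses a string.",
)
```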

Example input for function calling:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n You are Fireplace, an expert code assistant with access to the following functions. Use them if required - { "name": "calculate_tip", "description": "Calculate the tip amount for a bill", "parameters": { "type": "object", "properties": { "bill_amount": { "type": "number", "description": "The total amount of the bill" }, "tip_percentage": { "type": "number", "description": "The percentage of tip to be given" } }, "required": [ "bill_amount", "tip_percentage" ] } } { "name": "check_website_availability", "description": "Check the availability of a website", "parameters": { "type": "object", "properties": { "url": { "type": "string", "description": "The URL of the website" } }, "required": [ "url" ] } } <|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHi, I need help with calculating a tip. My bill is $100 and I want to leave a 30% tip. <|eot_id|><|start_header_id|>assistant<|end_header_id|>
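To run a prompt like this against one of the GGUF files in this repo, a llama-cpp-python call along the following lines should work; the file name, context size, and sampling settings below are assumptions rather than values from the card.

```python
# Sketch: query a Fireplace GGUF quantization with llama-cpp-python.
# The file name, context size, and sampling settings are illustrative
# assumptions, not values from this card.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3-70B-Fireplace.Q4_K_M.gguf",  # any quant from this repo
    n_ctx=8192,
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU
)

prompt = "..."  # paste the function-calling prompt shown above

output = llm(
    prompt,
    max_tokens=256,
    stop=["<|eot_id|>"],  # stop at the Llama 3 end-of-turn token
)
print(output["choices"][0]["text"])
```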

To pass a function's result back to the assistant, deliver it in a new user message:

<|start_header_id|>user<|end_header_id|>\n\n FUNCTION RESPONSE: {"status": "success", "message": "Email has been sent successfully"} <|eot_id|>
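For example, after executing a call the model requested, the result can be folded back into the running prompt roughly as follows; the payload and continuation logic are illustrative assumptions.

```python
import json

# Sketch: fold a function result back into the conversation as a user turn.
# The payload and continuation are illustrative assumptions.
function_result = {"status": "success", "message": "Email has been sent successfully"}

follow_up = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    f"FUNCTION RESPONSE: {json.dumps(function_result)}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

# Append follow_up to the running prompt and generate again so the model
# can phrase the result for the user.
```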

WARNING: text-generation-webui

When using Llama 3 Instruct models (including Fireplace) with text-generation-webui, note that a current bug in the webui can cause the model's end tokens to be read incorrectly, producing unfinished outputs with broken structure.

As a temporary workaround if you encounter this issue, edit Fireplace's tokenizer_config.json file as indicated:

from "eos_token": "<|end_of_text|>",

to "eos_token": "<|eot_id|>",

The Model

Fireplace is built on top of Llama 3 70b Instruct, the highest-performance open-source model available at the time of release.

This version of Fireplace uses the glaiveai/glaive-function-calling-v2 dataset converted to Llama 3 Instruct format.


Fireplace is created by Valiant Labs.

Check out our HuggingFace page for Shining Valiant 2 and our other models!

Follow us on X for updates on our models!

We care about open source. For everyone to use.

We encourage others to finetune further from our models.

GGUF
Model size: 70.6B params
Architecture: llama
Quantizations available: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
