Llama-Spark

Llama-Spark is a powerful conversational AI model developed by Arcee.ai. It is built on the Llama-3.1-8B base model, fine-tuned on our Tome Dataset, and merged with Llama-3.1-8B-Instruct, resulting in a remarkable conversationalist that punches well above its 8B parameter weight class.

GGUF quantizations are available here.
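For illustration, a minimal sketch of loading one of those GGUF builds with llama-cpp-python; the file name and quantization level below are placeholders, not part of this release:

```python
# Minimal sketch: running a GGUF build of Llama-Spark with llama-cpp-python.
# "llama-spark.Q4_K_M.gguf" is a placeholder path; use whichever quantization
# you downloaded from the GGUF repository linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-spark.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,                             # context window to allocate
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize quantum entanglement in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```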

Model Description

Llama-Spark is our commitment to consistently delivering the best-performing conversational AI in the 6-9B parameter range. As new base models become available, we'll continue to update and improve Spark to maintain its leadership position.

This model is a successor to our original Arcee-Spark, incorporating advancements and learnings from our ongoing research and development.

Intended Uses

Llama-Spark is intended for use in conversational AI applications, such as chatbots, virtual assistants, and dialogue systems. It excels at engaging in natural and informative conversations.
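As an illustration, a minimal chat sketch using the transformers pipeline; the prompt text and generation settings are arbitrary examples, and a GPU with enough memory for an 8B model in bfloat16 is assumed:

```python
# Minimal sketch of using Llama-Spark as a chat model with transformers.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="arcee-ai/Llama-Spark",
    torch_dtype=torch.bfloat16,  # assumes bfloat16-capable hardware
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the difference between a list and a tuple in Python."},
]

output = chat(messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last message is the reply.
print(output[0]["generated_text"][-1]["content"])
```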

Training Information

Llama-Spark is built upon the Llama-3.1-8B base model, fine-tuned on the Tome Dataset and merged with Llama-3.1-8B-Instruct.

Acknowledgements

We extend our deepest gratitude to PrimeIntellect for being our compute sponsor for this project.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.90 |
| IFEval (0-shot)     | 79.11 |
| BBH (3-shot)        | 29.77 |
| MATH Lvl 5 (4-shot) |  1.06 |
| GPQA (0-shot)       |  6.60 |
| MuSR (0-shot)       |  2.62 |
| MMLU-PRO (5-shot)   | 30.23 |
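For reference, a hedged sketch of how these benchmarks could be run locally with lm-evaluation-harness; the "leaderboard" task group name and settings are assumptions that depend on the installed harness version, not a confirmed reproduction recipe:

```python
# Sketch: evaluating Llama-Spark on the Open LLM Leaderboard task group
# with lm-evaluation-harness (task names are version-dependent assumptions).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=arcee-ai/Llama-Spark,dtype=bfloat16",
    tasks=["leaderboard"],  # IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO
    batch_size=8,
)
print(results["results"])
```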
