---
license: llama3
---
<div align="center">
  <img src="https://i.ibb.co/9hwFrvL/BLMs-Wkx-NQf-W-46-FZDg-ILhg.jpg" alt="Arcee Spark" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>


Llama-Spark is a powerful conversational AI model developed by Arcee.ai. It is built on the Llama-3.1-8B foundation, combining a fine-tune on our Tome Dataset with Llama-3.1-8B-Instruct through a model merge, resulting in a remarkable conversationalist that punches well above its 8B parameter weight class.

## GGUFs available [here](https://huggingface.co/arcee-ai/Llama-Spark-GGUF)
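
For local inference, one option is to pull a quantized GGUF from the repo above and load it with `llama-cpp-python`. This is a minimal sketch: the quant filename below is an assumption, so check the repo's file list for the exact name you want.

```python
# Minimal sketch: download a GGUF quant of Llama-Spark and chat with it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="arcee-ai/Llama-Spark-GGUF",
    filename="Llama-Spark-Q4_K_M.gguf",  # hypothetical filename; pick one that exists in the repo
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three tips for writing clear bug reports."}]
)
print(out["choices"][0]["message"]["content"])
```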

## Model Description

Llama-Spark is our commitment to consistently delivering the best-performing conversational AI in the 6-9B parameter range. As new base models become available, we'll continue to update and improve Spark to maintain its leadership position. 

This model is a successor to our original Arcee-Spark, incorporating advancements and learnings from our ongoing research and development.

## Intended Uses

Llama-Spark is intended for use in conversational AI applications, such as chatbots, virtual assistants, and dialogue systems. It excels at engaging in natural and informative conversations.
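
A minimal chat sketch with Hugging Face Transformers is shown below; it assumes the Hub id `arcee-ai/Llama-Spark` and that the repo ships the standard Llama-3.1 chat template.

```python
# Minimal sketch: load Llama-Spark and run a single chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Llama-Spark"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain retrieval-augmented generation in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```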

## Training Information

Llama-Spark is built upon the Llama-3.1-8B base model, fine-tuned on the Tome Dataset and then merged with Llama-3.1-8B-Instruct.
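
As an illustration only, the sketch below shows one generic way to merge two checkpoints that share the Llama-3.1-8B architecture (simple linear weight interpolation). It is not necessarily the recipe or weighting used for Llama-Spark, and the fine-tune checkpoint name is hypothetical.

```python
# Illustrative only: linear weight merge of two checkpoints sharing the Llama-3.1-8B architecture.
import torch
from transformers import AutoModelForCausalLM

finetune = AutoModelForCausalLM.from_pretrained(
    "your-org/llama-3.1-8b-tome-sft", torch_dtype=torch.bfloat16  # hypothetical fine-tune checkpoint
)
instruct = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)

alpha = 0.5  # interpolation weight between the two parents (assumed value)
instruct_sd = instruct.state_dict()
merged_sd = {
    name: alpha * param + (1.0 - alpha) * instruct_sd[name]
    for name, param in finetune.state_dict().items()
}

finetune.load_state_dict(merged_sd)
finetune.save_pretrained("llama-spark-style-merge")
```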

## Acknowledgements

We extend our deepest gratitude to **PrimeIntellect** for being our compute sponsor for this project. 

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_arcee-ai__Llama-Spark)

|      Metric       |Value|
|-------------------|----:|
|Avg.               |24.90|
|IFEval (0-Shot)    |79.11|
|BBH (3-Shot)       |29.77|
|MATH Lvl 5 (4-Shot)| 1.06|
|GPQA (0-shot)      | 6.60|
|MuSR (0-shot)      | 2.62|
|MMLU-PRO (5-shot)  |30.23|
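
To dig into the per-task numbers programmatically, the details repo linked above can be loaded with Hugging Face Datasets. This is a sketch; the config names inside that repo vary by task and run date, so list them first rather than hard-coding one.

```python
# Sketch: list and load the detailed leaderboard results for Llama-Spark.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_arcee-ai__Llama-Spark"
configs = get_dataset_config_names(repo)  # one config per task/run
print(configs)

details = load_dataset(repo, configs[0])  # load the first listed task's per-sample results
print(details)
```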