---
language:
- en
pipeline_tag: text-generation
tags:
- enigma
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- code
- code-instruct
- python
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- sequelbox/Tachibana
- sequelbox/Supernova
model_type: llama
model-index:
- name: Llama3.1-8B-Enigma
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 55.39
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 28.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.12
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.57
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.41
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.2
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Enigma
name: Open LLM Leaderboard
license: llama3.1
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/it7MY5MyLCLpFQev5dUis.jpeg)
Enigma is a code-instruct model built on Llama 3.1 8b.
- High-quality code-instruct performance within the Llama 3.1 Instruct chat format
- Finetuned on synthetic code-instruct data generated with Llama 3.1 405b. [Find the current version of the dataset here!](https://huggingface.co/datasets/sequelbox/Tachibana)
- Overall chat performance supplemented with [generalist synthetic data.](https://huggingface.co/datasets/sequelbox/Supernova)
## Version
This is the **2024-09-04** release of Enigma for Llama 3.1 8b, enhancing code-instruct and general chat capabilities.
[**Enigma is now available for Llama 3.2 3b** - get it here!](https://huggingface.co/ValiantLabs/Llama3.2-3B-Enigma)
Help us and recommend Enigma to your friends! We're excited for more Enigma releases in the future.
Right now, we're working on more Build Tools built on Llama 3.1, coming very soon :)
## Prompting Guide
Enigma uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch
model_id = "ValiantLabs/Llama3.1-8B-Enigma"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
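# Chat messages in the Llama 3.1 Instruct format; the pipeline applies the chat template automatically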
messages = [
{"role": "system", "content": "You are Enigma, a highly capable code assistant."},
{"role": "user", "content": "Can you explain virtualization to me?"}
]
outputs = pipeline(
messages,
max_new_tokens=1024,
)
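# With message-style input, generated_text holds the full conversation; the last entry is the assistant's reply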
print(outputs[0]["generated_text"][-1])
```
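If you prefer to manage tokenization and generation directly, here is a minimal sketch using `AutoTokenizer`/`AutoModelForCausalLM` with `apply_chat_template`, which inserts the Llama 3.1 Instruct special tokens for you. The user question and generation settings below are illustrative, not from the original card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.1-8B-Enigma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    # Illustrative user prompt
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# apply_chat_template adds the Llama 3.1 Instruct special tokens
# (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>, ...) and the assistant header
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=1024)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```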
## The Model
Enigma is built on top of Llama 3.1 8b Instruct, using high-quality code-instruct data along with general chat data in the Llama 3.1 Instruct prompt style to supplement overall performance.
Our current version of Enigma is trained on code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova](https://huggingface.co/datasets/sequelbox/Supernova).
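To inspect the finetuning data directly, here is a minimal sketch using the Hugging Face `datasets` library (the `train` split name is assumed here):
```python
from datasets import load_dataset

# Load the code-instruct and generalist chat datasets listed above
tachibana = load_dataset("sequelbox/Tachibana", split="train")
supernova = load_dataset("sequelbox/Supernova", split="train")

print(tachibana)     # features and row count
print(tachibana[0])  # one code-instruct example
```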
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)
Enigma is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models. |