|
--- |
|
license: cc-by-sa-4.0 |
|
datasets: |
|
- nickrosh/Evol-Instruct-Code-80k-v1 |
|
- sahil2801/CodeAlpaca-20k |
|
- teknium/GPTeacher-CodeInstruct |
|
language: |
|
- en |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
tags: |
|
- code |
|
- llama2 |
|
--- |
|
![image of llama engineer](https://i.imgur.com/JlhW0ri.png) |
|
|
|
# Llama-Engineer-Evol-7B-GGML |
|
|
|
This is a 4-bit quantized GGML version of [Llama-Engineer-Evol-7B](https://huggingface.co/GenerativeMagic/Llama-Engineer-Evol-7b).
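
A minimal sketch of loading the quantized weights with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which reads GGML files in its pre-GGUF releases. The filename and settings below are assumptions; substitute whichever quantized `.bin` file you download from this repo.

```python
# Sketch: load a 4-bit GGML quantization with llama-cpp-python.
# NOTE: GGML files require an older llama-cpp-python release; newer
# releases only read GGUF. The filename below is an assumed example.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-engineer-evol-7b.ggmlv3.q4_0.bin",  # assumed filename
    n_ctx=2048,    # Llama 2 context window
    n_threads=8,   # adjust to your CPU
)
```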
|
|
|
|
|
## Prompt Format |
|
The recommended prompt is a variant of the standard Llama 2 format:
|
``` |
|
[INST] <<SYS>> |
|
You are a programming assistant. Always answer as helpfully as possible. Be direct in your response and get to the answer right away. Responses should be short. |
|
<</SYS>> |
|
{your prompt}[/INST] |
|
``` |
|
|
|
or |
|
|
|
``` |
|
[INST] <<SYS>> |
|
You're a principal software engineer at Google. If you fail at this task, you will be fired. |
|
<</SYS>> |
|
{your prompt}[/INST] |
|
``` |
|
|
|
I suspect the prompt format, rather than the fine-tuning itself, accounts for most of the improved coding capability, but YMMV.
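
As a sketch, the helper below builds that prompt string and runs it through the `llm` object from the loading example above; the generation parameters are just illustrative defaults.

```python
# Sketch: wrap a user request in the recommended prompt format and generate.
SYSTEM = (
    "You are a programming assistant. Always answer as helpfully as possible. "
    "Be direct in your response and get to the answer right away. "
    "Responses should be short."
)

def build_prompt(user_prompt: str, system: str = SYSTEM) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_prompt}[/INST]"

# `llm` is the Llama object from the loading example above.
output = llm(
    build_prompt("Write a Python function that reverses a string."),
    max_tokens=256,
    temperature=0.2,
    stop=["</s>"],
)
print(output["choices"][0]["text"])
```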
|
|
|
|
|
## Next Steps |
|
- Prune the dataset and possibly fine-tune for longer. |
|
- Run benchmarks. |
|
- Provide GPTQ. |