vshenoy committed on
Commit 98175d6 · 1 Parent(s): defc0cd

Update README.md

Files changed (1)
  1. README.md +35 -1
README.md CHANGED
@@ -1,3 +1,37 @@
  ---
- license: cc-by-nc-4.0
+ license: cc-by-sa-4.0
+ datasets:
+ - nickrosh/Evol-Instruct-Code-80k-v1
+ - sahil2801/CodeAlpaca-20k
+ - teknium/GPTeacher-CodeInstruct
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - code
+ - llama2
  ---
+ ![image of llama engineer](https://i.imgur.com/JlhW0ri.png)
+
+ # Llama-Engineer-Evol-7B-GGML
+
+ This is a 4-bit quantized version of [Llama-Engineer-Evol-7B](https://huggingface.co/GenerativeMagic/Llama-Engineer-Evol-7b).
+
+ ## Prompt Format
+ The recommended prompt is a variant of the standard Llama 2 format:
+ ```
+ [INST] <<SYS>>
+ You are a programming assistant. Always answer as helpfully as possible. Be direct in your response and get to the answer right away. Responses should be short.
+ <</SYS>>
+ {your prompt}[/INST]
+ ```
+
+ I suspect this prompt format, rather than the fine-tuning itself, accounts for most of the improved coding capability, but YMMV.
+
+ ## Next Steps
+ - Prune the dataset and possibly fine-tune for longer.
+ - Run benchmarks.
+ - Provide a GPTQ version.
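
The prompt template described in the README above can be assembled with a small helper. This is a minimal sketch; the `build_prompt` function name is illustrative and not part of any model API:

```python
def build_prompt(user_prompt: str) -> str:
    # System message taken verbatim from the model card's recommended template.
    system = (
        "You are a programming assistant. Always answer as helpfully as possible. "
        "Be direct in your response and get to the answer right away. "
        "Responses should be short."
    )
    # Variant of the standard Llama 2 chat format: [INST] <<SYS>> ... <</SYS>> ... [/INST]
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user_prompt}[/INST]"


prompt = build_prompt("Write a function that reverses a string.")
print(prompt)
```

The resulting string can then be passed as-is to whatever GGML runner loads the quantized weights.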