---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---

### Overview

This is a qlora fine-tuned 13b parameter LLaMA model, using completely synthetic training data created by gpt-4 via https://github.com/jondurbin/airoboros

This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1), but with thousands of new training examples and an update that allows appending "PLAINFORMAT" to the end of coding prompts to print just the code, without backticks or explanations/usage/etc.

The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general

This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT: 
```

So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (also with a single space after the colon).
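
For instance, here's a tiny Python helper (hypothetical, not part of the repo) that assembles a prompt in this exact format:

```
# Hypothetical helper -- not part of the repo -- that builds a prompt in the
# modified vicuna format described above. Note the single spaces throughout
# and the trailing space after "ASSISTANT:".
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(instruction: str) -> str:
    return f"{SYSTEM} USER: {instruction} ASSISTANT: "

print(build_prompt("Implement the Snake game in python. PLAINFORMAT"))
```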

### Usage

To run the full-precision/PyTorch-native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```

Be sure you are pulling the latest branch!

Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
  --model-path airoboros-13b-gpt4-1.2 \
  --temperature 0.5 \
  --max-new-tokens 2048 \
  --no-history
```

Alternatively, please check out TheBloke's quantized versions:

- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML

### Coding updates from gpt4/1.1:

I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all-caps term at the end of the normal instructions; these produce plain-text output instead of markdown/backtick code formatting.

It's not guaranteed to work all the time, but mostly it does seem to work as expected.

So for example, instead of:
```
Implement the Snake game in python.
```

You would use:
```
Implement the Snake game in python.  PLAINFORMAT
```

### Other updates from gpt4/1.1:

- Several hundred new role-playing examples.
- A few thousand Orca-style reasoning/math questions, with ELI5 prompts used to generate the responses (you don't need to include ELI5 in your own prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)

### Usage and License Notices

The model weights are subject to the LLaMA license, and the dataset is subject to OpenAI's terms of use because it was generated with gpt-4. Everything else is free.

The data were generated with the OpenAI API (gpt-4), and the dataset is therefore subject to their ToS, meaning non-commercial use only. I've put cc-by-nc-4.0 as the license, but it's probably better classified as "special/other".

airoboros data and models are intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.