---
datasets:
- cognitivecomputations/dolphin
- jondurbin/airoboros-2.2.1
- cognitivecomputations/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
language:
- en
license: bigcode-openrail-m
---

# DolphinCoder StarCoder2 15b 🐬

Sponsored by [latitude.sh](https://www.latitude.sh/).

Join our Discord! https://discord.gg/cognitivecomputations 

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

This model is based on StarCoder2-15b and is subject to the bigcode-openrail-m license.

This Dolphin is *really good* at coding; I trained it with a lot of coding data.

This model is uncensored.  I have filtered the dataset to remove alignment and bias.  This makes the model more compliant.  You are advised to implement your own alignment layer before exposing the model as a service.  It will be highly compliant with any request, even unethical ones.  Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
You are responsible for any content you create using this model.  Enjoy responsibly.

## Training
It took 3 days to train 3 epochs on 8x H100s using qLoRA and Axolotl.
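
For reference, here is a minimal sketch of the same technique (qLoRA) using transformers, peft, and bitsandbytes rather than Axolotl. This is not the actual training config; the LoRA rank, alpha, dropout, and target modules below are assumptions, not values from this run.

```python
# Illustrative qLoRA setup: 4-bit quantized base model + LoRA adapters.
# NOT the actual Axolotl config used for DolphinCoder; values are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "bigcode/starcoder2-15b"  # base model named in this card

# Load the base model in 4-bit (the "q" in qLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach LoRA adapters; rank, alpha, and target modules are assumed values.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```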

Prompt format:
This model uses the ChatML prompt format.
```
<|im_start|>system
You are DolphinCoder, a helpful AI programming assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```

Example:
```
<|im_start|>system
You are DolphinCoder, a master at software engineering and coding in any programming language.<|im_end|>
<|im_start|>user
Please write me a program in golang that parses all the lines in a file, reverses them character-wise, and saves the result to a new file.<|im_end|>
<|im_start|>assistant
```
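
Below is a minimal inference sketch with Hugging Face transformers that renders this ChatML format via the tokenizer's chat template. The repo id and generation settings are assumptions, not taken from this card.

```python
# Minimal inference sketch; assumes the tokenizer ships a ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphincoder-starcoder2-15b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are DolphinCoder, a helpful AI programming assistant."},
    {"role": "user", "content": "Write a Python function that reverses each line of a file."},
]

# apply_chat_template produces the ChatML prompt shown above,
# ending with the open <|im_start|>assistant turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```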

## Quantized models

- [GGUF](https://huggingface.co/dagbs/dolphincoder-starcoder2-15b-GGUF)

- [ExLlamaV2](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2)

## Gratitude
- This model was made possible by the generous sponsorship of [latitude.sh](https://www.latitude.sh/).
- Huge thank you to [BigCode](https://www.bigcode-project.org/) for training and publishing the weights of StarCoder2
- HUGE thank you to the dataset authors: @jondurbin, @ise-uiuc, @teknium, @m-a-p
- And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.

## Example Output

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/9Yhoy6PYoreqX8KocaDWb.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/lz0fXODictCmW5pNultqq.png)

[If you would like to financially support my efforts](https://ko-fi.com/erichartford)

[swag](https://fa7113.myshopify.com/)