---
license: other
license_name: deepseek
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
library_name: transformers
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need

> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!

* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach that enlightens LLMs with open-source code snippets to generate *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of LLM-synthesized instruction data by supplying the LLM with *a wealth of open-source references*, producing more diverse, realistic, and controllable data (a minimal sketch follows the figures below).

![Overview of OSS-Instruct](assets/overview.svg)
![Overview of Result](assets/result.png)
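
To make the recipe concrete, here is a minimal sketch of one OSS-Instruct generation step using the `openai` Python client. The seed snippet and the prompt wording are illustrative placeholders, not the exact prompt from the paper; the real pipeline lives in the GitHub repo.

```python
# Minimal OSS-Instruct sketch: seed a teacher model with a real open-source
# code snippet and ask it to invent an instruction/response pair inspired by it.
# The prompt wording below is a placeholder, not the paper's exact prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

seed_snippet = """def rolling_mean(xs, k):
    return [sum(xs[i : i + k]) / k for i in range(len(xs) - k + 1)]"""

oss_instruct_prompt = f"""Gain inspiration from the following random code snippet
and create a high-quality, self-contained coding problem and its solution.

Code snippet for inspiration:
{seed_snippet}

Format your output as:
[Problem Description] ...
[Solution] ..."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # the teacher model named under Training Data
    messages=[{"role": "user", "content": oss_instruct_prompt}],
)
print(response.choices[0].message.content)  # one synthetic training example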

## Model Details

### Model Description

* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
* **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)

### Model Sources

* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>

### Training Data

* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both the Magicoder and Magicoder-S series.
* [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further fine-tune the Magicoder series and obtain the Magicoder-S models.
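
Both sets are published on the Hugging Face Hub; a quick way to inspect them with the 🤗 `datasets` library (dataset IDs taken from this card's metadata) is:

```python
from datasets import load_dataset

# Dataset IDs as listed in this card's metadata block.
oss_instruct = load_dataset("ise-uiuc/Magicoder-OSS-Instruct-75K", split="train")
evol_instruct = load_dataset("ise-uiuc/Magicoder-Evol-Instruct-110K", split="train")

print(len(oss_instruct), len(evol_instruct))
print(oss_instruct[0])  # one instruction/response record
```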

## Uses

### Direct Use

Magicoder models are designed for, and best suited to, **coding tasks**.

### Out-of-Scope Use

Magicoder models may not work well on non-coding tasks.

## Bias, Risks, and Limitations

Magicoder models may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.

```python
from transformers import pipeline
import torch

# Prompt template used to fine-tune Magicoder: the user instruction goes under
# "@@ Instruction" and the model completes the text after "@@ Response".
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

instruction = "<Your code instruction here>"

prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
    model="ise-uiuc/Magicoder-S-DS-6.7B",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# do_sample=False gives deterministic, greedy decoding.
result = generator(prompt, max_length=1024, num_return_sequences=1, do_sample=False)
print(result[0]["generated_text"])
```
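
If you need finer control over tokenization and decoding, a roughly equivalent sketch using the lower-level `AutoModelForCausalLM` API (reusing `MAGICODER_PROMPT` and `instruction` from the snippet above) is:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "ise-uiuc/Magicoder-S-DS-6.7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Reuses MAGICODER_PROMPT and instruction from the pipeline example above.
prompt = MAGICODER_PROMPT.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False)

completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Keep only the model's answer after the "@@ Response" marker.
print(completion.split("@@ Response")[-1].strip())
```

Splitting on the `@@ Response` marker keeps only the model's answer instead of echoing the whole prompt.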

## Technical Details

Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).

## Citation

```bibtex
@misc{magicoder,
    title={Magicoder: Source Code Is All You Need}, 
    author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
    year={2023},
    eprint={2312.02120},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

## Acknowledgements

* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination

## Important Note

Magicoder models are trained on synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoder models are not intended to compete with OpenAI's commercial products.