# aiXcoder-7B Code Large Language Model

<p align="center">
🏠 <a href="https://www.aixcoder.com/" target="_blank">Official website</a>|🛠 <a href="https://marketplace.visualstudio.com/items?itemName=aixcoder-plugin.aixcoder" target="_blank">VS Code Plugin</a>|🛠 <a href="https://plugins.jetbrains.com/plugin/13574-aixcoder-code-completer" target="_blank">JetBrains Plugin</a>|<a href="https://github.com/aixcoder-plugin/aiXcoder-7B" target="_blank">GitHub Project</a>
</p>

Welcome to the official repository of the aiXcoder-7B Code Large Language Model. This model is designed to understand and generate code across multiple programming languages, offering state-of-the-art performance in code completion, comprehension, generation, and other programming-language tasks.

Table of Contents

1. [Model Introduction](#model-introduction)
2. [Quickstart](#quickstart)
    - [Environment Requirements](#environment-requirements)
    - [Model Weights](#model-weights)
    - [Inference Example](#inference-example)
3. [License](#license)
4. [Acknowledgments](#acknowledgments)
## Model Introduction

As the capabilities of large code models are gradually being uncovered, aiXcoder has continually considered how to make these models more useful in real development scenarios. To this end, we have open-sourced aiXcoder 7B Base, which has undergone extensive training on 1.2T unique tokens, with pre-training tasks and contextual information uniquely designed for real-world code-generation scenarios.

aiXcoder 7B Base stands out as the most effective model in code completion scenarios among all models of similar parameter size, and it also surpasses mainstream models such as CodeLlama 34B and StarCoder2 15B in average performance on the multilingual NL2Code benchmark.

In our ongoing exploration of applying large code models, the release of aiXcoder 7B Base represents a significant milestone. The current version of aiXcoder 7B Base is a foundational model focused on improving the efficiency and accuracy of code completion and code generation, aiming to provide robust support for developers in these scenarios. It is important to note that this version has not undergone specific instruction tuning, which means it might not yet offer optimal performance for specialized higher-level tasks such as test case generation and code debugging.

However, further development of the aiXcoder model series is already in motion. In the near future, we aim to release new versions of the model that have been meticulously instruction-tuned for a wider range of programming tasks, including but not limited to test case generation and code debugging. Through these instruction-tuned models, we anticipate offering developers more comprehensive and deeper programming support, helping them maximize efficiency at every stage of software development.
## Quickstart

### Environment Requirements

#### Option 1: Build Env

To run the model inference code, you'll need the following environment setup:

- Python 3.8 or higher
- PyTorch 2.1.0 or higher
- sentencepiece 0.2.0 or higher
- transformers 4.34.1 or higher (if running inference with the transformers library)

Please ensure all dependencies are installed using the following commands:

```bash
conda create -n aixcoder-7b python=3.11
conda activate aixcoder-7b
git clone git@github.com:aixcoder-plugin/aiXcoder-7b.git
cd aiXcoder-7b
pip install -r requirements.txt
```

`requirements.txt` lists all necessary libraries and their versions.

To achieve faster inference speeds, especially for large models, we recommend installing `flash attention`. `Flash attention` is an optimized attention mechanism that significantly reduces computation time for transformer-based models without sacrificing accuracy.

Before proceeding, ensure your environment meets the CUDA requirements, as `flash attention` relies on GPU acceleration. Follow these steps to install `flash attention`:

```bash
git clone git@github.com:Dao-AILab/flash-attention.git
cd flash-attention
MAX_JOBS=8 python setup.py install
```
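Once the environment is built, a quick sanity check can confirm that PyTorch sees a CUDA device and that the optional `flash_attn` package imports cleanly. This is only a minimal sketch, not part of the official setup; the `flash_attn` import matters only if you installed flash attention above:

```python
# Environment sanity check: report the PyTorch version, CUDA visibility,
# and whether the optional flash attention build can be imported.
import torch

print("torch version:", torch.__version__)
print("cuda available:", torch.cuda.is_available())

try:
    import flash_attn  # only present if flash attention was built above
    print("flash_attn import OK")
except ImportError:
    print("flash_attn not installed (optional)")
```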
#### Option 2: Docker

For a consistent and isolated environment, we recommend running the model inference code with Docker. Here's how to set up and use Docker for our model:

1. Install Docker: If you haven't already, install Docker on your machine.

2. Pull the Docker Image: Pull the Docker image from Docker Hub.

```bash
docker pull pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel
```

3. Run the Container: Once the image is pulled, you can run the model inside a Docker container.

```bash
docker run --gpus all -it -v /dev/shm:/dev/shm --name aix_instance pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel /bin/bash
pip install sentencepiece
git clone git@github.com:aixcoder-plugin/aiXcoder-7b.git
cd aiXcoder-7b
```

This command starts a container named `aix_instance` from the PyTorch image. You can interact with the model inside this container.

To achieve faster inference speeds, especially for large models, we recommend installing `flash attention`.

```bash
git clone git@github.com:Dao-AILab/flash-attention.git
cd flash-attention
MAX_JOBS=8 python setup.py install
```

4. Model Inference: Within the Docker container, you can run the model inference code as described in the [Inference Example](#inference-example) section.

Using Docker provides a clean, controlled environment that minimizes issues related to software versions and dependencies.
### Model Weights

You can download the model weights from the following links, or fetch them programmatically as shown below:

- [aiXcoder Base Download](https://huggingface.co/aiXcoder/aixcoder-7b-base)
- aiXcoder Instruct Download (Coming soon...)
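If you prefer a scripted download, the `huggingface_hub` library can fetch the whole model repository. This is a minimal sketch; the `local_dir` value is only an illustrative target path:

```python
# Download the aiXcoder 7B Base weights into a local directory with huggingface_hub.
from huggingface_hub import snapshot_download

weights_dir = snapshot_download(
    repo_id="aiXcoder/aixcoder-7b-base",
    local_dir="./aixcoder-7b-base",  # illustrative path; adjust as needed
)
print("weights downloaded to:", weights_dir)
```

The returned directory can then be passed as `--model_dir` in the command-line example below.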
### Inference Example

#### Command Line Execution

For a quick start, you can run the model inference directly from the command line:

```bash
torchrun --nproc_per_node 1 sess_megatron.py --model_dir "path/to/model_weights_dir"
```

Replace "path/to/model_weights_dir" with the actual path to your downloaded model weights.

Or run inference with Hugging Face's transformers:

```bash
python sess_huggingface.py
```

#### Python Script Execution

Alternatively, you can invoke the model programmatically within your Python scripts. This method provides more flexibility for integrating the model into your applications or workflows. Here's a simple example of how to do it:
```python
from sess_megatron import TestInference

infer = TestInference()
res = infer.run_infer(
    # for FIM-style input, code_string stands for the prefix context
    # ("快速排序算法" means "quick sort algorithm")
    code_string="""# 快速排序算法""",
    # for FIM-style input, later_code stands for the suffix context
    later_code="\n",
    # file_path should be a path from the project root to the file
    file_path="test.py",
    # maximum number of generated tokens
    max_new_tokens=256,
)
print(res)

"""output:

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [i for i in arr[1:] if i <= pivot]
    greater = [i for i in arr[1:] if i > pivot]
    return quick_sort(less) + [pivot] + quick_sort(greater)


# 测试
arr = [3, 2, 1, 4, 5]
print(quick_sort(arr))  # [1, 2, 3, 4, 5]
"""
```
```python
import torch
import sys
from hf_mini.utils import input_wrapper
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

tokenizer = AutoTokenizer.from_pretrained("aiXcoder/aixcoder-7b-base")
model = AutoModelForCausalLM.from_pretrained("aiXcoder/aixcoder-7b-base", torch_dtype=torch.bfloat16)


text = input_wrapper(
    # for FIM-style input, code_string stands for the prefix context
    # ("快速排序算法" means "quick sort algorithm"; "测试" means "test")
    code_string="# 快速排序算法",
    # for FIM-style input, later_code stands for the suffix context
    later_code="\n# 测试\narr = [3, 2, 1, 4, 5]\nprint(quick_sort(arr)) # [1, 2, 3, 4, 5]",
    # path should be a path from the project root to the file
    path="test.py"
)

if len(text) == 0:
    sys.exit()

inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False)

inputs = inputs.to(device)
model.to(device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))


"""output:
def quick_sort(arr):
    # 如果数组长度小于等于1,直接返回
    if len(arr) <= 1:
        return arr
    # 选择数组的第一个元素作为基准
    pivot = arr[0]
    # 初始化左右指针
    left, right = 1, len(arr) - 1
    # 循环直到左指针小于右指针
    while left < right:
        # 从右到左找到第一个小于基准的元素,与左指针元素交换
        if arr[right] < pivot:
            arr[left], arr[right] = arr[right], arr[left]
            left += 1
        # 从左到右找到第一个大于等于基准的元素,与右指针元素交换
        if arr[left] >= pivot:
            right -= 1
    # 将基准元素与左指针元素交换
    arr[left], arr[0] = arr[0], arr[left]
    # 对左半部分进行递归排序
    quick_sort(arr[:left])
    # 对右半部分进行递归排序
    quick_sort(arr[left + 1:])
    return arr</s>
"""
```
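Because `model.generate` returns the prompt tokens followed by the newly generated ones, and `skip_special_tokens=False` keeps markers such as the trailing `</s>`, you may want to decode only the completion itself. Here is a minimal sketch reusing the `inputs` and `outputs` variables from the example above:

```python
# Decode only the newly generated tokens, dropping the echoed prompt and
# special tokens such as the trailing </s>.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(completion)
```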
## License

The model weights are licensed under the [Model License](./MODEL_LICENSE) for academic research use; for commercial use, please apply by sending an email to support@aiXcoder.com.


## Acknowledgments

We would like to thank all contributors to the open-source projects and datasets that made this work possible.

Thank you for your interest in our Code Large Language Model. We look forward to your contributions and feedback!