---
language:
- code
pipeline_tag: text-generation
tags:
- code
license: llama2
---



# **Opencsg-phi-2-v0.1**          [[中文]](#chinese)    [[English]](#english)

<a id="english"></a>

<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>

<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a>   <a href="https://github.com/opencsgs">[github]</a>  <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a>  <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>


OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.





## Model Description

Phi-2 is a 2.7-billion-parameter Transformer model trained on augmented data sources, including synthetic NLP texts and filtered websites, alongside the data used for Phi-1.5. Despite having fewer than 13 billion parameters, it achieves nearly state-of-the-art results on benchmarks for common sense, language understanding, and logical reasoning.
Unlike many chat models, Phi-2 has not been fine-tuned with reinforcement learning from human feedback. The goal of this open-source model is to enable research into safety challenges such as reducing toxicity, understanding biases, and enhancing controllability.


opencsg-phi-2-v0.1 is a series of models based on phi-2, fine-tuned with full-parameter fine-tuning.
<br>

This is the repository for the base 2.7B version fine-tuned from [phi-2](https://huggingface.co/microsoft/phi-2).

| Base Model | Fine-tuned Model                                                               |
| --- | ----------------------------------------------------------------------------- |
| [phi-2](https://huggingface.co/microsoft/phi-2) | [opencsg/opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1)    |



## Model Eval

HumanEval is the most common benchmark for evaluating code generation performance, especially on the completion of code exercises.
Model evaluation is, to some extent, a dark art: different models have different sensitivities to decoding methods, parameters, and instructions.
It is impractical for us to manually set specific configurations for each fine-tuned model, because a real LLM should retain general capabilities even when users vary these parameters.

Therefore, OpenCSG devised a relatively fair method to compare fine-tuned models on the HumanEval benchmark.
To simplify the comparison, we chose the Pass@1 metric for Python, although our fine-tuning dataset includes samples in multiple languages.

**For fairness, we evaluated the original and fine-tuned phi-2 models based only on the prompts from the original problems, without including any other instructions.**

**Besides, we use the greedy decoding method for each model during evaluation.**

| Model     | HumanEval python pass@1                                                 |
| ---  |----------------------------------------------------------------------------- |
| phi-2 | 48.2% |
| **opencsg-phi-2-v0.1** |**54.3%**|
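The pass@1 numbers above follow the standard unbiased pass@k estimator used with the HumanEval benchmark; with greedy decoding there is a single deterministic sample per problem, so pass@1 reduces to the plain fraction of problems whose completion passes the unit tests. A minimal sketch (the helper name and the toy pass/fail counts are illustrative, not our actual evaluation harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of which
    pass the tests, evaluated at budget k. Returns the probability
    that at least one of k drawn samples passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Greedy decoding gives one sample per problem (n = 1), so pass@1 is
# just the mean pass rate; here 3 of 4 toy problems pass.
scores = [pass_at_k(1, c, 1) for c in (1, 0, 1, 1)]
print(sum(scores) / len(scores))  # 0.75
```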





**TODO**
- We will provide more benchmark scores on fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.



# Model Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("opencsg/opencsg-phi-2-v0.1", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("opencsg/opencsg-phi-2-v0.1", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

# Training

## Hardware

- **GPUs:** 8 NVIDIA A800
- **Training time:** 4 hours

## Software

- **Orchestration:** [Deepspeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
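The exact DeepSpeed configuration used for this fine-tune is not published. As a rough illustration only, a full-parameter bf16 fine-tune on 8 GPUs is typically driven by a JSON config along these lines (every value below is an assumption for illustration, not the setting actually used):

```python
import json

# Hypothetical DeepSpeed config: ZeRO stage 2 with bf16 mixed precision.
# All values here are illustrative assumptions, not this card's real settings.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "gradient_clipping": 1.0,
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

The file would then be passed to the launcher, e.g. `deepspeed train.py --deepspeed ds_config.json` (script name assumed).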


<a id="chinese"></a>

<p>

</p>

# About OpenCSG


<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>

<p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a>   <a href="https://github.com/opencsgs">[github]</a>  <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a>  <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>



OpenCSG is committed to converged resources, software refinement, and generative LM. The 'C' stands for Converged resources, the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, meaning software refined by large models. The 'G' stands for Generative LM: widespread, inclusive, democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, making OpenCSG's large model stack available to the community. Everyone is welcome to use it, give feedback, and contribute.



## Model Description


Phi-2 is a 2.7-billion-parameter Transformer model trained on augmented data sources, including synthetic NLP texts and filtered websites, alongside the data used for Phi-1.5. Despite having fewer than 13 billion parameters, it achieves nearly state-of-the-art results on benchmarks for common sense, language understanding, and logical reasoning.
Unlike many chat models, Phi-2 has not been fine-tuned with reinforcement learning from human feedback. The goal of this open-source model is to enable research into safety challenges such as reducing toxicity, understanding biases, and enhancing controllability.

opencsg-phi-2-v0.1 is a series of models based on phi-2, fine-tuned with full-parameter fine-tuning.

<br>

This is the model version fine-tuned from [phi-2](https://huggingface.co/microsoft/phi-2).

| Base Model | Fine-tuned Model                                                               |
| --- | ----------------------------------------------------------------------------- |
| [phi-2](https://huggingface.co/microsoft/phi-2) | [opencsg/opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1)    |


## Model Eval

HumanEval is the most common benchmark for evaluating code generation performance, especially on the completion of code exercises.
Model evaluation is, to some extent, a dark art: different models have different sensitivities to decoding methods, parameters, and instructions,
and an excellent large model should have general capabilities whose output does not swing wildly when decoding parameters change.

Therefore, OpenCSG provides a relatively fair method to compare fine-tuned models on the HumanEval benchmark.
For convenience, we chose the Pass@1 metric for Python, but note that our fine-tuning dataset contains multiple programming languages.

**For fairness, we evaluated the original and fine-tuned phi-2 models based only on the prompts of the original problems, without any additional instructions.**

**In addition, we used greedy decoding for every model during evaluation.**

| Model     | HumanEval python pass@1                                                 |
| ---  |----------------------------------------------------------------------------- |
| phi-2 | 48.2% |
| **opencsg-phi-2-v0.1** |**54.3%**|




**TODO**
- We will provide more benchmark scores for fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.



# Model Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("opencsg/opencsg-phi-2-v0.1", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("opencsg/opencsg-phi-2-v0.1", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

# Training

## Hardware

- **GPUs:** 8 NVIDIA A800
- **Training time:** 4 hours

## Software

- **Fine-tuning framework:** [Deepspeed](https://github.com/microsoft/DeepSpeed)
- **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex)