zhao1iang committed
Commit 565ab8c
Parent(s): 5019740

Update README.md

Files changed (1): README.md (+230, -6)
---
license: other
license_name: skywork
license_link: >-
  https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---

<!-- <div align="center">
<h1>
✨Skywork
</h1>
</div> -->
<div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>

<p align="center">
🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 👾 <a href="https://wisemodel.cn/organization/Skywork" target="_blank">Wisemodel</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a> • 📜 <a href="http://arxiv.org/abs/2310.19341" target="_blank">Tech Report</a>
</p>

<div align="center">

[![GitHub Stars](https://img.shields.io/github/stars/SkyworkAI/Skywork-MoE)](https://github.com/SkyworkAI/Skywork-MoE/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/SkyworkAI/Skywork-MoE)](https://github.com/SkyworkAI/Skywork-MoE/fork)

</div>

# Project Introduction

Skywork-MoE is a high-performance mixture-of-experts (MoE) model with 146 billion parameters, 16 experts, and 22 billion activated parameters. The model is initialized from the pre-existing dense checkpoints of our Skywork-13B model.

We introduce two innovative techniques: Gating Logit Normalization, which enhances expert diversification, and Adaptive Auxiliary Loss Coefficients, which allow for layer-specific adjustment of auxiliary loss coefficients.

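As a rough illustration of the gating logit normalization idea, the router logits can be standardized per token and rescaled before the softmax, which sharpens the gate distribution and encourages expert diversification. This is only a minimal sketch assuming a linear gate; the function name, tensor shapes, and the scaling coefficient `lam` are illustrative and not taken from the training code.

```python
import torch
import torch.nn.functional as F

def route_with_logit_normalization(hidden_states: torch.Tensor,
                                   gate_weight: torch.Tensor,
                                   lam: float = 1.0,
                                   top_k: int = 2):
    """Standardize the gating logits per token, rescale by `lam`, then pick top-k experts."""
    logits = hidden_states @ gate_weight.t()              # [num_tokens, num_experts]
    mean = logits.mean(dim=-1, keepdim=True)
    std = logits.std(dim=-1, keepdim=True)
    normalized = lam * (logits - mean) / (std + 1e-6)     # gating logit normalization
    probs = F.softmax(normalized, dim=-1)
    topk_probs, topk_idx = probs.topk(top_k, dim=-1)      # route each token to its top-k experts
    return topk_probs, topk_idx

# toy usage: 4 tokens, hidden size 8, 16 experts (shapes are illustrative only)
x = torch.randn(4, 8)
w_gate = torch.randn(16, 8)
print(route_with_logit_normalization(x, w_gate))
```
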
Skywork-MoE demonstrates comparable or superior performance to models with more parameters or more activated parameters, such as Grok-1, DBRX, Mixtral 8x22B, and DeepSeek-V2.

# News and Updates
* 2024.6.3 We release the **Skywork-MoE-base** model.

# Table of contents

- [👨‍💻Project Introduction](#project-introduction)
- [☁️Download URL](#download-url)
- [🏆Benchmark Results](#benchmark-results)
- [⚠️Declaration and License Agreement](#declaration-and-license-agreement)
- [🤝Contact Us and Citation](#contact-us-and-citation)

# Download URL

|                      | HuggingFace Model | ModelScope Model | Wisemodel Model |
|:--------------------:|:-----------------:|:----------------:|:---------------:|
| **Skywork-MoE-base** | 🤗 [Skywork-MoE-base](https://huggingface.co/Skywork/Skywork-MoE-base) | 🤖 [Skywork-MoE-base](https://www.modelscope.cn/models/skywork/Skywork-MoE-base) | 👾 [Skywork-MoE-base](https://wisemodel.cn/models/Skywork/Skywork-MoE-base) |

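If you prefer to fetch the weights to a local directory first (for example before mounting them into a container), one option is the `huggingface_hub` snapshot API; the local directory `./Skywork-MoE-base` below is just an example path.

```python
from huggingface_hub import snapshot_download

# Download the full Skywork-MoE-base repository to a local folder (illustrative path).
snapshot_download(
    repo_id="Skywork/Skywork-MoE-base",
    local_dir="./Skywork-MoE-base",
)
```
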
# Benchmark Results
We evaluated the Skywork-MoE-base model on various popular benchmarks, including C-Eval, MMLU, CMMLU, GSM8K, MATH, and HumanEval.
<img src="misc/skywork_moe_base_evaluation.png" alt="Skywork-MoE-base benchmark results" width="600" height="280">

# Demonstration of Hugging Face Model Inference

## Base Model Inference

We can run inference for the Skywork-MoE-base (16x13B) model with Hugging Face Transformers on 8xA100/A800 or higher GPU hardware configurations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Skywork/Skywork-MoE-base", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Skywork/Skywork-MoE-base", trust_remote_code=True)

# Prompt: "The capital of Shaanxi is Xi'an"
inputs = tokenizer('陕西的省会是西安', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, max_length=128)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
"""
陕西的省会是西安。
西安,古称长安、镐京,是陕西省会、副省级市、关中平原城市群核心城市、丝绸之路起点城市、“一带一路”核心区、中国西部地区重要的中心城市,国家重要的科研、教育、工业基地。
西安是中国四大古都之一,联合国科教文组织于1981年确定的“世界历史名城”,美媒评选的世界十大古都之一。地处关中平原中部,北濒渭河,南依秦岭,八水润长安。下辖11区2县并代管西
"""

# Prompt: "The capital of Shaanxi is Xi'an, the capital of Gansu is Lanzhou, the capital of Henan is Zhengzhou"
inputs = tokenizer('陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, max_length=128)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
"""
陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州,湖北的省会是武汉,湖南的省会是长沙,安徽的省会是合肥,江西的省会是南昌,江苏的省会是南京,浙江的省会是杭州,福建的省会是福州,广东的省会是广州,广西的省会是南宁,四川的省会是成都,贵州的省会是贵阳,云南的省会是昆明,山西的省会是太原,山东的省会是济南,河北的省会是石家庄,辽宁的省会是沈阳,吉林的省会是长春,黑龙江的
"""
```

# Demonstration of vLLM Model Inference

## Quickstart with vLLM

We provide a method to quickly deploy the Skywork-MoE-base model with vLLM.

With fp8 precision, you can run Skywork-MoE-base on just 8x RTX 4090 GPUs.

The source code is available in the [`vllm`](https://github.com/SkyworkAI/vllm) fork provided by Skywork.

The fp8 model is available at [`Skywork-MoE-Base-FP8`](https://huggingface.co/Skywork/Skywork-MoE-Base-FP8).

### Based on local environment

Since PyTorch only supports fp8 precision on the 4090 in its nightly builds, you need to install the corresponding nightly (or a newer) version of PyTorch:

```shell
# for CUDA 12.1
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
# for CUDA 12.4
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124
```

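To confirm that the installed build actually exposes the fp8 data types, you can run a quick probe; this check is our suggestion rather than part of the original instructions, and the dtype attributes below are standard PyTorch names in recent builds.

```python
import torch

# Nightly/recent PyTorch builds expose float8 dtypes; older releases raise AttributeError here.
print(torch.__version__)
print(torch.float8_e4m3fn, torch.float8_e5m2)
```
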
Some other dependencies also need to be installed:

```shell
pip3 install xformers vllm-flash-attn
```

Then clone the [`vllm`](https://github.com/SkyworkAI/vllm) repository provided by Skywork and switch to the `skywork-moe` branch:

```shell
git clone https://github.com/SkyworkAI/vllm.git -b skywork-moe
cd vllm
```

Then compile and install vLLM:

```shell
MAX_JOBS=8 python3 setup.py install
```

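A quick way to confirm the build installed cleanly is to import the package and print its version; this is a suggested sanity check that only verifies the import, not MoE support.

```python
# Run inside the environment where vllm was just built.
import vllm

print(vllm.__version__)
```
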
### Based on Docker

You can use the Docker image provided by Skywork to run vLLM directly:

```shell
docker pull registry.cn-wulanchabu.aliyuncs.com/triple-mu/skywork-moe-vllm:v1
```

Then start the container, mounting the model path and working directory:

```shell
model_path="Skywork/Skywork-MoE-Base-FP8"  # local directory containing the downloaded fp8 weights
workspace=${PWD}

docker run \
    --runtime nvidia \
    --gpus all \
    -it \
    --rm \
    --shm-size=1t \
    --ulimit memlock=-1 \
    --privileged=true \
    --ulimit stack=67108864 \
    --ipc=host \
    -v ${model_path}:/Skywork-MoE-Base-FP8 \
    -v ${workspace}:/workspace \
    registry.cn-wulanchabu.aliyuncs.com/triple-mu/skywork-moe-vllm:v1
```

Now you can run the Skywork-MoE-base model for fun!

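Before launching inference inside the container, it may help to confirm that all eight GPUs are visible. This assumes PyTorch is available inside the image, which is typical for vLLM images but is an assumption here.

```python
import torch

# tensor_parallel_size=8 in the example below expects all 8 GPUs to be visible.
print(torch.cuda.is_available(), torch.cuda.device_count())
```
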
### Text Completion

```python
from vllm import LLM, SamplingParams

model_path = '/path/to/skywork-moe-base'  # e.g. the fp8 checkpoint mounted into the container
prompts = [
    "The president of the United States is",
    "The capital of France is",
]

sampling_params = SamplingParams(temperature=0.3, max_tokens=256)

llm = LLM(
    model=model_path,
    quantization='fp8',
    kv_cache_dtype='fp8',
    tensor_parallel_size=8,        # one shard per GPU on an 8-GPU node
    gpu_memory_utilization=0.95,
    enforce_eager=True,
    trust_remote_code=True,
)

outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

# Declaration and License Agreement

## Declaration

We hereby declare that the Skywork model should not be used for any activities that pose a threat to national or societal security or engage in unlawful actions. Additionally, we request users not to deploy the Skywork model for internet services without appropriate security reviews and records. We hope that all users will adhere to this principle to ensure that technological advancements occur in a regulated and lawful environment.

We have done our utmost to ensure the compliance of the data used during the model's training process. However, despite our extensive efforts, due to the complexity of the model and data, there may still be unpredictable risks and issues. Therefore, if any problems arise as a result of using the Skywork open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.

## License Agreement

Community usage of the Skywork model requires the [Skywork Community License](https://github.com/SkyworkAI/Skywork-MoE/blob/main/Skywork%20Community%20License.pdf). The Skywork model supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by the terms and conditions within the [Skywork Community License](https://github.com/SkyworkAI/Skywork-MoE/blob/main/Skywork%20Community%20License.pdf).

[《Skywork 模型社区许可协议》]: https://github.com/SkyworkAI/Skywork-MoE/blob/main/Skywork%20模型社区许可协议.pdf
[skywork-opensource@kunlun-inc.com]: mailto:skywork-opensource@kunlun-inc.com

# Contact Us and Citation
If you find our work helpful, please feel free to cite our paper:
```bibtex
@misc{wei2024skywork,
      title={Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models},
      author={Tianwen Wei and Bo Zhu and Liang Zhao and Cheng Cheng and Biye Li and Weiwei Lü and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Liang Zeng and Xiaokun Wang and Yutuan Ma and Rui Hu and Shuicheng Yan and Han Fang and Yahui Zhou},
      year={2024},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```