---
license: other
---
# YAYI 2
## Introduction
YAYI 2 is the new generation of open-source large language models developed by Wenge Technology, available in Base and Chat versions with 30B parameters. It was pretrained on 2.65 trillion tokens of high-quality, multilingual data. The base model is aligned with human values through supervised fine-tuning on millions of instructions, covering both general and domain-specific scenarios, together with reinforcement learning from human feedback (RLHF). In this release we open-source the pretrained base model, **YAYI2-30B**. By open-sourcing the YAYI 2 model, we aim to contribute to the development of the Chinese pre-trained large language model open-source community and, together with every partner, to build the YAYI large language model ecosystem. Stay tuned for more technical details in our upcoming technical report! 🔥
## Model
| Model Name | Context Length | 🤗 HF Model Name |
|:----------|:----------:|:----------:|
| YAYI2-30B | 4096 | wenge-research/yayi2-30b|
## Evaluation
We evaluated our model on standard benchmarks, including C-Eval, MMLU, CMMLU, AGIEval, GAOKAO-Bench, GSM8K, MATH, BBH, HumanEval, and MBPP, to assess its performance in language understanding, subject knowledge, mathematical reasoning, logical reasoning, and code generation. YAYI 2 demonstrates a significant performance improvement over open-source models of similar size.
| Model | C-Eval(val) | MMLU | AGIEval | CMMLU | GAOKAO-Bench | GSM8K | MATH | BBH | HumanEval | MBPP |
|:----------|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| | 5-shot | 5-shot | 3/0-shot | 5-shot | 0-shot | 8/4-shot | 4-shot | 3-shot | 0-shot | 3-shot |
| MPT-30B | - | 46.9 | 33.8 | - | - | 15.2 | 3.1 | 38.0 | 25.0 | 32.8 |
| Falcon-40B | - | 55.4 | 37.0 | - | - | 19.6 | 5.5 | 37.1 | 0.6 | 29.8 |
| LLaMA2-34B | - | 62.6 | 43.4 | - | - | 42.2 | 6.2 | 44.1 | 22.6 | 33.0 |
| Baichuan2-13B | 59.0 | 59.5 | 37.4 | 61.3 | 45.6 | 52.6 | 10.1 | 49.0 | 17.1 | 30.8 |
| Qwen-14B | 71.7 | 67.9 | 51.9 | 70.2 | 62.5 | 61.6 | 25.2 | 53.7 | 32.3 | 39.8 |
| InternLM-20B | 58.8 | 62.1 | 44.6 | 59.0 | 45.5 | 52.6 | 7.9 | 52.5 | 25.6 | 35.6 |
| Aquila2-34B | 98.5 | 76.0 | 43.8 | 78.5 | 37.8 | 50.0 | 17.8 | 42.5 | 0.0 | 41.0 |
| Yi-34B | 81.8 | 76.3 | 56.5 | 82.6 | 68.3 | 67.6 | 15.9 | 66.4 | 26.2 | 38.2 |
| YAYI2-30B | 80.9 | 80.5 | 62.0 | 84.0 | 64.4 | 71.2 | 14.8 | 54.5 | 53.1 | 45.8 |

C-Eval(val), MMLU, AGIEval, CMMLU, and GAOKAO-Bench assess knowledge; GSM8K and MATH assess mathematical reasoning; BBH assesses logical reasoning; HumanEval and MBPP assess code generation.
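For a quick read of the scores above, the snippet below computes an unweighted mean of the ten benchmark results for the two strongest models. This is only a crude illustrative summary (shot settings and benchmark difficulty differ across columns), not an official aggregate metric.

```python
# Unweighted mean over the ten benchmark scores reported in the table above.
# Crude summary only: shot settings differ per benchmark, so the columns are
# not strictly comparable.
scores = {
    "Yi-34B":    [81.8, 76.3, 56.5, 82.6, 68.3, 67.6, 15.9, 66.4, 26.2, 38.2],
    "YAYI2-30B": [80.9, 80.5, 62.0, 84.0, 64.4, 71.2, 14.8, 54.5, 53.1, 45.8],
}
for name, vals in scores.items():
    print(f"{name}: {sum(vals) / len(vals):.2f}")
```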
We evaluated our model using the source code from the [OpenCompass Github repository](https://github.com/open-compass/opencompass). Where available, we report the results of comparative models from the [OpenCompass leaderboard](https://opencompass.org.cn/leaderboard-llm), with an evaluation reference date of Dec. 15, 2023. For models that have not yet been evaluated on OpenCompass, including MPT, Falcon, and LLaMA 2, we use the results reported in the [LLaMA 2](https://arxiv.org/abs/2307.09288) paper.
## Quick Start
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("wenge-research/yayi2-30b", device_map="auto", trust_remote_code=True)
>>> inputs = tokenizer('The winter in Beijing is', return_tensors='pt')
>>> inputs = inputs.to('cuda')
>>> pred = model.generate(
**inputs,
max_new_tokens=256,
eos_token_id=tokenizer.eos_token_id,
do_sample=True,
repetition_penalty=1.2,
temperature=0.4,
top_k=100,
top_p=0.8
)
>>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
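The `generate` call above combines several decoding controls, including nucleus (top-p) sampling via `top_p=0.8`. As a toy illustration of the idea (a sketch, not the actual transformers implementation), the function below keeps the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`, then renormalizes:

```python
# Toy sketch of nucleus (top-p) filtering: keep the smallest set of the
# most likely tokens whose cumulative probability reaches top_p, then
# renormalize so the kept probabilities sum to 1.
def top_p_filter(probs, top_p):
    ranked = sorted(probs, reverse=True)  # most likely tokens first
    kept, total = [], 0.0
    for p in ranked:
        kept.append(p)
        total += p
        if total >= top_p:  # nucleus reached: stop adding tokens
            break
    z = sum(kept)
    return [p / z for p in kept]

# With top_p=0.8, only the two most likely tokens survive and are renormalized.
print(top_p_filter([0.5, 0.3, 0.1, 0.06, 0.04], 0.8))
```

Lowering `top_p` narrows the candidate set and makes output more deterministic; `temperature` and `top_k` trade off diversity against reliability in a similar way.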
## License
The code in this project is open-sourced under the [Apache-2.0](LICENSE) license. Community use of the YAYI 2 models and data must adhere to the [YAYI 2 Community License](YAYI2_Community_License). If you intend to use the YAYI 2 series models or their derivatives for commercial purposes, please submit your commercial license application and registration information to yayi@wenge.com, as specified in the [YAYI 2 Commercial License](YAYI2_Commercial_License). Upon approval, YAYI will grant you a commercial copyright license; please comply with the commercial restrictions outlined in the agreement.