YAYI 2
Introduction
YAYI 2 is a collection of open-source large language models developed by Wenge Technology, comprising Base and Chat versions at the 30B parameter scale. YAYI2-30B is a Transformer-based large language model pretrained on 2.65 trillion tokens of high-quality, multilingual data. For both general and domain-specific applications, the base model is aligned with human values through supervised fine-tuning on millions of instructions and reinforcement learning from human feedback (RLHF).
This release open-sources the pretrained base model, YAYI2-30B. For more details about YAYI 2, please refer to our GitHub repository. For more technical details, please read our technical report 🔥YAYI 2: Multilingual Open-Source Large Language Models.
Model Details
| Hyperparameter | Value |
|---|---|
| n_layers | 64 |
| n_heads | 64 |
| hidden_size | 7168 |
| vocab_size | 81920 |
| sequence length | 4096 |
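These values can also be read directly from the released checkpoint's configuration. A minimal sketch, not part of the original card; the attribute names assume the standard Hugging Face config conventions and may differ in the model's custom code:

```python
# Hedged sketch: inspect the configuration shipped with the YAYI2-30B checkpoint.
# Attribute names (num_hidden_layers, num_attention_heads, ...) are assumed to
# follow the usual transformers conventions; print the full config if they differ.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
print(config.num_hidden_layers)    # expected: 64
print(config.num_attention_heads)  # expected: 64
print(config.hidden_size)          # expected: 7168
print(config.vocab_size)           # expected: 81920
```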
Requirements
Python 3.8 or above
PyTorch 2.0.1 or above
CUDA 11.7 or above is recommended
Running YAYI2-30B in bf16/fp16 requires at least 80 GB of GPU memory (e.g., 1xA100-80GB); see the loading sketch below
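To stay within that memory budget, the weights should be loaded in half precision. A minimal sketch, assuming the standard transformers loading path (the explicit `torch_dtype` choice is ours, not prescribed by the card):

```python
# Hedged sketch: load YAYI2-30B in bf16 so the weights fit the ~80 GB budget
# noted above (~60 GB of weights at 2 bytes/parameter, plus activations).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "wenge-research/yayi2-30b",
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs fp32; use torch.float16 if bf16 is unsupported
    device_map="auto",           # spread layers across available GPUs automatically
    trust_remote_code=True,      # required: the checkpoint ships custom model code
)
```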
Quick Start
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> # trust_remote_code is required because the checkpoint ships custom model code
>>> tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("wenge-research/yayi2-30b", device_map="auto", trust_remote_code=True)
>>> inputs = tokenizer('The winter in Beijing is', return_tensors='pt')
>>> inputs = inputs.to('cuda')
>>> pred = model.generate(
...     **inputs,
...     max_new_tokens=256,
...     eos_token_id=tokenizer.eos_token_id,
...     do_sample=True,
...     repetition_penalty=1.2,
...     temperature=0.4,
...     top_k=100,
...     top_p=0.8
... )
>>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
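For interactive use, the completion can also be streamed token by token rather than decoded at the end. A hedged sketch, not part of the original card, using transformers' TextStreamer with the same sampling settings as above:

```python
# Hedged sketch: stream the generated text to stdout as tokens arrive,
# reusing the tokenizer, model, and inputs from the Quick Start snippet.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    repetition_penalty=1.2,
    temperature=0.4,
    top_k=100,
    top_p=0.8,
    streamer=streamer,  # prints decoded tokens incrementally instead of returning only ids
)
```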
Evaluation
We evaluate our model on standard benchmarks, including C-Eval, MMLU, CMMLU, AGIEval, GAOKAO-Bench, GSM8K, MATH, BBH, HumanEval, and MBPP, assessing performance in language understanding, subject knowledge, mathematical reasoning, logical reasoning, and code generation. YAYI 2 demonstrates significant performance improvements over open-source models of comparable size.
The benchmarks are grouped as follows: knowledge (C-Eval, MMLU, AGIEval, CMMLU, GAOKAO-Bench), math (GSM8K, MATH), logical reasoning (BBH), and code (HumanEval, MBPP).

| Model | C-Eval(val) 5-shot | MMLU 5-shot | AGIEval 3/0-shot | CMMLU 5-shot | GAOKAO-Bench 0-shot | GSM8K 8/4-shot | MATH 4-shot | BBH 3-shot | HumanEval 0-shot | MBPP 3-shot |
|---|---|---|---|---|---|---|---|---|---|---|
| MPT-30B | - | 46.9 | 33.8 | - | - | 15.2 | 3.1 | 38.0 | 25.0 | 32.8 |
| Falcon-40B | - | 55.4 | 37.0 | - | - | 19.6 | 5.5 | 37.1 | 0.6 | 29.8 |
| LLaMA2-34B | - | 62.6 | 43.4 | - | - | 42.2 | 6.2 | 44.1 | 22.6 | 33.0 |
| Baichuan2-13B | 59.0 | 59.5 | 37.4 | 61.3 | 45.6 | 52.6 | 10.1 | 49.0 | 17.1 | 30.8 |
| Qwen-14B | 71.7 | 67.9 | 51.9 | 70.2 | 62.5 | 61.6 | 25.2 | 53.7 | 32.3 | 39.8 |
| InternLM-20B | 58.8 | 62.1 | 44.6 | 59.0 | 45.5 | 52.6 | 7.9 | 52.5 | 25.6 | 35.6 |
| Aquila2-34B | 98.5 | 76.0 | 43.8 | 78.5 | 37.8 | 50.0 | 17.8 | 42.5 | 0.0 | 41.0 |
| Yi-34B | 81.8 | 76.3 | 56.5 | 82.6 | 68.3 | 67.6 | 15.9 | 66.4 | 26.2 | 38.2 |
| YAYI2-30B | 80.9 | 80.5 | 62.0 | 84.0 | 64.4 | 71.2 | 14.8 | 54.5 | 53.1 | 45.8 |
We evaluate our model using the source code from the OpenCompass GitHub repository. Where available, we report the comparison models' results from the OpenCompass leaderboard, with a reference date of December 15, 2023. For models that had not yet been evaluated on OpenCompass, including MPT, Falcon, and LLaMA 2, we use the results reported in the LLaMA 2 paper.
License
The code in this project is open-sourced under the Apache-2.0 license. Community use of the YAYI 2 models and data must adhere to the YAYI 2 Model Community License Agreement. If you intend to use the YAYI 2 series models or their derivatives for commercial purposes, please complete the YAYI 2 Model Commercial Registration Information form and send it to yayi@wenge.com. We will review your application within 3 business days of receiving the email; once it is approved, you will receive a commercial license. Please strictly comply with the YAYI 2 Model Commercial License Agreement when using the models. Thank you for your cooperation!
Citation
If you use our model in your work, please cite our paper.
```bibtex
@article{luo2023yayi,
  author  = {Yin Luo and Qingchao Kong and Nan Xu and others},
  title   = {YAYI 2: Multilingual Open Source Large Language Models},
  journal = {arXiv preprint arXiv:2312.14862},
  url     = {https://arxiv.org/abs/2312.14862},
  year    = {2023}
}
```