wenge-research committed
Commit 2e50631 · Parent: 8715757
Update README.md

README.md CHANGED
@@ -19,13 +19,13 @@ license: other
 ## 介绍/Introduction
 
 YAYI 2 是中科闻歌研发的开源大语言模型,包括 Base 和 Chat 版本,参数规模为 30B。YAYI2-30B 是基于 Transformer 的大语言模型,采用了 2.65 万亿 Tokens 的高质量、多语言语料进行预训练。针对通用和特定领域的应用场景,我们采用了百万级指令进行微调,同时借助人类反馈强化学习方法,以更好地使模型与人类价值观对齐。
 
 
-本次开源的模型为 YAYI2-30B Base 模型。如果您想了解更多关于 YAYI 2 模型的细节,我们建议您参阅 [GitHub](https://github.com/wenge-research/YAYI2)
+本次开源的模型为 YAYI2-30B Base 模型。如果您想了解更多关于 YAYI 2 模型的细节,我们建议您参阅 [GitHub](https://github.com/wenge-research/YAYI2) 仓库。更多技术细节,欢迎阅读我们的技术报告🔥[YAYI 2: Multilingual Open-Source Large Language Models](https://arxiv.org/abs/2312.14862)。
 
 
 
 YAYI 2 is a collection of open-source large language models launched by Wenge Technology. YAYI2-30B is a Transformer-based large language model, and has been pretrained for 2.65 trillion tokens of multilingual data with high quality. The base model is aligned with human values through supervised fine-tuning with millions of instructions and reinforcement learning from human feedback (RLHF).
 
 
-We opensource the pre-trained language model in this release, namely **YAYI2-30B**. For more details about the YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository.
+We opensource the pre-trained language model in this release, namely **YAYI2-30B**. For more details about the YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository. For more technical details, please read our technical report 🔥YAYI 2: Multilingual Open-Source Large Language Models.
 
 
 ## 模型细节/Model Details