TurboPascal committed
Commit: 85418ab
Parent: 424d5f1

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: text-generation
 Llama-zh-base is an open-source project that offers a complete training pipeline for building Chinese large language models, ranging from dataset preparation to tokenization, pre-training, prompt tuning, and the reinforcement learning technique RLHF.
 This is the Llama-zh-base model trained from scratch using the Chinese pretraining corpus in this project. The number of parameters is about 0.8B.
 
-A Llama model pretrained from scratch on 33 GB of Chinese corpus, intended as a usable small-to-mid-sized base model. The embedding layer and tokenizer were rebuilt. Not yet instruction-tuned. About 0.8B parameters.
+A Llama model pretrained from scratch on 120 GB of Chinese corpus, intended as a usable small-to-mid-sized base model. The embedding layer and tokenizer were rebuilt. Not yet instruction-tuned. About 0.8B parameters.
 
 Project GitHub link: [Repo Links](https://github.com/enze5088/Chatterbox/blob/main/docs/model/llama-zh-base.md)
 
@@ -34,7 +34,7 @@ Notes:
 
 ## Data
 
-The pretraining stage uses open-source data together with data crawled for this project: about 33 GB of Chinese pretraining data, the MC4-zh dataset, and a code dataset. After cleaning and filtering, roughly 120 GB of data was used to train for 1 epoch. Not instruction-tuned.
+The pretraining stage uses open-source data together with data crawled for this project: about 33 GB of Chinese pretraining data, the MC4-zh dataset, and a code dataset. After cleaning and filtering, roughly 120 GB of data was used to train for 1 epoch with an initial learning rate of 1e-4. Not instruction-tuned.
 
 ### Chinese Pretraining Data
 
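The changed lines above mention a model with a rebuilt embedding layer and tokenizer at roughly 0.8B parameters. As a minimal sketch of what that setup can look like with Hugging Face `transformers` (the tokenizer path and layer sizes below are illustrative assumptions, not the project's published configuration):

```python
# Sketch of the "rebuilt embedding layer and tokenizer" step: instantiate a
# fresh ~0.8B Llama whose embedding table is sized to a new Chinese vocabulary.
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizerFast

# Assumed: a Chinese BPE tokenizer already trained and saved locally
# (placeholder path, not the project's actual artifact).
tokenizer = LlamaTokenizerFast.from_pretrained("./zh_tokenizer")

config = LlamaConfig(
    vocab_size=len(tokenizer),   # embedding layer rebuilt for the new vocabulary
    hidden_size=1536,            # illustrative sizes that land near 0.8B params
    intermediate_size=4096,
    num_hidden_layers=24,
    num_attention_heads=16,
)
model = LlamaForCausalLM(config)  # random init: trained from scratch, no Llama weights
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```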
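The updated data note pins the schedule to 1 epoch over the cleaned ~120 GB corpus at an initial learning rate of 1e-4. A hedged sketch of those two settings as `TrainingArguments`; every other field is an assumption:

```python
# Only num_train_epochs and learning_rate come from the README; batch size,
# precision, scheduler, and paths are assumptions for illustration.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-zh-base-ckpt",   # placeholder
    num_train_epochs=1,                # README: single pass over the corpus
    learning_rate=1e-4,                # README: initial learning rate
    lr_scheduler_type="cosine",        # assumption; README gives only the initial LR
    per_device_train_batch_size=8,     # assumption
    bf16=True,                         # assumption
)
```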