jay68 committed
Commit df71b22
Parent(s): 2e9e116

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -55,7 +55,7 @@ The code of Chinese data generation and other detailed information can be found
 
 We trained models using datasets of different sizes (200,000, 600,000, 1,000,000, and 2,000,000 samples) for instruction learning, and we obtained different model versions as shown below:
 | Datasize| 200,000 | 600,000 | 1,000,000 | 2,000,000 |
-| ----- | ----- | ----- | ----- |
+| ----- | ----- | ----- | ----- | ----- |
 | Finetuned Model | [BELLE-7B-0.2M](https://huggingface.co/BelleGroup/BELLE-7B-0.2M) | [BELLE-7B-0.6M](https://huggingface.co/BelleGroup/BELLE-7B-0.6M) | [BELLE-7B-1M](https://huggingface.co/BelleGroup/BELLE-7B-1M) | [BELLE-7B-2M](https://huggingface.co/BelleGroup/BELLE-7B-2M) |
 
 ## Training hyper-parameters
@@ -130,7 +130,7 @@ The BELLE model is based on Bloomz-7b1-mt and trained on 2.0M Chinese data combined with Stanf
 
 We trained models on instruction-learning datasets of different sizes (200,000, 600,000, 1,000,000, and 2,000,000 samples), and obtained the following model versions:
 | Datasize| 200,000 | 600,000 | 1,000,000 | 2,000,000 |
-| ----- | ----- | ----- | ----- |
+| ----- | ----- | ----- | ----- | ----- |
 | Finetuned Model | [BELLE-7B-0.2M](https://huggingface.co/BelleGroup/BELLE-7B-0.2M) | [BELLE-7B-0.6M](https://huggingface.co/BelleGroup/BELLE-7B-0.6M) | [BELLE-7B-1M](https://huggingface.co/BelleGroup/BELLE-7B-1M) | [BELLE-7B-2M](https://huggingface.co/BelleGroup/BELLE-7B-2M)
 
 ## Model training hyper-parameters
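
For reference, a minimal sketch of how one of the checkpoints listed in the table above might be loaded with the Hugging Face transformers library; the `Human:`/`Assistant:` prompt template is an assumption about BELLE's instruction format and is not part of this commit.

```python
# Sketch: load one of the listed BELLE checkpoints and run a single generation.
# The prompt template below is an assumed instruction format, not taken from this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BelleGroup/BELLE-7B-2M"  # any of the four listed checkpoints loads the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Human: Write a short poem about spring\n\nAssistant: "  # assumed format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```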