hao wang committed on
Commit
3f68aa8
1 Parent(s): 658479a

add new model

Files changed (5)
  1. README.md +6 -6
  2. config.json +15 -0
  3. pytorch_model.bin +3 -0
  4. spiece.model +3 -0
  5. vocab.txt +0 -0
README.md CHANGED
@@ -4,28 +4,28 @@ language:
  license: apache-2.0
  ---

- # Randeng-TransformerXL-1.1B-Paraphrasing
+ # Randeng-Transformer-1.1B-Denoise

  - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
  - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)

  ## 简介 Brief Introduction

- Implements Chinese sentence paraphrasing based on the Transformer-XL architecture.
+ A Chinese Transformer-XL fine-tuned with a grammatical error correction objective.

- Paraphrase Chinese sentences based on Transformer-XL.
+ Chinese Transformer-XL with a denoising task as the fine-tuning objective.

  ## 模型分类 Model Taxonomy

  | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
  | :----: | :----: | :----: | :----: | :----: | :----: |
- | 通用 General | 自然语言转换 NLT | 燃灯 Randeng | Transformer | 1.1B | 中文-改写 Chinese-Paraphrasing |
+ | 通用 General | 自然语言转换 NLT | 燃灯 Randeng | Transformer | 1.1B | 中文-去噪 Chinese-Denoise |

  ## 模型信息 Model Information

- A 1.1B-parameter Transformer-XL is pre-trained on the Wudao corpus (280G version) and then fine-tuned on an annotated paraphrase dataset.
+ We first pre-trained a Transformer-XL on the Wudao corpus (180G version), and then fine-tuned it on a grammatical-error dataset that we built ourselves. The denoising task reconstructs fluent, clean text from noisy input produced by **random insertion/swap/deletion/replacement/sentence reordering**.

- The 1.1B Transformer-XL model is pre-trained on the Wudao corpus (280G version) and then fine-tuned on an annotated paraphrase-pair dataset.
+ We first pre-trained Transformer-XL on the Wudao corpus (180G version), and then fine-tuned it on a denoising dataset that we built ourselves. The denoising task is to reconstruct fluent and clean text from a noisy input that includes **random insertion/swap/deletion/replacement/sentence reordering**.

  ## 使用 Usage
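The corruption scheme named in the model card (random insertion/swap/deletion/replacement plus sentence reordering) can be sketched in a few lines. This is a hypothetical illustration of the general technique, not code from the Fengshenbang-LM repository; function names and probabilities are my own:

```python
import random

def add_noise(tokens, vocab, p=0.1, seed=None):
    """Corrupt a token list with random insert/swap/delete/replace edits.

    Hypothetical sketch of the denoising-style corruption described in the
    model card; the real training pipeline may differ.
    """
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p:                  # delete: drop the token
            continue
        elif r < 2 * p:            # replace: substitute a random vocab token
            out.append(rng.choice(vocab))
        elif r < 3 * p:            # insert: keep token, add a random extra one
            out.append(tok)
            out.append(rng.choice(vocab))
        else:                      # keep the token unchanged
            out.append(tok)
    # swap: occasionally exchange one adjacent pair
    if len(out) > 1 and rng.random() < p:
        i = rng.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

def reorder_sentences(sentences, seed=None):
    """Sentence reordering: shuffle sentence order within a document."""
    rng = random.Random(seed)
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    return shuffled
```

The model is then trained to map the noisy output of such functions back to the original clean text.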
 
config.json ADDED
@@ -0,0 +1,15 @@
+ {
+ "num_layers":32,
+ "vocab_size":50048,
+ "hidden_size":1600,
+ "num_attention_heads":25,
+ "embedding_dropout_prob":0.1,
+ "attention_dropout_prob":0.1,
+ "output_dropout_prob":0.1,
+ "max_sequence_length":512,
+ "max_memory_length":512,
+ "checkpoint_activations":false,
+ "checkpoint_num_layers":1,
+ "parallel_output":true,
+ "relative_encoding":true
+ }
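As a quick sanity check, these hyperparameters fit together: `hidden_size` must divide evenly across `num_attention_heads`. A small sketch using a subset of the values copied from the config above (not code from the repository):

```python
import json

# Subset of the config.json added in this commit.
config = json.loads("""{
    "num_layers": 32,
    "vocab_size": 50048,
    "hidden_size": 1600,
    "num_attention_heads": 25,
    "max_sequence_length": 512,
    "max_memory_length": 512
}""")

# 1600 hidden dimensions split over 25 heads gives 64 dimensions per head.
assert config["hidden_size"] % config["num_attention_heads"] == 0
head_dim = config["hidden_size"] // config["num_attention_heads"]
print("per-head dimension:", head_dim)  # per-head dimension: 64
```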
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40bb2c508f87756a715bcbf417dcec56c08c2186b431ba109abf61e5fdf0c45c
+ size 2291627287
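The weights are stored via Git LFS, so what is committed is only a small pointer file recording the object's SHA-256 and byte size; the actual binary lives in LFS storage. A minimal sketch of parsing such a pointer (values copied from the diff above):

```python
# A Git LFS pointer file, as committed for pytorch_model.bin.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:40bb2c508f87756a715bcbf417dcec56c08c2186b431ba109abf61e5fdf0c45c
size 2291627287
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)
size_gib = int(fields["size"]) / 2**30
print(f"{algo} object, {size_gib:.2f} GiB")  # sha256 object, 2.13 GiB
```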
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ea6f4164152bc58d23e24e48f7bf4187aad72a32e97ec4b3acc832fe183cbc2
+ size 1021864
vocab.txt ADDED
The diff for this file is too large to render.