Commit 0056707
Parent(s): 22df017

Upload 6 files

- .gitattributes +1 -0
- README-EN.md +64 -0
- README-ZH.md +64 -0
- eval.py +72 -0
- test_public.jsonl +0 -0
- train.jsonl +3 -0
- valid.jsonl +0 -0
.gitattributes
CHANGED

@@ -52,3 +52,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+train.jsonl filter=lfs diff=lfs merge=lfs -text
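The added line tells Git to route `train.jsonl` through the LFS clean/smudge filters instead of storing it in the repository directly. As an illustration (not part of this repo), a minimal sketch of parsing one `.gitattributes` line into its path pattern and attribute map:

```python
def parse_gitattributes_line(line):
    """Split a .gitattributes line into (pattern, {attribute: value})."""
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        if "=" in attr:
            key, value = attr.split("=", 1)  # e.g. "filter=lfs"
            parsed[key] = value
        elif attr.startswith("-"):
            parsed[attr[1:]] = False  # "-text": attribute explicitly unset
        else:
            parsed[attr] = True  # bare attribute, e.g. "text"
    return pattern, parsed


pattern, attrs = parse_gitattributes_line(
    "train.jsonl filter=lfs diff=lfs merge=lfs -text"
)
```

Here `-text` disables line-ending normalization for the LFS-tracked file, which is why every LFS rule in the diff carries it.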
README-EN.md
ADDED
@@ -0,0 +1,64 @@
# LCSTS

### Introduction

LCSTS is a Large-scale Chinese Short Text Summarization dataset for summary generation, constructed from the Chinese microblogging website Sina Weibo and collected by Harbin Institute of Technology. The corpus consists of over 2 million real Chinese short texts, each with a short summary written by the author of the text, plus 10,666 summaries labeled manually.

### Paper

[LCSTS: A Large Scale Chinese Short Text Summarization Dataset](https://www.aclweb.org/anthology/D15-1229.pdf). EMNLP 2015.

### Data Size

Training set: 2,400,591; validation set: 8,685; test set: 725.

### Data Format

Each instance is composed of a human-labeled summary quality score (human_label, an integer), an input text (text, a string), and an output summary (summary, a string).

### Example

```
{
    "human_label": 5,
    "summary": "林志颖公司疑涉虚假营销无厂房无研发",
    "text": "日前,方舟子发文直指林志颖旗下爱碧丽推销假保健品,引起哗然。调查发现,爱碧丽没有自己的生产加工厂。其胶原蛋白饮品无核心研发,全部代工生产。号称有“逆生长”功效的爱碧丽“梦幻奇迹限量组”售价高达1080元,实际成本仅为每瓶4元!"
}
```

- "human_label" (`int`): the human-labeled summary quality score. Only the validation and test sets carry this label, and the dataset includes only instances scored 3, 4, or 5; instances scored 1 or 2 are excluded.
- "text" (`str`): the input text.
- "summary" (`str`): the output summary.

### Evaluation Code

The prediction file must match the format expected by the evaluation code.

Dependencies: rouge==1.0.0, jieba==0.42.1

```shell
python eval.py prediction_file test_private_file
```

The evaluation metrics are ROUGE-1, ROUGE-2, and ROUGE-L; the output is a dictionary:

```
return {
    "rouge-1-f": _,
    "rouge-1-p": _,
    "rouge-1-r": _,
    "rouge-2-f": _,
    "rouge-2-p": _,
    "rouge-2-r": _,
    "rouge-l-f": _,
    "rouge-l-p": _,
    "rouge-l-r": _}
```

### Author List

Baotian Hu, Qingcai Chen, Fangze Zhu

### Institutions

Harbin Institute of Technology

### Citation

```
@inproceedings{hu2015lcsts,
  title={LCSTS: A Large Scale Chinese Short Text Summarization Dataset},
  author={Hu, Baotian and Chen, Qingcai and Zhu, Fangze},
  booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  pages={1967--1972},
  year={2015}
}
```
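Each line of the `*.jsonl` data files is one standalone JSON object with the fields described above. A minimal sketch of loading such records (field names from the README; the sample line is a short stand-in for a real instance):

```python
import json


def load_jsonl(lines):
    """Parse an iterable of JSONL lines into a list of dict records."""
    return [json.loads(line) for line in lines if line.strip()]


# Stand-in for one line of valid.jsonl / test_public.jsonl.
sample = ['{"human_label": 5, "summary": "s", "text": "t"}']
records = load_jsonl(sample)
```

In practice the iterable would be an open file handle over `train.jsonl` or `valid.jsonl` rather than an in-memory list.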
README-ZH.md
ADDED
@@ -0,0 +1,64 @@
# LCSTS

### Introduction

The LCSTS dataset is a large-scale, high-quality Chinese short text summarization dataset collected by Harbin Institute of Technology for news summary generation. Drawn from the microblogging website Sina Weibo, it contains 2 million real Chinese short texts, each with a summary given by the author of the text, plus 10,666 manually labeled summaries.

### Paper

[LCSTS: A Large Scale Chinese Short Text Summarization Dataset](https://www.aclweb.org/anthology/D15-1229.pdf). EMNLP 2015.

### Data Size

Training set: 2,400,591 summaries; validation set: 8,685 summaries; test set: 725 summaries.

### Data Format

Each instance contains a human-labeled summary quality score (human_label, stored as an integer), an input text (text, stored as a string), and an output summary (summary, stored as a string).

### Example

```
{
    "human_label": 5,
    "summary": "林志颖公司疑涉虚假营销无厂房无研发",
    "text": "日前,方舟子发文直指林志颖旗下爱碧丽推销假保健品,引起哗然。调查发现,爱碧丽没有自己的生产加工厂。其胶原蛋白饮品无核心研发,全部代工生产。号称有“逆生长”功效的爱碧丽“梦幻奇迹限量组”售价高达1080元,实际成本仅为每瓶4元!"
}
```

- "human_label" (`int`): the human-labeled summary quality score. Only the validation and test sets carry this label, and the dataset includes only instances scored 3, 4, or 5; instances scored 1 or 2 are excluded.
- "text" (`str`): the input text.
- "summary" (`str`): the expected output summary.

### Evaluation Code

The prediction file must match the format expected by the evaluation code.

Submission file name: LCSTS.jsonl

Dependencies: rouge==1.0.0, jieba==0.42.1

```shell
python eval.py prediction_file test_private_file
```

The evaluation metrics are ROUGE-1, ROUGE-2, and ROUGE-L; the output is a dictionary:

```
return {
    "rouge-1-f": _,
    "rouge-1-p": _,
    "rouge-1-r": _,
    "rouge-2-f": _,
    "rouge-2-p": _,
    "rouge-2-r": _,
    "rouge-l-f": _,
    "rouge-l-p": _,
    "rouge-l-r": _}
```

### Author List

Baotian Hu, Qingcai Chen, Fangze Zhu

### Institutions

Harbin Institute of Technology

### Citation

```
@inproceedings{hu2015lcsts,
  title={LCSTS: A Large Scale Chinese Short Text Summarization Dataset},
  author={Hu, Baotian and Chen, Qingcai and Zhu, Fangze},
  booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  pages={1967--1972},
  year={2015}
}
```
eval.py
ADDED
@@ -0,0 +1,72 @@
import json
import sys

from rouge import Rouge
import numpy as np
import jieba


def rouge_score(data):
    """
    Compute ROUGE scores.

    Args:
        data: list of dicts, each with a "candidate" string and a
            "reference" string (or list of reference strings).
    Returns:
        res (dict): mean ROUGE score for each metric/component pair.
    """
    rouge_name = ["rouge-1", "rouge-2", "rouge-l"]
    item_name = ["f", "p", "r"]

    res = {}
    for name1 in rouge_name:
        for name2 in item_name:
            res["%s-%s" % (name1, name2)] = []
    for tmp_data in data:
        origin_candidate = tmp_data["candidate"]
        origin_reference = tmp_data["reference"]
        assert isinstance(origin_candidate, str)
        if not isinstance(origin_reference, list):
            origin_reference = [origin_reference]

        tmp_res = []
        for r in origin_reference:
            tmp_res.append(Rouge().get_scores(refs=r, hyps=origin_candidate)[0])

        # Keep the best score over the available references.
        for name1 in rouge_name:
            for name2 in item_name:
                res["%s-%s" % (name1, name2)].append(max([tr[name1][name2] for tr in tmp_res]))

    for name1 in rouge_name:
        for name2 in item_name:
            res["%s-%s" % (name1, name2)] = np.mean(res["%s-%s" % (name1, name2)])
    return res


def load_file(filename):
    """Load a JSONL file into a list of dicts."""
    data = []
    with open(filename, "r") as f:
        for line in f:
            data.append(json.loads(line))
    return data


def proline(line):
    """Re-segment the text with jieba and join the tokens with spaces."""
    return " ".join(jieba.cut("".join(line.strip().split())))


def compute(golden_file, pred_file):
    golden_data = load_file(golden_file)
    pred_data = load_file(pred_file)

    if len(golden_data) != len(pred_data):
        raise RuntimeError("Wrong Predictions")

    eval_data = [
        {"reference": proline(g["summary"]), "candidate": proline(p["summary"])}
        for g, p in zip(golden_data, pred_data)
    ]
    return rouge_score(eval_data)


def main():
    argv = sys.argv
    print("Prediction file: {}, test file: {}".format(argv[1], argv[2]))
    print(compute(argv[2], argv[1]))


if __name__ == "__main__":
    main()
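`rouge_score` above delegates the actual scoring to the `rouge` package. For intuition about what those numbers mean, a stdlib-only sketch of the ROUGE-1 F-measure on pre-tokenized text (an illustration of the metric, not the packaged scorer):

```python
from collections import Counter


def rouge1_f(reference_tokens, candidate_tokens):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    # Clipped unigram overlap between candidate and reference.
    overlap = sum((Counter(reference_tokens) & Counter(candidate_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(candidate_tokens)
    recall = overlap / len(reference_tokens)
    return 2 * precision * recall / (precision + recall)


score = rouge1_f("the cat sat on the mat".split(), "the cat on mat".split())
```

This is why `proline` matters: ROUGE counts token overlap, so Chinese text must first be segmented (here via jieba) into space-separated tokens before scoring.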
test_public.jsonl
ADDED
The diff for this file is too large to render.
train.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d48f7b570423997eb995288f265359c2a608a5aa89b9c68f23ab9ac4074fb56
size 903291188
valid.jsonl
ADDED
The diff for this file is too large to render.