# LCSTS

### Introduction
LCSTS is a large-scale Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo and collected by Harbin Institute of Technology. The corpus consists of over 2 million real Chinese short texts, each with a short summary written by the text's author, as well as 10,666 short summaries annotated manually.

### Paper
[LCSTS: A Large Scale Chinese Short Text Summarization Dataset](https://www.aclweb.org/anthology/D15-1229.pdf). EMNLP 2015.

### Data Size
Training set: 2,400,591; validation set: 8,685; test set: 725.

### Data Format
Each instance is composed of a human-labeled summary quality score (`human_label`, an integer), an input text (`text`, a string), and an output summary (`summary`, a string).

### Example
```
{
  "human_label": 5,
  "summary": "林志颖公司疑涉虚假营销无厂房无研发",
  "text": "日前,方舟子发文直指林志颖旗下爱碧丽推销假保健品,引起哗然。调查发现,爱碧丽没有自己的生产加工厂。其胶原蛋白饮品无核心研发,全部代工生产。号称有“逆生长”功效的爱碧丽“梦幻奇迹限量组”售价高达1080元,实际成本仅为每瓶4元!"
}
```
- "human_label" (`int`): the human-labeled summary quality score. Only the validation and test sets carry this label, and the released data include only instances scored 3, 4, or 5; instances scored 1 or 2 are excluded.
- "text" (`str`): the input text.
- "summary" (`str`): the output summary.

### Evaluation Code
The prediction file must be consistent with the format expected by the evaluation code.

Dependency packages: rouge==1.0.0, jieba==0.42.1

```shell
python eval.py prediction_file test_private_file
```

The evaluation metrics are ROUGE-1, ROUGE-2, and ROUGE-L, and the output is in dictionary format (an unofficial sketch of how such scores can be computed with the listed dependencies appears at the end of this README):

```
return {
    "rouge-1-f": _, "rouge-1-p": _, "rouge-1-r": _,
    "rouge-2-f": _, "rouge-2-p": _, "rouge-2-r": _,
    "rouge-l-f": _, "rouge-l-p": _, "rouge-l-r": _
}
```

### Author List
Baotian Hu, Qingcai Chen, Fangze Zhu

### Institutions
Harbin Institute of Technology

### Citation
```
@inproceedings{hu2015lcsts,
  title={LCSTS: A Large Scale Chinese Short Text Summarization Dataset},
  author={Hu, Baotian and Chen, Qingcai and Zhu, Fangze},
  booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  pages={1967--1972},
  year={2015}
}
```
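
### Evaluation Sketch (unofficial)
The official `eval.py` is not reproduced here. The following is a minimal sketch of how ROUGE-1/2/L scores in the dictionary format above could be computed with the listed dependencies (rouge==1.0.0, jieba==0.42.1). It assumes, hypothetically, that the prediction file and the test reference file each contain one summary per line; the real evaluation script may expect a different file layout.

```python
# rouge_sketch.py -- unofficial sketch, NOT the dataset's eval.py.
# Assumes one summary per line in both input files.
import sys

import jieba
from rouge import Rouge


def tokenize(line: str) -> str:
    # The rouge package works on whitespace-separated tokens, so segment
    # the Chinese text with jieba and re-join the tokens with spaces.
    return " ".join(jieba.cut(line.strip()))


def main(prediction_file: str, reference_file: str) -> None:
    with open(prediction_file, encoding="utf-8") as f:
        hyps = [tokenize(line) for line in f if line.strip()]
    with open(reference_file, encoding="utf-8") as f:
        refs = [tokenize(line) for line in f if line.strip()]

    # Average ROUGE-1, ROUGE-2 and ROUGE-L (precision, recall, F1) over all pairs.
    scores = Rouge().get_scores(hyps, refs, avg=True)

    # Flatten into the key format shown above, e.g. "rouge-1-f".
    flat = {f"{name}-{key}": value
            for name, prf in scores.items()
            for key, value in prf.items()}
    print(flat)


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Usage would mirror the official command, e.g. `python rouge_sketch.py prediction_file test_private_file`, but results should only be treated as indicative unless they are confirmed against the provided `eval.py`.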