A dataset for benchmarking keyphrase extraction and generation techniques on abstracts of English scientific papers. For more details about the dataset, please refer to the original paper - [http://memray.me/uploads/acl17-keyphrase-generation.pdf](http://memray.me/uploads/acl17-keyphrase-generation.pdf).

Data source - [https://github.com/memray/seq2seq-keyphrase](https://github.com/memray/seq2seq-keyphrase)

## Dataset Summary

KP20k is a large-scale dataset of computer science paper abstracts paired with author-assigned keyphrases, introduced by Meng et al. (2017) for training and evaluating keyphrase extraction and generation models. The keyphrases for each document are divided into those that appear verbatim in the text (extractive/present) and those that do not (abstractive/absent).

## Dataset Structure

### Data Fields

- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase (see the sketch below).
- **extractive_keyphrases**: list of all keyphrases that appear verbatim in the document (present keyphrases).
- **abstractive_keyphrases**: list of all keyphrases that do not appear verbatim in the document (absent keyphrases).
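
To make the field layout concrete, here is a minimal, hand-constructed datapoint in the shape described above (not an actual record from KP20k):

```python
# Hypothetical datapoint, constructed by hand for illustration only,
# not an actual record from KP20k.
sample = {
    "id": "0",
    "document": ["we", "propose", "a", "method", "for", "graph", "clustering", "."],
    # "graph clustering" appears in the text, so its two words are tagged B, I
    "doc_bio_tags": ["O", "O", "O", "O", "O", "B", "I", "O"],
    # present in the document
    "extractive_keyphrases": ["graph clustering"],
    # absent from the document
    "abstractive_keyphrases": ["community detection"],
}
```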

### Data Splits

| Split      | No. of datapoints |
|------------|-------------------|
| Train      | 530,809           |
| Test       | 20,000            |
| Validation | 20,000            |
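
If you want to verify these counts locally, a quick sketch (using the `raw` configuration shown in the Usage section below):

```python
from datasets import load_dataset

# load the full dataset and print the size of each split
dataset = load_dataset("midas/kp20k", "raw")
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))
```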

## Usage

### Full Dataset

```python
from datasets import load_dataset

# get the entire dataset
dataset = load_dataset("midas/kp20k", "raw")

# sample from the train split
print("Sample from training dataset split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", list(train_sample.keys()))
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation dataset split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", list(validation_sample.keys()))
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", list(test_sample.keys()))
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

**Output**

```bash

```
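
Since the train split is large, it may be convenient to stream it instead of downloading everything up front. A sketch, assuming this dataset supports the `datasets` streaming mode:

```python
from datasets import load_dataset

# stream the train split without downloading the whole dataset first
stream = load_dataset("midas/kp20k", "raw", split="train", streaming=True)
first = next(iter(stream))
print(first["extractive_keyphrases"])
```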

### Keyphrase Extraction

```python
from datasets import load_dataset

# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/kp20k", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", list(train_sample.keys()))
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", list(validation_sample.keys()))
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", list(test_sample.keys()))
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
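
For extraction models it is often useful to turn the gold `doc_bio_tags` back into phrase strings. A minimal sketch (the helper `decode_bio` is our own, not part of the dataset):

```python
def decode_bio(words, tags):
    """Collect the phrases marked with B/I tags into whitespace-joined strings."""
    phrases, current = [], []
    for word, tag in zip(words, tags):
        if tag == "B":                 # a new phrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [word]
        elif tag == "I" and current:   # continue the open phrase
            current.append(word)
        else:                          # "O" closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

words = ["we", "study", "graph", "clustering", "here"]
tags = ["O", "O", "B", "I", "O"]
print(decode_bio(words, tags))  # ['graph clustering']
```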

### Keyphrase Generation

```python
from datasets import load_dataset

# get the dataset only for keyphrase generation
dataset = load_dataset("midas/kp20k", "generation")

print("Samples for Keyphrase Generation")

# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", list(train_sample.keys()))
print("Tokenized Document: ", train_sample["document"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", list(validation_sample.keys()))
print("Tokenized Document: ", validation_sample["document"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", list(test_sample.keys()))
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
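
A common way to use the generation configuration is to concatenate all gold keyphrases into a single target string for a seq2seq model. A minimal sketch (the `;` separator is one common convention, not mandated by the dataset):

```python
from datasets import load_dataset

dataset = load_dataset("midas/kp20k", "generation")
sample = dataset["train"][0]

# detokenize the source document
source = " ".join(sample["document"])
# one target string containing both present and absent keyphrases
target = " ; ".join(sample["extractive_keyphrases"] + sample["abstractive_keyphrases"])
print(source[:80], "->", target)
```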

## Citation Information

Please cite the work below if you use this dataset in your work.

```
@InProceedings{meng-EtAl:2017:Long,
  author    = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
  title     = {Deep Keyphrase Generation},
  booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  month     = {July},
  year      = {2017},
  address   = {Vancouver, Canada},
  publisher = {Association for Computational Linguistics},
  pages     = {582--592},
  url       = {http://aclweb.org/anthology/P17-1054}
}
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset.