## Dataset Summary

A dataset for benchmarking keyphrase extraction and generation techniques on long-document English scientific papers. For more details about the dataset, please refer to the original paper: [https://www.semanticscholar.org/paper/Keyphrase-Extraction-from-Single-Documents-in-the-Schutz/08b75d31a90f206b36e806a7ec372f6f0d12457e](https://www.semanticscholar.org/paper/Keyphrase-Extraction-from-Single-Documents-in-the-Schutz/08b75d31a90f206b36e806a7ec372f6f0d12457e)

Original source of the data - []()

## Dataset Structure

### Data Fields

- **id**: Unique identifier of the document.
- **document**: Whitespace-separated list of the words in the document.
- **doc_bio_tags**: BIO tag for each word in the document. `B` marks the first word of a keyphrase, `I` marks a word inside a keyphrase, and `O` marks a word that is not part of any keyphrase; a sketch of decoding these tags back into keyphrases follows this list.
- **extractive_keyphrases**: List of all the keyphrases that are present in the document.
- **abstractive_keyphrases**: List of all the keyphrases that are absent from the document.
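
The BIO tags align one-to-one with the whitespace-tokenized `document` field, so the present keyphrases can be recovered by grouping each `B`-tagged word with the `I`-tagged words that follow it. A minimal sketch (the `decode_bio` helper is illustrative, not part of the dataset):

```python
def decode_bio(tokens, tags):
    """Group B/I-tagged tokens into keyphrase strings."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":  # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continue the open keyphrase
            current.append(token)
        else:  # "O" (or a stray "I") closes any open keyphrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:  # flush a keyphrase that ends the document
        phrases.append(" ".join(current))
    return phrases

# decode_bio(["keyphrase", "extraction", "is", "fun"], ["B", "I", "O", "O"])
# -> ["keyphrase extraction"]
```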

### Data Splits

| Split | #datapoints |
| ----- | ----------- |
| Test  | 1320        |
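
Since only a test split is provided, the split size can be verified after loading (a quick sanity check, using the `raw` config shown in the Usage section below):

```python
from datasets import load_dataset

# the dataset ships only a test split; confirm its size
dataset = load_dataset("midas/pubmed", "raw")
print(dataset["test"].num_rows)  # expected: 1320
```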

## Usage

### Full Dataset

```python
from datasets import load_dataset

# get the entire dataset
dataset = load_dataset("midas/pubmed", "raw")

# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

**Output**

```bash

```

### Keyphrase Extraction

```python
from datasets import load_dataset

# get the dataset configured for keyphrase extraction only
dataset = load_dataset("midas/pubmed", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```

### Keyphrase Generation

```python
from datasets import load_dataset

# get the dataset configured for keyphrase generation only
dataset = load_dataset("midas/pubmed", "generation")

print("Samples for Keyphrase Generation")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
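
The gold keyphrase lists make it straightforward to score a system's predictions for benchmarking. Below is a minimal sketch of exact-match F1 against the present keyphrases of the first test document; the `exact_match_f1` helper is illustrative, not part of the dataset, and published benchmarks often also stem phrases before matching:

```python
def exact_match_f1(predicted, gold):
    """Exact-match F1 between predicted and gold keyphrase lists."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)  # phrases in both lists
    if not predicted or not gold or true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

# score a toy prediction for the first test document
# (test_sample comes from the generation example above)
gold = test_sample["extractive_keyphrases"]
print(exact_match_f1(["hypothetical", "predicted phrases"], gold))
```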

## Citation Information

```
@inproceedings{Schutz2008KeyphraseEF,
  title={Keyphrase Extraction from Single Documents in the Open Domain Exploiting Linguistic and Statistical Methods},
  author={Alexander Schutz},
  year={2008}
}
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.