boudinfl committed
Commit 1cb6508
1 Parent(s): 231aba6

populates datacard

Files changed (2):
  1. .gitignore +2 -0
  2. README.md +85 -0
.gitignore ADDED
@@ -0,0 +1,2 @@
+
+ .idea/
README.md CHANGED
@@ -1,3 +1,88 @@
  ---
+ annotations_creators:
+ - unknown
+ language_creators:
+ - unknown
+ languages:
+ - en
  license: cc-by-4.0
+ multilinguality:
+ - monolingual
+ task_categories:
+ - text-mining
+ - text-generation
+ task_ids:
+ - keyphrase-generation
+ - keyphrase-extraction
+ size_categories:
+ - n<1K
+ pretty_name: Preprocessed SemEval-2010 Benchmark dataset
  ---
+
+ # Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation
+
+ ## About
+
+ SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models.
+ The dataset is composed of 244 full-text scientific papers (144 train, 100 test) collected from the [ACM Digital Library](https://dl.acm.org/).
+ Keyphrases were annotated by readers and combined with those provided by the authors.
+ Details about the SemEval-2010 dataset can be found in the original paper [(Kim et al., 2010)][kim-2010].
+
+ This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016] and provides four increasingly sophisticated levels of document preprocessing:
+
+ * `lvl-1`: default files provided by the SemEval-2010 organizers.
+
+ * `lvl-2`: for each file, we manually retrieved the original PDF file from
+   the ACM Digital Library. We then extracted the enriched textual content of
+   the PDF files using an Optical Character Recognition (OCR) system and
+   performed document logical structure detection using ParsCit v110505. We
+   used the detected logical structure to remove author-assigned keyphrases
+   and select only the relevant elements: title, headers, abstract,
+   introduction, related work, body text and conclusion. We finally applied
+   systematic dehyphenation at line breaks.
+
+ * `lvl-3`: we further abridged the input text of level-2 preprocessed
+   documents, keeping only the following elements: title, headers, abstract,
+   introduction, related work, background and conclusion.
+
+ * `lvl-4`: we abridged the input text of level-3 preprocessed documents
+   using an unsupervised summarization technique: we kept the title and
+   abstract and selected the most content-bearing sentences from the
+   remaining content, as sketched below.
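+
+ The lvl-4 selection step can be pictured with a simple frequency-based
+ scorer. The sketch below only illustrates the idea of "content-bearing"
+ sentence selection; it is not the actual summarization technique used to
+ build the dataset, and the function name and scoring are assumptions.
+
+ ```python
+ from collections import Counter
+
+ def select_sentences(sentences, keep=10):
+     # document-level word frequencies, a crude proxy for "content-bearing"
+     freq = Counter(w.lower() for s in sentences for w in s.split())
+
+     def score(sentence):
+         words = sentence.split()
+         return sum(freq[w.lower()] for w in words) / max(len(words), 1)
+
+     # keep the top-scoring sentences, preserving their document order
+     top = set(sorted(sentences, key=score, reverse=True)[:keep])
+     return [s for s in sentences if s in top]
+ ```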
+
+ Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
+ Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g., `graph-based` is kept as a single token).
+ Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
+ Details about the process can be found in `prmu.py`.
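+
+ As an illustration only, here is a minimal sketch of these two steps,
+ assuming `spacy` and `nltk` are installed; the helper names are
+ hypothetical, and the authoritative implementation is the one in `prmu.py`.
+
+ ```python
+ import spacy
+ from spacy.util import compile_infix_regex
+ from nltk.stem import PorterStemmer
+
+ # Keep hyphenated words together by dropping the default infix rule that
+ # splits tokens on hyphens (a common spaCy recipe, assumed here).
+ nlp = spacy.load("en_core_web_sm")
+ infixes = [p for p in nlp.Defaults.infixes if "-|–|—" not in p]
+ nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer
+
+ stemmer = PorterStemmer()
+
+ def stems(text):
+     # tokenize with the customized pipeline, then lowercase and stem
+     return [stemmer.stem(tok.text.lower()) for tok in nlp(text)]
+
+ def prmu_category(keyphrase, doc_stems):
+     # P: the stemmed keyphrase occurs contiguously and in order;
+     # R: all of its stems occur, but not as a contiguous span;
+     # M: only some of its stems occur; U: none occur.
+     kp = stems(keyphrase)
+     for i in range(len(doc_stems) - len(kp) + 1):
+         if doc_stems[i:i + len(kp)] == kp:
+             return "P"
+     seen = [s in doc_stems for s in kp]
+     return "R" if all(seen) else ("M" if any(seen) else "U")
+
+ # e.g. prmu_category("graph-based ranking", stems(title + " " + abstract))
+ ```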
+
+ ## Content and statistics
+
+ The dataset is divided into the following two splits:
+
+ | Split | # documents | # words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
+ | :---- | ----------: | ------: | -----------: | --------: | ----------: | ------: | -------: |
+ | Train |         144 |       - |            - |         - |           - |       - |        - |
+ | Test  |         100 |       - |            - |         - |           - |       - |        - |
+
+ The following data fields are available:
+
+ - **id**: unique identifier of the document.
+ - **title**: title of the document.
+ - **abstract**: abstract of the document.
+ - **keyphrases**: list of reference keyphrases.
+ - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for the reference keyphrases.
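+
+ For example, the splits and fields can be inspected with the `datasets`
+ library. The dataset path below is a placeholder for this repository's
+ identifier, and the single-letter PRMU labels are an assumption:
+
+ ```python
+ from datasets import load_dataset
+
+ # placeholder: substitute the actual <namespace>/<name> of this repository
+ dataset = load_dataset("taln-ls2n/semeval-2010-pre")
+
+ sample = dataset["test"][0]
+ print(sample["title"], sample["keyphrases"])
+
+ # share of reference keyphrases fully present in the source text,
+ # assuming prmu values are stored as "P", "R", "M" and "U"
+ total = sum(len(doc["prmu"]) for doc in dataset["test"])
+ present = sum(doc["prmu"].count("P") for doc in dataset["test"])
+ print(f"% Present (test): {100 * present / total:.1f}")
+ ```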
+
+ ## References
+
+ - (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010.
+   [SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010].
+   In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics.
+ - (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016.
+   [How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016].
+   In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee.
+ - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
+   [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
+   In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
+
+ [kim-2010]: https://aclanthology.org/S10-1004/
+ [boudin-2016]: https://aclanthology.org/W16-3917/
+ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/