Exr0n committed
Commit b41185e
1 parent: aed46a0

README.md not readme.md

Files changed (1): readme.md → README.md (renamed, +23 −1)
# Wiki Entity Similarity

Usage:

```py
from datasets import load_dataset

corpus = load_dataset('Exr0n/wiki-entity-similarity', '2018thresh20corpus', split='train')
assert corpus[0] == {'article': 'A1000 road', 'link_text': 'A1000', 'is_same': 1}

pairs = load_dataset('Exr0n/wiki-entity-similarity', '2018thresh20pairs', split='train')
assert pairs[0] == {'article': 'Rhinobatos', 'link_text': 'Ehinobatos beurleni', 'is_same': 1}
assert len(pairs) == 4_793_180
```

## Corpus (`name=*corpus`)

The corpora in this dataset are generated by aggregating the link text that refers to various articles in context. For instance, if wiki article A refers to article B as C, then C is added to the list of aliases for article B, and the pair (B, C) is included in the dataset.
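The aggregation step can be sketched as follows. This is a minimal illustration, not the repo's actual pipeline, and the sample `links` pairs are invented; it assumes (target article, link text) pairs have already been extracted from the wiki markup.

```python
from collections import defaultdict

# Invented (target article, link text) pairs standing in for extracted wiki links.
links = [
    ('A1000 road', 'A1000'),
    ('A1000 road', 'the A1000'),
    ('Rhinobatos', 'Rhinobatos'),
]

# Aggregate every link text that refers to an article into that article's alias set.
aliases = defaultdict(set)
for target_article, link_text in links:
    aliases[target_article].add(link_text)

# Each (article, alias) pair becomes one corpus row with is_same=1.
corpus = [
    {'article': art, 'link_text': text, 'is_same': 1}
    for art, texts in sorted(aliases.items())
    for text in sorted(texts)
]
```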
 

Following DPR (https://arxiv.org/pdf/2004.04906.pdf), we use the English Wikipedia dump from Dec. 20, 2018 as the source documents for link collection.
 
The dataset includes three quality levels, distinguished by the minimum number of links:

| 10 | 605,775 | 4,407,409 |
| 20 | 324,949 | 3,195,545 |
 

## Training Pairs (`name=*pairs`)

This dataset also includes training-pair datasets (with both positive and negative examples) intended for training classifiers. The train/dev/test split is 75/15/10% of each corpus.

### Training Data Generation

The training pairs in this dataset are generated by taking each example from the corpus as a positive example, and creating a new negative example by pairing the article title of the positive example with a random link text from a different article.
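The negative-sampling scheme described above can be sketched like this. It is a toy illustration with invented rows, not the repo's generation script; the real pipeline operates on the full corpus.

```python
import random

# Toy positive rows standing in for corpus examples.
positives = [
    {'article': 'A1000 road', 'link_text': 'A1000', 'is_same': 1},
    {'article': 'Rhinobatos', 'link_text': 'Rhinobatos', 'is_same': 1},
]

rng = random.Random(0)  # seeded for reproducibility

def make_negative(pos, pool, rng):
    # Pair the positive's article title with a link text drawn
    # from a row about a *different* article.
    other = rng.choice([p for p in pool if p['article'] != pos['article']])
    return {'article': pos['article'], 'link_text': other['link_text'], 'is_same': 0}

# Each positive example spawns exactly one negative example.
pairs = []
for pos in positives:
    pairs.append(pos)
    pairs.append(make_negative(pos, positives, rng))
```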

The articles featured in each split are disjoint from the other splits, and each split has the same number of positive (semantically the same) and negative (semantically different) examples.
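An article-disjoint 75/15/10 split can be sketched as follows. This is a toy illustration with invented article names, not the repo's actual script: splitting over articles (rather than rows) is what keeps the splits disjoint.

```python
import random

# Invented article pool; the real splits are taken over the corpus articles.
articles = [f'Article {i}' for i in range(100)]

rng = random.Random(0)  # seeded for reproducibility
rng.shuffle(articles)

# Cut at 75% and 90% so train/dev/test receive 75/15/10 of the articles.
n = len(articles)
train = set(articles[: int(n * 0.75)])
dev = set(articles[int(n * 0.75): int(n * 0.90)])
test = set(articles[int(n * 0.90):])
```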

Generation scripts can be found [in the GitHub repo](https://github.com/Exr0nProjects/wiki-entity-similarity).