asahi417 committed on
Commit
afcb4ea
1 Parent(s): 6acc499
.gitignore ADDED
@@ -0,0 +1 @@
+ cache
README.md ADDED
@@ -0,0 +1,167 @@
+ ---
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ pretty_name: SemEval2012 task 2 Relational Similarity
+ ---
+ # Dataset Card for "relbert/semeval2012_relational_similarity"
+ ## Dataset Description
+ - **Repository:** [RelBERT](https://github.com/asahi417/relbert)
+ - **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
+ - **Dataset:** SemEval2012: Relational Similarity
+
+ ### Dataset Summary
+ A relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune the [RelBERT](https://github.com/asahi417/relbert) model.
+ The dataset contains lists of positive and negative word pairs for 89 pre-defined relation types.
+ The relation types are constructed on top of the following 10 parent relation types.
+ ```python
+ {
+     1: "Class Inclusion",   # Hypernym
+     2: "Part-Whole",        # Meronym, Substance Meronym
+     3: "Similar",           # Synonym, Co-hyponym
+     4: "Contrast",          # Antonym
+     5: "Attribute",         # Attribute, Event
+     6: "Non Attribute",
+     7: "Case Relation",
+     8: "Cause-Purpose",
+     9: "Space-Time",
+     10: "Representation"
+ }
+ ```
+ Each parent relation is further divided into child relation types; their definitions can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
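+
+ For instance, the parent of a child relation ID is simply its numeric prefix; a minimal sketch (`parent_of` is an illustrative helper, not part of the dataset tooling):
+ ```python
+ # Parent relation names, keyed by the numeric prefix of a child relation ID.
+ PARENT = {"1": "Class Inclusion", "2": "Part-Whole", "3": "Similar",
+           "4": "Contrast", "5": "Attribute", "6": "Non Attribute",
+           "7": "Case Relation", "8": "Cause-Purpose", "9": "Space-Time",
+           "10": "Representation"}
+
+ def parent_of(child_id: str) -> str:
+     """Return the parent relation name for a child ID such as '8d' or '10a'."""
+     return PARENT[child_id[:-1]]  # drop the trailing letter, as process.py does
+
+ assert parent_of("8d") == "Cause-Purpose"
+ ```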
+
+
+ ## Dataset Structure
+ ### Data Instances
+ An example from the `train` split looks as follows.
+ ```
+ {
+     'relation_type': '8d',
+     'positives': [["breathe", "live"], ["study", "learn"], ["speak", "communicate"], ...],
+     'negatives': [["starving", "hungry"], ["clean", "bathe"], ["hungry", "starving"], ...]
+ }
+ ```
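+
+ The dataset can be loaded with the `datasets` library, as in `get_stats.py` below. A minimal sketch (field names as in the example above):
+ ```python
+ from datasets import load_dataset
+
+ data = load_dataset("relbert/semeval2012_relational_similarity")
+ example = data["train"][0]
+ print(example["relation_type"])   # a relation ID such as '8d'
+ print(example["positives"][:2])   # first two positive word pairs
+ print(example["negatives"][:2])   # first two negative word pairs
+ ```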
+
+ ### Data Splits
+ | name                              | train | validation |
+ |:----------------------------------|------:|-----------:|
+ | semeval2012_relational_similarity |    89 |         89 |
+
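+ Each record of a split corresponds to one relation type, so both splits hold 89 records; a quick check (reusing `data` from the snippet above):
+ ```python
+ assert len(data["train"]) == 89
+ assert len(data["validation"]) == 89
+ ```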
+
+ ### Number of Positive/Negative Word-pairs in each Split
+
+ | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
+ |:--------------|-----------------:|-----------------:|-----------------------:|-----------------------:|
+ | 1 | 50 | 740 | 63 | 826 |
+ | 10 | 60 | 730 | 66 | 823 |
+ | 10a | 10 | 799 | 14 | 894 |
+ | 10b | 10 | 797 | 13 | 893 |
+ | 10c | 10 | 800 | 11 | 898 |
+ | 10d | 10 | 799 | 10 | 898 |
+ | 10e | 10 | 795 | 8 | 896 |
+ | 10f | 10 | 799 | 10 | 898 |
+ | 1a | 10 | 797 | 14 | 892 |
+ | 1b | 10 | 797 | 14 | 892 |
+ | 1c | 10 | 800 | 11 | 898 |
+ | 1d | 10 | 797 | 16 | 890 |
+ | 1e | 10 | 794 | 8 | 895 |
+ | 2 | 100 | 690 | 117 | 772 |
+ | 2a | 10 | 799 | 15 | 893 |
+ | 2b | 10 | 796 | 11 | 894 |
+ | 2c | 10 | 798 | 13 | 894 |
+ | 2d | 10 | 798 | 10 | 897 |
+ | 2e | 10 | 799 | 11 | 897 |
+ | 2f | 10 | 802 | 11 | 900 |
+ | 2g | 10 | 796 | 16 | 889 |
+ | 2h | 10 | 799 | 11 | 897 |
+ | 2i | 10 | 800 | 9 | 900 |
+ | 2j | 10 | 801 | 10 | 900 |
+ | 3 | 80 | 710 | 80 | 809 |
+ | 3a | 10 | 799 | 11 | 897 |
+ | 3b | 10 | 802 | 11 | 900 |
+ | 3c | 10 | 798 | 12 | 895 |
+ | 3d | 10 | 798 | 14 | 893 |
+ | 3e | 10 | 802 | 5 | 906 |
+ | 3f | 10 | 803 | 11 | 901 |
+ | 3g | 10 | 801 | 6 | 904 |
+ | 3h | 10 | 801 | 10 | 900 |
+ | 4 | 80 | 710 | 82 | 807 |
+ | 4a | 10 | 802 | 11 | 900 |
+ | 4b | 10 | 797 | 7 | 899 |
+ | 4c | 10 | 800 | 12 | 897 |
+ | 4d | 10 | 796 | 4 | 901 |
+ | 4e | 10 | 802 | 12 | 899 |
+ | 4f | 10 | 802 | 9 | 902 |
+ | 4g | 10 | 798 | 15 | 892 |
+ | 4h | 10 | 801 | 12 | 898 |
+ | 5 | 90 | 700 | 105 | 784 |
+ | 5a | 10 | 798 | 14 | 893 |
+ | 5b | 10 | 801 | 8 | 902 |
+ | 5c | 10 | 799 | 11 | 897 |
+ | 5d | 10 | 797 | 15 | 891 |
+ | 5e | 10 | 801 | 8 | 902 |
+ | 5f | 10 | 801 | 11 | 899 |
+ | 5g | 10 | 802 | 9 | 902 |
+ | 5h | 10 | 800 | 15 | 894 |
+ | 5i | 10 | 800 | 14 | 895 |
+ | 6 | 80 | 710 | 99 | 790 |
+ | 6a | 10 | 798 | 15 | 892 |
+ | 6b | 10 | 801 | 11 | 899 |
+ | 6c | 10 | 801 | 13 | 897 |
+ | 6d | 10 | 804 | 10 | 903 |
+ | 6e | 10 | 801 | 11 | 899 |
+ | 6f | 10 | 799 | 12 | 896 |
+ | 6g | 10 | 798 | 12 | 895 |
+ | 6h | 10 | 799 | 15 | 893 |
+ | 7 | 80 | 710 | 91 | 798 |
+ | 7a | 10 | 800 | 14 | 895 |
+ | 7b | 10 | 796 | 7 | 898 |
+ | 7c | 10 | 797 | 11 | 895 |
+ | 7d | 10 | 800 | 14 | 895 |
+ | 7e | 10 | 797 | 10 | 896 |
+ | 7f | 10 | 796 | 12 | 893 |
+ | 7g | 10 | 794 | 9 | 894 |
+ | 7h | 10 | 795 | 14 | 890 |
+ | 8 | 80 | 710 | 90 | 799 |
+ | 8a | 10 | 797 | 14 | 892 |
+ | 8b | 10 | 801 | 7 | 903 |
+ | 8c | 10 | 796 | 12 | 893 |
+ | 8d | 10 | 796 | 13 | 892 |
+ | 8e | 10 | 796 | 11 | 894 |
+ | 8f | 10 | 797 | 12 | 894 |
+ | 8g | 10 | 793 | 7 | 895 |
+ | 8h | 10 | 798 | 14 | 893 |
+ | 9 | 90 | 700 | 96 | 793 |
+ | 9a | 10 | 795 | 14 | 890 |
+ | 9b | 10 | 799 | 12 | 896 |
+ | 9c | 10 | 790 | 7 | 892 |
+ | 9d | 10 | 803 | 9 | 903 |
+ | 9e | 10 | 804 | 8 | 905 |
+ | 9f | 10 | 799 | 10 | 898 |
+ | 9g | 10 | 796 | 14 | 891 |
+ | 9h | 10 | 799 | 13 | 895 |
+ | 9i | 10 | 799 | 9 | 899 |
+ | SUM | 1580 | 70207 | 1778 | 78820 |
+
151
+ ### Citation Information
152
+ ```
153
+ @inproceedings{jurgens-etal-2012-semeval,
154
+ title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
155
+ author = "Jurgens, David and
156
+ Mohammad, Saif and
157
+ Turney, Peter and
158
+ Holyoak, Keith",
159
+ booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
160
+ month = "7-8 " # jun,
161
+ year = "2012",
162
+ address = "Montr{\'e}al, Canada",
163
+ publisher = "Association for Computational Linguistics",
164
+ url = "https://aclanthology.org/S12-1047",
165
+ pages = "356--364",
166
+ }
167
+ ```
dataset/train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
dataset/valid.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
get_stats.py ADDED
@@ -0,0 +1,36 @@
+ import pandas as pd
+ from datasets import load_dataset
+
+ # collect per-relation pair counts for each split
+ data = load_dataset('relbert/semeval2012_relational_similarity')
+ stats = []
+ for k in data.keys():
+     for i in data[k]:
+         stats.append({'relation_type': i['relation_type'], 'split': k,
+                       'positives': len(i['positives']), 'negatives': len(i['negatives'])})
+ df = pd.DataFrame(stats)
+ df_train = df[df['split'] == 'train']
+ df_valid = df[df['split'] == 'validation']
+ stats = []
+ for r in df['relation_type'].unique():
+     _df_t = df_train[df_train['relation_type'] == r]
+     _df_v = df_valid[df_valid['relation_type'] == r]
+     stats.append({
+         'relation_type': r,
+         'positive (train)': 0 if len(_df_t) == 0 else _df_t['positives'].values[0],
+         'negative (train)': 0 if len(_df_t) == 0 else _df_t['negatives'].values[0],
+         'positive (validation)': 0 if len(_df_v) == 0 else _df_v['positives'].values[0],
+         'negative (validation)': 0 if len(_df_v) == 0 else _df_v['negatives'].values[0],
+     })
+
+ df = pd.DataFrame(stats).sort_values(by=['relation_type'])
+ df.index = df.pop('relation_type')
+ # append a SUM row holding the column totals
+ sum_pairs = df.sum(0)
+ df = df.T
+ df['SUM'] = sum_pairs
+ df = df.T
+
+ df.to_csv('stats.csv')
+ with open('stats.md', 'w') as f:
+     f.write(df.to_markdown())
process.py ADDED
@@ -0,0 +1,146 @@
+ import json
+ import os
+ import tarfile
+ import zipfile
+ import gzip
+ import requests
+ from random import shuffle, seed
+
+ from glob import glob
+ from itertools import chain
+ import gdown
+
+ validation_ratio = 0.2
+ top_n = 10
+
+
+ def wget(url, cache_dir: str = './cache', gdrive_filename: str = None):
+     """ Download a file and uncompress it. """
+     os.makedirs(cache_dir, exist_ok=True)
+     if url.startswith('https://drive.google.com'):
+         assert gdrive_filename is not None, 'please provide filename for gdrive download'
+         gdown.download(url, f'{cache_dir}/{gdrive_filename}', quiet=False)
+         filename = gdrive_filename
+     else:
+         filename = os.path.basename(url)
+         with open(f'{cache_dir}/{filename}', "wb") as f:
+             r = requests.get(url)
+             f.write(r.content)
+     path = f'{cache_dir}/{filename}'
+
+     if path.endswith('.tar.gz') or path.endswith('.tgz') or path.endswith('.tar'):
+         if path.endswith('.tar'):
+             tar = tarfile.open(path)
+         else:
+             tar = tarfile.open(path, "r:gz")
+         tar.extractall(cache_dir)
+         tar.close()
+         os.remove(path)
+     elif path.endswith('.zip'):
+         with zipfile.ZipFile(path, 'r') as zip_ref:
+             zip_ref.extractall(cache_dir)
+         os.remove(path)
+     elif path.endswith('.gz'):
+         with gzip.open(path, 'rb') as f:
+             with open(path.replace('.gz', ''), 'wb') as f_write:
+                 f_write.write(f.read())
+         os.remove(path)
+
+
+ def get_training_data():
+     """ Get RelBERT training data
+
+     Returns
+     -------
+     pairs: dictionary of list (positive pairs, negative pairs)
+         {'1b': [[0.6, ('office', 'desk'), ...], [[-0.1, ('aaa', 'bbb'), ...]]}
+     """
+     cache_dir = 'cache'
+     os.makedirs(cache_dir, exist_ok=True)
+     remove_relation = None
+     path_answer = f'{cache_dir}/Phase2Answers'
+     path_scale = f'{cache_dir}/Phase2AnswersScaled'
+     url = 'https://drive.google.com/u/0/uc?id=0BzcZKTSeYL8VYWtHVmxUR3FyUmc&export=download'
+     filename = 'SemEval-2012-Platinum-Ratings.tar.gz'
+     if not (os.path.exists(path_scale) and os.path.exists(path_answer)):
+         wget(url, gdrive_filename=filename, cache_dir=cache_dir)
+     files_answer = [os.path.basename(i) for i in glob(f'{path_answer}/*.txt')]
+     files_scale = [os.path.basename(i) for i in glob(f'{path_scale}/*.txt')]
+     assert files_answer == files_scale, f'files do not match: {files_scale} vs {files_answer}'
+     positives = {}
+     negatives = {}
+     all_relation_type = {}
+     positives_score = {}
+     seed(42)
+     # score_range = [90.0, 88.7]  # the absolute value of max/min prototypicality rating
+     for i in files_scale:
+         relation_id = i.split('-')[-1].replace('.txt', '')
+         if remove_relation and int(relation_id[:-1]) in remove_relation:
+             continue
+         with open(f'{path_answer}/{i}', 'r') as f:
+             lines_answer = [_l.replace('"', '').split('\t') for _l in f.read().split('\n')
+                             if not _l.startswith('#') and len(_l)]
+         relation_type = list(set(list(zip(*lines_answer))[-1]))
+         assert len(relation_type) == 1, relation_type
+         relation_type = relation_type[0]
+         with open(f'{path_scale}/{i}', 'r') as f:
+             # list of tuple [score, ("a", "b")]
+             scales = [[float(_l[:5]), _l[6:].replace('"', '')] for _l in f.read().split('\n')
+                       if not _l.startswith('#') and len(_l)]
+         scales = sorted(scales, key=lambda _x: _x[0])
+         # positive pairs are sorted in descending order of prototypicality score
+         positive_pairs = [[s, tuple(p.split(':'))] for s, p in filter(lambda _x: _x[0] > 0, scales)]
+         positive_pairs = sorted(positive_pairs, key=lambda x: x[0], reverse=True)
+
+         positive_pairs = positive_pairs[:min(top_n, len(positive_pairs))]
+         shuffle(positive_pairs)
+         positives_score[relation_id] = positive_pairs
+         positives[relation_id] = list(list(zip(*positive_pairs))[1])
+
+         negative_pairs = [tuple(p.split(':')) for s, p in filter(lambda _x: _x[0] < 0, scales)]
+         shuffle(negative_pairs)
+         negatives[relation_id] = negative_pairs
+
+         all_relation_type[relation_id] = relation_type
+
+     # consider positives from other relations as negatives
+     for k in positives.keys():
+         negatives[k] += list(chain(*[_v for _k, _v in positives.items() if _k != k]))
+
+     # split train & validation
+     positives_valid = {k: v[:int(len(v) * validation_ratio)] for k, v in positives.items()}
+     positives_train = {k: v[int(len(v) * validation_ratio):] for k, v in positives.items()}
+
+     negatives_valid = {k: v[:int(len(v) * validation_ratio)] for k, v in negatives.items()}
+     negatives_train = {k: v[int(len(v) * validation_ratio):] for k, v in negatives.items()}
+
+     positives_score_valid = {k: v[:int(len(v) * validation_ratio)] for k, v in positives_score.items()}
+     positives_score_train = {k: v[int(len(v) * validation_ratio):] for k, v in positives_score.items()}
+
+     outputs = []
+     for positives, negatives, positives_score in zip(
+             [positives_train, positives_valid],
+             [negatives_train, negatives_valid],
+             [positives_score_train, positives_score_valid]):
+         pairs = {k: [positives[k], negatives[k]] for k in positives.keys()}
+         # build the parent relations ('1'..'10') on top of the child relation IDs ('1a', '1b', ...)
+         parent = list(set([i[:-1] for i in all_relation_type.keys()]))
+         relation_structure = {p: [i for i in all_relation_type.keys() if p == i[:-1]] for p in parent}
+         for k, v in relation_structure.items():
+             positive = list(chain(*[positives_score[_v] for _v in v]))
+             positive = list(list(zip(*sorted(positive, key=lambda x: x[0], reverse=True)))[1])
+             negative = []
+             for _k, _v in relation_structure.items():
+                 if _k != k:
+                     negative += list(chain(*[positives[__v] for __v in _v]))
+             pairs[k] = [positive, negative]
+         outputs.append([{'relation_type': k, 'positives': pos, 'negatives': neg} for k, (pos, neg) in pairs.items()])
+     return outputs
+
+
+ if __name__ == '__main__':
+     data_train, data_valid = get_training_data()
+     with open('dataset/train.jsonl', 'w') as f_writer:
+         f_writer.write('\n'.join([json.dumps(i) for i in data_train]))
+     with open('dataset/valid.jsonl', 'w') as f_writer:
+         f_writer.write('\n'.join([json.dumps(i) for i in data_valid]))
semeval2012_relational_similarity_v2.py ADDED
@@ -0,0 +1,80 @@
+ import json
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+ _DESCRIPTION = """[SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/)"""
+ _NAME = "semeval2012_relational_similarity"
+ _VERSION = "1.0.0"
+ _CITATION = """
+ @inproceedings{jurgens-etal-2012-semeval,
+     title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
+     author = "Jurgens, David  and
+       Mohammad, Saif  and
+       Turney, Peter  and
+       Holyoak, Keith",
+     booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
+     month = "7-8 " # jun,
+     year = "2012",
+     address = "Montr{\'e}al, Canada",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/S12-1047",
+     pages = "356--364",
+ }
+ """
+
+ _HOME_PAGE = "https://github.com/asahi417/relbert"
+ _URL = f'https://huggingface.co/datasets/relbert/{_NAME}/raw/main/dataset'
+ _URLS = {
+     str(datasets.Split.TRAIN): [f'{_URL}/train.jsonl'],
+     str(datasets.Split.VALIDATION): [f'{_URL}/valid.jsonl'],
+ }
+
+
+ class SemEVAL2012RelationalSimilarityV2Config(datasets.BuilderConfig):
+     """BuilderConfig"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(SemEVAL2012RelationalSimilarityV2Config, self).__init__(**kwargs)
+
+
+ class SemEVAL2012RelationalSimilarityV2(datasets.GeneratorBasedBuilder):
+     """Dataset."""
+
+     BUILDER_CONFIGS = [
+         SemEVAL2012RelationalSimilarityV2Config(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
+     ]
+
+     def _split_generators(self, dl_manager):
+         downloaded_file = dl_manager.download_and_extract(_URLS)
+         return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
+                 for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION]]
+
+     def _generate_examples(self, filepaths):
+         _key = 0
+         for filepath in filepaths:
+             logger.info(f"generating examples from = {filepath}")
+             with open(filepath, encoding="utf-8") as f:
+                 _list = [i for i in f.read().split('\n') if len(i) > 0]
+                 for i in _list:
+                     data = json.loads(i)
+                     yield _key, data
+                     _key += 1
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "relation_type": datasets.Value("string"),
+                     "positives": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
+                     "negatives": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOME_PAGE,
+             citation=_CITATION,
+         )
stats.csv ADDED
@@ -0,0 +1,91 @@
+ relation_type,positive (train),negative (train),positive (validation),negative (validation)
+ 1,50,740,63,826
+ 10,60,730,66,823
+ 10a,10,799,14,894
+ 10b,10,797,13,893
+ 10c,10,800,11,898
+ 10d,10,799,10,898
+ 10e,10,795,8,896
+ 10f,10,799,10,898
+ 1a,10,797,14,892
+ 1b,10,797,14,892
+ 1c,10,800,11,898
+ 1d,10,797,16,890
+ 1e,10,794,8,895
+ 2,100,690,117,772
+ 2a,10,799,15,893
+ 2b,10,796,11,894
+ 2c,10,798,13,894
+ 2d,10,798,10,897
+ 2e,10,799,11,897
+ 2f,10,802,11,900
+ 2g,10,796,16,889
+ 2h,10,799,11,897
+ 2i,10,800,9,900
+ 2j,10,801,10,900
+ 3,80,710,80,809
+ 3a,10,799,11,897
+ 3b,10,802,11,900
+ 3c,10,798,12,895
+ 3d,10,798,14,893
+ 3e,10,802,5,906
+ 3f,10,803,11,901
+ 3g,10,801,6,904
+ 3h,10,801,10,900
+ 4,80,710,82,807
+ 4a,10,802,11,900
+ 4b,10,797,7,899
+ 4c,10,800,12,897
+ 4d,10,796,4,901
+ 4e,10,802,12,899
+ 4f,10,802,9,902
+ 4g,10,798,15,892
+ 4h,10,801,12,898
+ 5,90,700,105,784
+ 5a,10,798,14,893
+ 5b,10,801,8,902
+ 5c,10,799,11,897
+ 5d,10,797,15,891
+ 5e,10,801,8,902
+ 5f,10,801,11,899
+ 5g,10,802,9,902
+ 5h,10,800,15,894
+ 5i,10,800,14,895
+ 6,80,710,99,790
+ 6a,10,798,15,892
+ 6b,10,801,11,899
+ 6c,10,801,13,897
+ 6d,10,804,10,903
+ 6e,10,801,11,899
+ 6f,10,799,12,896
+ 6g,10,798,12,895
+ 6h,10,799,15,893
+ 7,80,710,91,798
+ 7a,10,800,14,895
+ 7b,10,796,7,898
+ 7c,10,797,11,895
+ 7d,10,800,14,895
+ 7e,10,797,10,896
+ 7f,10,796,12,893
+ 7g,10,794,9,894
+ 7h,10,795,14,890
+ 8,80,710,90,799
+ 8a,10,797,14,892
+ 8b,10,801,7,903
+ 8c,10,796,12,893
+ 8d,10,796,13,892
+ 8e,10,796,11,894
+ 8f,10,797,12,894
+ 8g,10,793,7,895
+ 8h,10,798,14,893
+ 9,90,700,96,793
+ 9a,10,795,14,890
+ 9b,10,799,12,896
+ 9c,10,790,7,892
+ 9d,10,803,9,903
+ 9e,10,804,8,905
+ 9f,10,799,10,898
+ 9g,10,796,14,891
+ 9h,10,799,13,895
+ 9i,10,799,9,899
+ SUM,1580,70207,1778,78820
stats.md ADDED
@@ -0,0 +1,93 @@
+ | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
+ |:--------------|-----------------:|-----------------:|-----------------------:|-----------------------:|
+ | 1 | 50 | 740 | 63 | 826 |
+ | 10 | 60 | 730 | 66 | 823 |
+ | 10a | 10 | 799 | 14 | 894 |
+ | 10b | 10 | 797 | 13 | 893 |
+ | 10c | 10 | 800 | 11 | 898 |
+ | 10d | 10 | 799 | 10 | 898 |
+ | 10e | 10 | 795 | 8 | 896 |
+ | 10f | 10 | 799 | 10 | 898 |
+ | 1a | 10 | 797 | 14 | 892 |
+ | 1b | 10 | 797 | 14 | 892 |
+ | 1c | 10 | 800 | 11 | 898 |
+ | 1d | 10 | 797 | 16 | 890 |
+ | 1e | 10 | 794 | 8 | 895 |
+ | 2 | 100 | 690 | 117 | 772 |
+ | 2a | 10 | 799 | 15 | 893 |
+ | 2b | 10 | 796 | 11 | 894 |
+ | 2c | 10 | 798 | 13 | 894 |
+ | 2d | 10 | 798 | 10 | 897 |
+ | 2e | 10 | 799 | 11 | 897 |
+ | 2f | 10 | 802 | 11 | 900 |
+ | 2g | 10 | 796 | 16 | 889 |
+ | 2h | 10 | 799 | 11 | 897 |
+ | 2i | 10 | 800 | 9 | 900 |
+ | 2j | 10 | 801 | 10 | 900 |
+ | 3 | 80 | 710 | 80 | 809 |
+ | 3a | 10 | 799 | 11 | 897 |
+ | 3b | 10 | 802 | 11 | 900 |
+ | 3c | 10 | 798 | 12 | 895 |
+ | 3d | 10 | 798 | 14 | 893 |
+ | 3e | 10 | 802 | 5 | 906 |
+ | 3f | 10 | 803 | 11 | 901 |
+ | 3g | 10 | 801 | 6 | 904 |
+ | 3h | 10 | 801 | 10 | 900 |
+ | 4 | 80 | 710 | 82 | 807 |
+ | 4a | 10 | 802 | 11 | 900 |
+ | 4b | 10 | 797 | 7 | 899 |
+ | 4c | 10 | 800 | 12 | 897 |
+ | 4d | 10 | 796 | 4 | 901 |
+ | 4e | 10 | 802 | 12 | 899 |
+ | 4f | 10 | 802 | 9 | 902 |
+ | 4g | 10 | 798 | 15 | 892 |
+ | 4h | 10 | 801 | 12 | 898 |
+ | 5 | 90 | 700 | 105 | 784 |
+ | 5a | 10 | 798 | 14 | 893 |
+ | 5b | 10 | 801 | 8 | 902 |
+ | 5c | 10 | 799 | 11 | 897 |
+ | 5d | 10 | 797 | 15 | 891 |
+ | 5e | 10 | 801 | 8 | 902 |
+ | 5f | 10 | 801 | 11 | 899 |
+ | 5g | 10 | 802 | 9 | 902 |
+ | 5h | 10 | 800 | 15 | 894 |
+ | 5i | 10 | 800 | 14 | 895 |
+ | 6 | 80 | 710 | 99 | 790 |
+ | 6a | 10 | 798 | 15 | 892 |
+ | 6b | 10 | 801 | 11 | 899 |
+ | 6c | 10 | 801 | 13 | 897 |
+ | 6d | 10 | 804 | 10 | 903 |
+ | 6e | 10 | 801 | 11 | 899 |
+ | 6f | 10 | 799 | 12 | 896 |
+ | 6g | 10 | 798 | 12 | 895 |
+ | 6h | 10 | 799 | 15 | 893 |
+ | 7 | 80 | 710 | 91 | 798 |
+ | 7a | 10 | 800 | 14 | 895 |
+ | 7b | 10 | 796 | 7 | 898 |
+ | 7c | 10 | 797 | 11 | 895 |
+ | 7d | 10 | 800 | 14 | 895 |
+ | 7e | 10 | 797 | 10 | 896 |
+ | 7f | 10 | 796 | 12 | 893 |
+ | 7g | 10 | 794 | 9 | 894 |
+ | 7h | 10 | 795 | 14 | 890 |
+ | 8 | 80 | 710 | 90 | 799 |
+ | 8a | 10 | 797 | 14 | 892 |
+ | 8b | 10 | 801 | 7 | 903 |
+ | 8c | 10 | 796 | 12 | 893 |
+ | 8d | 10 | 796 | 13 | 892 |
+ | 8e | 10 | 796 | 11 | 894 |
+ | 8f | 10 | 797 | 12 | 894 |
+ | 8g | 10 | 793 | 7 | 895 |
+ | 8h | 10 | 798 | 14 | 893 |
+ | 9 | 90 | 700 | 96 | 793 |
+ | 9a | 10 | 795 | 14 | 890 |
+ | 9b | 10 | 799 | 12 | 896 |
+ | 9c | 10 | 790 | 7 | 892 |
+ | 9d | 10 | 803 | 9 | 903 |
+ | 9e | 10 | 804 | 8 | 905 |
+ | 9f | 10 | 799 | 10 | 898 |
+ | 9g | 10 | 796 | 14 | 891 |
+ | 9h | 10 | 799 | 13 | 895 |
+ | 9i | 10 | 799 | 9 | 899 |
+ | SUM | 1580 | 70207 | 1778 | 78820 |