---
license: mit
language:
- en
paperswithcode_id: embedding-data/altlex
pretty_name: altlex
---

# Dataset Card for "altlex"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
  
## Dataset Description

**Homepage:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex)

**Repository:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex)

**Paper:** [https://aclanthology.org/P16-1135.pdf](https://aclanthology.org/P16-1135.pdf)

**Point of Contact:** [Christopher Hidey](mailto:ch3085@columbia.edu)

### Dataset Summary

The linked Git repository contains the software and data associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."

Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/chridey/altlex)

### Languages

The dataset is in English (`en`), drawn from English and Simple English Wikipedia.

## Dataset Structure

Parallel Wikipedia Format

This is a gzipped, JSON-formatted file. The "titles" array holds the shared title of each aligned pair of English and Simple English Wikipedia articles.
The "articles" array consists of two arrays, one per Wikipedia version; each must be the same length as the "titles" array,
and the same index into all three arrays points to an aligned article pair and its title.
Each article within the "articles" array is an array of sentence strings (sentence-tokenized, but not word-tokenized).

The format of the dictionary is as follows:

```
{"files": [english_name, simple_name],
 "articles": [
              [[article_1_sentence_1_string, article_1_sentence_2_string, ...],
               [article_2_sentence_1_string, article_2_sentence_2_string, ...],
               ...
              ],
              [[article_1_sentence_1_string, article_1_sentence_2_string, ...],
               [article_2_sentence_1_string, article_2_sentence_2_string, ...],
               ...
              ]
             ],
  "titles": [title_1_string, title_2_string, ...]
}

```
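This layout can be read with nothing beyond the standard library. The sketch below first writes a tiny synthetic file, so the filename and contents are illustrative rather than part of the release:

```python
import gzip
import json

# Illustrative filename; the actual archive name depends on your download.
path = "parallel_wikipedia.json.gz"

# A tiny synthetic file in the format described above.
sample = {
    "files": ["english", "simple"],
    "articles": [
        [["The cat sat.", "It purred."]],  # English article versions
        [["The cat sat down."]],           # Simple English article versions
    ],
    "titles": ["Cat"],
}
with gzip.open(path, "wt", encoding="utf-8") as f:
    json.dump(sample, f)

# Read it back and walk the aligned articles title by title.
with gzip.open(path, "rt", encoding="utf-8") as f:
    data = json.load(f)

english, simple = data["articles"]
for title, en_article, simple_article in zip(data["titles"], english, simple):
    print(title, len(en_article), "vs", len(simple_article), "sentences")
```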

Parsed Wikipedia Format

This is a gzipped, JSON-formatted list of parsed Wikipedia article pairs. 
The list stored at "sentences" has length 2 and holds the English and
Simple English versions of the article with the same title.

The data is formatted as follows:

```
[
 {
  "index": article_index,
  "title": article_title_string,
  "sentences": [[parsed_sentence_1, parsed_sentence_2, ...],
                [parsed_sentence_1, parsed_sentence_2, ...]
               ]
 },
 ...
]

```
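A short sketch of reading this format; the filename and parse strings here are placeholders, not actual corpus content:

```python
import gzip
import json

# Illustrative filename and placeholder parse strings.
path = "parsed_wikipedia.json.gz"
sample = [
    {
        "index": 0,
        "title": "Cat",
        "sentences": [["(ROOT (S ...))", "(ROOT (S ...))"],  # English version
                      ["(ROOT (S ...))"]],                   # Simple English version
    }
]
with gzip.open(path, "wt", encoding="utf-8") as f:
    json.dump(sample, f)

with gzip.open(path, "rt", encoding="utf-8") as f:
    articles = json.load(f)

# "sentences" always has length 2: one sentence list per article version.
for article in articles:
    english_version, simple_version = article["sentences"]
    print(article["title"], len(english_version), len(simple_version))
```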

Parsed Pairs Format

This is a gzipped, JSON-formatted list of parsed sentences. Paraphrase pairs occupy consecutive 
even and odd indices, so entries 2i and 2i + 1 form the i-th pair. For the format of each parsed sentence, see "Parsed Sentence Format."

The data is formatted as follows:

```
[
  ...,
  parsed_sentence_2,
  parsed_sentence_3,
  ...
]

```
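The even/odd pairing can be recovered with a single comprehension; the entries below are stand-in strings rather than real parsed sentences:

```python
# Stand-in strings for parsed sentences; entries 2*i and 2*i + 1 form pair i.
parsed = ["sent_0", "sent_1", "sent_2", "sent_3"]
pairs = [(parsed[i], parsed[i + 1]) for i in range(0, len(parsed), 2)]
print(pairs)
```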

Parsed Sentence Format

Each parsed sentence is of the following format:

```
{
   "dep": [[[governor_index, dependent_index, relation_string], ...], ...], 
   "lemmas": [[lemma_1_string, lemma_2_string, ...], ...],
   "pos": [[pos_1_string, pos_2_string, ...], ...],
   "parse": [parenthesized_parse_1_string, ...], 
   "words": [[word_1_string, word_2_string, ...], ...] , 
   "ner": [[ner_1_string, ner_2_string, ...], ...]
}

```
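For illustration, a minimal invented instance of this format, showing how the "dep" triples index into the "words" list:

```python
# Invented values in the parsed-sentence format above; each field is a list
# over sentence spans, so index 0 selects the first (and here only) span.
parsed_sentence = {
    "dep": [[[1, 0, "det"], [-1, 1, "root"]]],
    "lemmas": [["the", "cat"]],
    "pos": [["DT", "NN"]],
    "parse": ["(NP (DT The) (NN cat))"],
    "words": [["The", "cat"]],
    "ner": [["O", "O"]],
}

# Each dep entry is [governor_index, dependent_index, relation_string];
# the dependent index points into the corresponding words list.
for governor, dependent, relation in parsed_sentence["dep"][0]:
    print(parsed_sentence["words"][0][dependent], relation)
```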

Feature Extractor Config Format

```
{"framenetSettings": 
   {"binary": true/false}, 
 "featureSettings": 
   {
   "arguments_cat_curr": true/false, 
   "arguments_verbnet_prev": true/false, 
   "head_word_cat_curr": true/false, 
   "head_word_verbnet_prev": true/false, 
   "head_word_verbnet_altlex": true/false, 
   "head_word_cat_prev": true/false, 
   "head_word_cat_altlex": true/false, 
   "kld_score": true/false, 
   "head_word_verbnet_curr": true/false, 
   "arguments_verbnet_curr": true/false, 
   "framenet": true/false, 
   "arguments_cat_prev": true/false, 
   "connective": true/false
   }, 
 "kldSettings": 
   {"kldDir": $kld_name}
}

```
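A config in this format could be generated programmatically. The sketch below enables only the connective feature; the flag names are taken from the format above, while `kld_model` and `featureConfig.json` are placeholder names:

```python
import json

# Flag names from the feature-extractor config format described above.
flags = [
    "arguments_cat_curr", "arguments_verbnet_prev", "head_word_cat_curr",
    "head_word_verbnet_prev", "head_word_verbnet_altlex", "head_word_cat_prev",
    "head_word_cat_altlex", "kld_score", "head_word_verbnet_curr",
    "arguments_verbnet_curr", "framenet", "arguments_cat_prev", "connective",
]
feature_settings = {name: False for name in flags}
feature_settings["connective"] = True  # enable a single feature

config = {
    "framenetSettings": {"binary": True},
    "featureSettings": feature_settings,
    "kldSettings": {"kldDir": "kld_model"},  # placeholder directory name
}
with open("featureConfig.json", "w") as f:
    json.dump(config, f, indent=2)
```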

Data Point Format

It is also possible to run the feature extractor directly on a single data point. 
From the featureExtraction module, create a FeatureExtractor object and call its addFeatures method
on a DataPoint object (note that this does not create any interaction features;
for those you will also need to call makeInteractionFeatures). 
The DataPoint class takes a dictionary as input, in the following format:

```
{
 "sentences": [{"ner": [...], "pos": [...], "words": [...], "stems": [...], "lemmas": [...], "dependencies": [...]},
               {...}],
 "altlexLength": integer,
 "altlex": {"dependencies": [...]}
}
```

The "sentences" list holds the pair of sentences/spans, where the first span begins with the altlex. 
"dependencies" must be a list in which index i holds either a dependency relation string with a governor index integer, or None;
index i into the "words" list is the dependent of that relation. 
To split single-sentence dependency relations, use the function splitDependencies in utils.dependencyUtils.
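A sketch of assembling such a dictionary in Python. All values are invented, the exact per-token shape of "dependencies" is an assumption based on the description above, and the import paths in the trailing comments assume the altlex repository is on your path:

```python
# Invented token annotations; "dependencies" is assumed here to hold a
# [relation, governor_index] pair or None per token (None marks the root).
sentence = {
    "ner": ["O", "O", "O"],
    "pos": ["NN", "VBD", "NN"],
    "words": ["rain", "caused", "flooding"],
    "stems": ["rain", "caus", "flood"],
    "lemmas": ["rain", "cause", "flooding"],
    "dependencies": [["nsubj", 1], None, ["dobj", 1]],
}
data_point_input = {
    "sentences": [sentence, dict(sentence)],  # first span begins with the altlex
    "altlexLength": 1,
    "altlex": {"dependencies": [["nsubj", 1], None, ["dobj", 1]]},
}

# With the altlex repository available, extraction would then look like:
# from featureExtraction import FeatureExtractor, DataPoint
# extractor = FeatureExtractor(config)
# features = extractor.addFeatures(DataPoint(data_point_input))
# extractor.makeInteractionFeatures(...)  # interaction features are separate
```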

### Curation Rationale

[More Information Needed](https://github.com/chridey/altlex)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/chridey/altlex)

#### Who are the source language producers?

[More Information Needed](https://github.com/chridey/altlex)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/chridey/altlex)

#### Who are the annotators?

[More Information Needed](https://github.com/chridey/altlex)

### Personal and Sensitive Information

[More Information Needed](https://github.com/chridey/altlex)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/chridey/altlex)

### Discussion of Biases

[More Information Needed](https://github.com/chridey/altlex)

### Other Known Limitations

[More Information Needed](https://github.com/chridey/altlex)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/chridey/altlex)

### Licensing Information

[More Information Needed](https://github.com/chridey/altlex)

### Citation Information

If you use this dataset, please cite the ACL 2016 paper "Identifying Causal Relations Using Parallel Wikipedia Articles" ([https://aclanthology.org/P16-1135.pdf](https://aclanthology.org/P16-1135.pdf)).

### Contributions

Thanks to [@chridey](https://github.com/chridey/altlex/commits?author=chridey) for adding this dataset.
