---
license: mit
language:
  - en
paperswithcode_id: embedding-data/altlex
pretty_name: altlex
---

Dataset Card for "altlex"

Dataset Description

Homepage: https://github.com/chridey/altlex

Repository: https://github.com/chridey/altlex

Paper: https://aclanthology.org/P16-1135.pdf

Point of Contact: Christopher Hidey

Dataset Summary

Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."

Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.

Supported Tasks and Leaderboards

More Information Needed

Languages

The dataset is in English (en), drawn from English and Simple English Wikipedia.

Dataset Structure

Parallel Wikipedia Format

This is a gzipped, JSON-formatted file. The "files" array gives the names of the English and Simple English Wikipedia input files. The "titles" array holds the title shared by each pair of aligned articles. The "articles" array consists of two arrays, one per Wikipedia version; each must be the same length as the "titles" array, and the same index into any of these arrays points to the aligned articles and their title. Each article within the "articles" array is an array of sentence strings (split into sentences but not word-tokenized).

The format of the dictionary is as follows:

{"files": [english_name, simple_name],
 "articles": [
              [[article_1_sentence_1_string, article_1_sentence_2_string, ...],
               [article_2_sentence_1_string, article_2_sentence_2_string, ...],
               ...
              ],
              [[article_1_sentence_1_string, article_1_sentence_2_string, ...],
               [article_2_sentence_1_string, article_2_sentence_2_string, ...],
               ...
              ]
             ],
  "titles": [title_1_string, title_2_string, ...]
}
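
To read this file, something like the following Python sketch should work (the file name "parallel_wikis.json.gz" is a placeholder, not necessarily the dataset's actual file name):

import gzip
import json

# Placeholder file name; substitute the actual gzipped dump.
with gzip.open("parallel_wikis.json.gz", "rt", encoding="utf-8") as f:
    data = json.load(f)

english_articles, simple_articles = data["articles"]
# Both article arrays are index-aligned with the titles array.
assert len(english_articles) == len(data["titles"]) == len(simple_articles)

for title, english, simple in zip(data["titles"], english_articles, simple_articles):
    # Each article is a list of sentence strings.
    print(title, english[0], simple[0], sep=" | ")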

Parsed Wikipedia Format

This is a gzipped, JSON-formatted list of parsed Wikipedia article pairs. The list stored at "sentences" is of length 2 and stores the English and Simple English Wikipedia versions of the article with the same title.

The data is formatted as follows:

[
 {
  "index": article_index,
  "title": article_title_string,
  "sentences": [[parsed_sentence_1, parsed_sentence_2, ...],
                [parsed_sentence_1, parsed_sentence_2, ...]
               ]
 },
 ...
]
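
A minimal sketch for iterating over this structure (again with a placeholder file name):

import gzip
import json

# Placeholder file name; substitute the actual gzipped dump.
with gzip.open("parsed_wikis.json.gz", "rt", encoding="utf-8") as f:
    parsed_articles = json.load(f)

for article in parsed_articles:
    # "sentences" holds two lists: the English and Simple English versions.
    english_sentences, simple_sentences = article["sentences"]
    print(article["index"], article["title"],
          len(english_sentences), len(simple_sentences))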

Parsed Pairs Format

This is a gzipped, JSON-formatted list of parsed sentences. Paraphrase pairs occupy consecutive even and odd indices: elements 0 and 1 form a pair, elements 2 and 3 form the next, and so on. For the format of each parsed sentence, see "Parsed Sentence Format."

The data is formatted as follows:

[
  ...,
  parsed_sentence_2,
  parsed_sentence_3,
  ...
]
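
Because of this layout, the pairs can be recovered by zipping the even- and odd-indexed slices; a sketch with a placeholder file name:

import gzip
import json

# Placeholder file name; substitute the actual gzipped dump.
with gzip.open("parsed_pairs.json.gz", "rt", encoding="utf-8") as f:
    parsed_sentences = json.load(f)

# Elements 2k and 2k + 1 form the k-th paraphrase pair.
pairs = list(zip(parsed_sentences[0::2], parsed_sentences[1::2]))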

Parsed Sentence Format

Each parsed sentence is of the following format:

{
   "dep": [[[governor_index, dependent_index, relation_string], ...], ...], 
   "lemmas": [[lemma_1_string, lemma_2_string, ...], ...],
   "pos": [[pos_1_string, pos_2_string, ...], ...],
   "parse": [parenthesized_parse_1_string, ...], 
   "words": [[word_1_string, word_2_string, ...], ...] , 
   "ner": [[ner_1_string, ner_2_string, ...], ...]
}
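
The top-level lists are parallel per sentence: entry i of "words", "pos", "lemmas", and "ner" describes the same sentence, and "dep" holds that sentence's dependency triples. A sketch that prints readable triples, assuming 0-based token indices (the card does not state the indexing convention):

def print_dependencies(parsed_sentence, sentence_index=0):
    """Print (governor, relation, dependent) word triples for one sentence.

    Assumes the indices in "dep" are 0-based positions into "words";
    adjust if the parser used 1-based indexing with a root token.
    """
    words = parsed_sentence["words"][sentence_index]
    for governor_index, dependent_index, relation in parsed_sentence["dep"][sentence_index]:
        print(words[governor_index], relation, words[dependent_index])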

Feature Extractor Config Format

{"framenetSettings": 
   {"binary": true/false}, 
 "featureSettings": 
   {
   "arguments_cat_curr": true/false, 
   "arguments_verbnet_prev": true/false, 
   "head_word_cat_curr": true/false, 
   "head_word_verbnet_prev": true/false, 
   "head_word_verbnet_altlex": true/false, 
   "head_word_cat_prev": true/false, 
   "head_word_cat_altlex": true/false, 
   "kld_score": true/false, 
   "head_word_verbnet_curr": true/false, 
   "arguments_verbnet_curr": true/false, 
   "framenet": true/false, 
   "arguments_cat_prev": true/false, 
   "connective": true/false
   }, 
 "kldSettings": 
   {"kldDir": $kld_name}
}
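
Such a config can be generated from Python; the sketch below enables every documented flag, and "KLD_DIR_PLACEHOLDER" stands in for the $kld_name value, which depends on your local setup:

import json

config = {
    "framenetSettings": {"binary": True},
    "featureSettings": {
        "arguments_cat_curr": True,
        "arguments_verbnet_prev": True,
        "head_word_cat_curr": True,
        "head_word_verbnet_prev": True,
        "head_word_verbnet_altlex": True,
        "head_word_cat_prev": True,
        "head_word_cat_altlex": True,
        "kld_score": True,
        "head_word_verbnet_curr": True,
        "arguments_verbnet_curr": True,
        "framenet": True,
        "arguments_cat_prev": True,
        "connective": True,
    },
    "kldSettings": {"kldDir": "KLD_DIR_PLACEHOLDER"},  # placeholder for $kld_name
}

with open("feature_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)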

Data Point Format

It is also possible to run the feature extractor directly on a single data point. Create a FeatureExtractor object from the featureExtraction module and call its addFeatures method on a DataPoint object (note that this does not create any interaction features; for those you will also need to call makeInteractionFeatures). The DataPoint class takes a dictionary as input, in the following format:

{
 "sentences": [{"ner": [...], "pos": [...], "words": [...], "stems": [...], "lemmas": [...], "dependencies": [...]},
               {...}],
 "altlexLength": integer,
 "altlex": {"dependencies": [...]}
}
The sentences list is the pair of sentences/spans where the first span begins with the altlex. Dependencies must be a list in which the element at index i is either a pair of a dependency relation string and a governor index integer, or None; the word at index i in the words list is the dependent of that relation. To split a single sentence's dependency relations into this per-token form, use the function splitDependencies in utils.dependencyUtils.
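
A heavily hedged sketch of that workflow (the import paths, the altlexLength value, and the placeholder spans are all assumptions; the card does not pin down the repository's module layout):

# Every import path and call signature below is an assumption inferred from
# the description above; check the repository for the exact module layout.
from featureExtraction.featureExtractor import FeatureExtractor
from featureExtraction.dataPoint import DataPoint
from utils.dependencyUtils import splitDependencies

# Placeholder spans in the documented format; real values come from the
# parsed-sentence data, with dependencies split via splitDependencies.
span_with_altlex = {"ner": [], "pos": [], "words": [], "stems": [],
                    "lemmas": [], "dependencies": []}
other_span = {"ner": [], "pos": [], "words": [], "stems": [],
              "lemmas": [], "dependencies": []}

datapoint_dict = {
    "sentences": [span_with_altlex, other_span],  # first span begins with the altlex
    "altlexLength": 2,                            # hypothetical altlex token count
    "altlex": {"dependencies": []},
}

extractor = FeatureExtractor()
point = DataPoint(datapoint_dict)
features = extractor.addFeatures(point)
# addFeatures does not build interaction features; a further call to
# makeInteractionFeatures (signature not documented here) is required for those.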

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

MIT (as declared in the card metadata above).

Citation Information

Christopher Hidey and Kathy McKeown. 2016. Identifying Causal Relations Using Parallel Wikipedia Articles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://aclanthology.org/P16-1135

Contributions

Thanks to @chridey for adding this dataset.