---
license: mit
task_categories:
  - summarization
language:
  - de
tags:
  - wikipedia
  - wikidata
  - Relation Extraction
  - REBEL
pretty_name: German REBEL Dataset
size_categories:
  - 100K<n<1M
---

# Dataset Card for German REBEL Dataset

## Dataset Summary

This dataset is the German version of Babelscape/rebel-dataset. It was generated using cRocoDiLe. The Wikipedia version is from November 2022.

## Languages

- German

## Dataset Structure

{"docid": "9400003",
 "title": "Odin-Gletscher",
 "uri": "Q7077818",
 "text": "Der Odin-Gletscher ist ein kleiner Gletscher im ostantarktischen Viktorialand. Er fließt von den Westhängen des Mount Odin in der Asgard Range.\n\nDas New Zealand Antarctic Place-Names Committee benannte ihn in Anlehnung an die Benennung des Mount Odin nach Odin, Göttervater, Kriegs- und Totengott der nordischen Mythologie.",
 "entities": [{"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, ... ],
 "triples": [{"subject": {"uri": "Q7077818", "boundaries": [4, 18], "surfaceform": "Odin-Gletscher", "annotator": "Me"},
              "predicate": {"uri": "P31", "boundaries": null, "surfaceform": "ist ein(e)", "annotator": "NoSubject-Triple-aligner"},
              "object": {"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, "sentence_id": 0,
              "dependency_path": null,
              "confidence": 0.99560546875,
              "annotator": "NoSubject-Triple-aligner"}, ...]
}
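The entity `boundaries` are character indices into `text`, so the surface form is exactly the covered span. A minimal sanity-check sketch, using the example record above truncated to its first sentence and first entity:

```python
import json

# Example record from above, abbreviated to the first sentence and first entity.
record = json.loads("""
{"docid": "9400003",
 "title": "Odin-Gletscher",
 "uri": "Q7077818",
 "text": "Der Odin-Gletscher ist ein kleiner Gletscher im ostantarktischen Viktorialand.",
 "entities": [{"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}]}
""")

for entity in record["entities"]:
    start, end = entity["boundaries"]
    # The surface form is exactly the text span covered by the boundaries.
    assert record["text"][start:end] == entity["surfaceform"]  # "Gletscher"
```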

### Data Instances

The dataset is about 1.1 GB unpacked and 195 MB zipped.

### Data Fields

- `docid`: document ID (e.g. `"9644601"`)
- `title`: Wikipedia title
- `uri`: Wikidata URI (e.g. `"Q4290759"`)
- `text`: Wikipedia abstract
- `entities`: a list of entities
  - `uri`: Wikidata URI
  - `boundaries`: tuple of indices of the entity in the abstract
  - `surfaceform`: text form of the entity
  - `annotator`: one of several annotator classes
- `triples`: list of triples as dictionaries
  - `sentence_id`: sentence number the triple appears in
  - `confidence`: float, the confidence of the NLI model
  - `subject`
    - `uri`: Wikidata entity URI
    - `boundaries`
    - `surfaceform`
    - `annotator`
  - `predicate`
    - `uri`: Wikidata relation URI
    - `boundaries`: always `null`
    - `surfaceform`: Wikidata relation name
    - `annotator`
  - `object`
    - `uri`: Wikidata entity URI
    - `boundaries`
    - `surfaceform`
    - `annotator`
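The dump stores one JSON document per line (JSONL), so the fields above can be accessed directly while streaming the file. A minimal reading sketch; the file name `rebel_de.jsonl` is an assumption and should be replaced with the actual dump file:

```python
import json

# Assumed file name for the unpacked dump (one JSON document per line).
with open("rebel_de.jsonl", encoding="utf-8") as f:
    for line in f:
        article = json.loads(line)
        for triple in article["triples"]:
            print(
                triple["subject"]["surfaceform"],
                triple["predicate"]["surfaceform"],  # e.g. "ist ein(e)" for P31
                triple["object"]["surfaceform"],
                triple["confidence"],
            )
```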

### Data Splits

No splits are provided for now, since the relation classes are quite imbalanced. To read the dataset, you can adapt the function below, provided by https://github.com/Babelscape/rebel:

```python
import json
import logging
import re

import pandas as pd


def _generate_examples(self, filepath):
    """This function returns the examples in the raw (text) form."""
    logging.info("generating examples from = %s", filepath)
    # Relation surface forms to keep, read from a tab-separated file.
    relations_df = pd.read_csv(self.config.data_files['relations'], header=None, sep='\t')
    relations = list(relations_df[0])

    with open(filepath, encoding="utf-8") as f:
        for id_, row in enumerate(f):
            article = json.loads(row)
            prev_len = 0
            if len(article['triples']) == 0:
                continue
            count = 0
            for text_paragraph in article['text'].split('\n'):
                if len(text_paragraph) == 0:
                    continue
                sentences = re.split(r'(?<=[.])\s', text_paragraph)
                text = ''
                for sentence in sentences:
                    text += sentence + ' '
                    # Keep accumulating sentences while the cut point falls inside an entity.
                    if any([entity['boundaries'][0] < len(text) + prev_len < entity['boundaries'][1] for entity in article['entities']]):
                        continue
                    entities = sorted([entity for entity in article['entities'] if prev_len < entity['boundaries'][1] <= len(text) + prev_len], key=lambda tup: tup['boundaries'][0])
                    decoder_output = '<triplet> '
                    for int_ent, entity in enumerate(entities):
                        # Triples whose subject is this entity and whose subject and object both lie in the current chunk.
                        triplets = sorted([triplet for triplet in article['triples'] if triplet['subject'] == entity and prev_len < triplet['subject']['boundaries'][1] <= len(text) + prev_len and prev_len < triplet['object']['boundaries'][1] <= len(text) + prev_len and triplet['predicate']['surfaceform'] in relations], key=lambda tup: tup['object']['boundaries'][0])
                        if len(triplets) == 0:
                            continue
                        decoder_output += entity['surfaceform'] + ' <subj> '
                        for triplet in triplets:
                            decoder_output += triplet['object']['surfaceform'] + ' <obj> ' + triplet['predicate']['surfaceform'] + ' <subj> '
                        decoder_output = decoder_output[:-len(' <subj> ')]
                        decoder_output += ' <triplet> '
                    decoder_output = decoder_output[:-len(' <triplet> ')]
                    count += 1
                    prev_len += len(text)

                    if len(decoder_output) == 0:
                        text = ''
                        continue

                    # Space out punctuation and collapse repeated whitespace.
                    text = re.sub(r'([\[\].,!?()])', r' \1 ', text.replace('()', ''))
                    text = re.sub(r'\s{2,}', ' ', text)

                    yield article['uri'] + '-' + str(count), {
                        "title": article['title'],
                        "context": text,
                        "id": article['uri'] + '-' + str(count),
                        "triplets": decoder_output,
                    }
                    text = ''
```
## Dataset Creation

### Curation Rationale

This dataset was created to enable the pre-training of a German BART-based model for Relation Extraction.

### Source Data

#### Who are the source language producers?

Any Wikipedia and Wikidata contributor.

### Annotations

#### Annotation process

The dataset was built with the cRocoDiLe extraction pipeline: Automatic Relation Extraction Dataset with NLI filtering.

#### Who are the annotators?

Automatic annotations.

### Personal and Sensitive Information

All text comes from Wikipedia; any personal or sensitive information present there may also be present in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset serves as a pre-training step for Relation Extraction models. It is distantly annotated, hence it should only be used as such. A model trained solely on this dataset may produce hallucinations stemming from the silver nature of the dataset.

### Discussion of Biases

Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases within those sources.

For Wikipedia text, see for example Dinan et al. (2020) on biases in Wikipedia (esp. Table 1), or Blodgett et al. (2020) for a more general discussion of the topic.

For Wikidata, there are class imbalances, which also result from Wikipedia.

### Other Known Limitations

None for now.

## Additional Information

### Dataset Curators

Me

### Licensing Information

Since anyone can create the dataset on their own using the linked GitHub repository, I am using the MIT License.

### Citation Information

Inspired by:

```bibtex
@inproceedings{huguet-cabot-navigli-2021-rebel,
    title = "REBEL: Relation Extraction By End-to-end Language generation",
    author = "Huguet Cabot, Pere-Llu{\'\i}s  and
      Navigli, Roberto",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
}
```

### Contributions

None for now