---
license: mit
task_categories:
- summarization
language:
- de
tags:
- wikipedia
- wikidata
- Relation Extraction
- REBEL
pretty_name: German REBEL Dataset
size_categories:
- 100K<n<1M
---

# Dataset Card for German REBEL Dataset

## Dataset Structure

Each example consists of a `title`, a `context` (a snippet of German Wikipedia text), an `id`, and a `triplets` string in which the Wikidata triples found in that snippet are linearized with the special tokens `<triplet>`, `<subj>` and `<obj>`; a sketch of how to decode this format back into triples is given at the end of this card. The following excerpt from the generation script shows how the `triplets` string and the cleaned `context` are produced for every sentence:

```python
# Excerpt: for each sentence, the Wikidata triples whose subject and object
# both occur in the sentence are linearized into the REBEL format using the
# special tokens <triplet>, <subj> and <obj>.
decoder_output = '<triplet> '
for entity in entities:
    # `triplets` holds the triples whose subject is the current entity
    decoder_output += entity['surfaceform'] + ' <subj> '
    for triplet in triplets:
        decoder_output += triplet['object']['surfaceform'] + ' <obj> ' + triplet['predicate']['surfaceform'] + ' <subj> '
    decoder_output = decoder_output[:-len(' <subj> ')]
    decoder_output += ' <triplet> '
decoder_output = decoder_output[:-len(' <triplet> ')]
count += 1
prev_len += len(text)

if len(decoder_output) == 0:
    text = ''
    continue

# Separate punctuation and brackets with spaces and collapse repeated whitespace.
text = re.sub(r'([\[\].,!?()])', r' \1 ', text.replace('()', ''))
text = re.sub(r'\s{2,}', ' ', text)

yield article['uri'] + '-' + str(count), {
    "title": article['title'],
    "context": text,
    "id": article['uri'] + '-' + str(count),
    "triplets": decoder_output,
}
text = ''
```

## Dataset Creation

### Curation Rationale

This dataset was created to enable the pre-training of a German BART-based model for Relation Extraction.

### Source Data

#### Who are the source language producers?

Any Wikipedia and Wikidata contributor.

### Annotations

#### Annotation process

The dataset was built with the extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI (Natural Language Inference) filtering.

#### Who are the annotators?

The annotations were produced automatically; there are no human annotators.

### Personal and Sensitive Information

All text is taken from Wikipedia, so any personal or sensitive information present there may also be present in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset serves as a pre-training resource for Relation Extraction models and is distantly annotated, so it should only be used as such. A model trained solely on this dataset may produce hallucinations due to the silver (automatically annotated) nature of the data.

### Discussion of Biases

Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases within those sources. For Wikipedia text, see for example Dinan et al. (2020) on biases in Wikipedia (esp. Table 1), or Blodgett et al. (2020) for a more general discussion of the topic. For Wikidata, there are class imbalances, also resulting from Wikipedia.

### Other Known Limitations

None for now.

## Additional Information

### Dataset Curators

Me

### Licensing Information

Since anyone can recreate the dataset on their own using the linked GitHub repository, it is released under the MIT License.

### Citation Information

This dataset was inspired by:

```
@inproceedings{huguet-cabot-navigli-2021-rebel,
    title = "REBEL: Relation Extraction By End-to-end Language generation",
    author = "Huguet Cabot, Pere-Llu{\'\i}s and Navigli, Roberto",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
}
```

### Contributions

None for now.
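
### Decoding the linearized triplets

As referenced in the Dataset Structure section, here is a minimal sketch of how the linearized `triplets` string can be decoded back into (subject, predicate, object) tuples. It assumes the REBEL special tokens `<triplet>`, `<subj>` and `<obj>` shown in the generation excerpt above; the function name `decode_triplets` and the example string are purely illustrative and not part of the dataset or its tooling.

```python
from typing import List, Tuple


def decode_triplets(linearized: str) -> List[Tuple[str, str, str]]:
    """Decode a REBEL-style linearized string into (subject, predicate, object) tuples."""
    triples = []
    # Every "<triplet>" block contains one subject followed by one or more
    # "object <obj> predicate" chunks separated by "<subj>".
    for block in linearized.split('<triplet>'):
        block = block.strip()
        if not block:
            continue
        subject, _, rest = block.partition('<subj>')
        for chunk in rest.split('<subj>'):
            obj, _, predicate = chunk.partition('<obj>')
            if predicate.strip():
                triples.append((subject.strip(), predicate.strip(), obj.strip()))
    return triples


# Purely illustrative example (not taken from the dataset):
print(decode_triplets("<triplet> Berlin <subj> Deutschland <obj> Hauptstadt von"))
# -> [('Berlin', 'Hauptstadt von', 'Deutschland')]
```

Splitting on the special tokens mirrors the way the generation script concatenates them, so no additional bookkeeping is needed to recover the triples.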