---
license: cc-by-sa-4.0
dataset_info:
  - config_name: default
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 203133024
        num_examples: 109486
      - name: validation
        num_bytes: 11424453
        num_examples: 6173
      - name: test
        num_bytes: 11808744
        num_examples: 6219
    download_size: 143418920
    dataset_size: 226366221
  - config_name: sentences
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 202232488.28022403
        num_examples: 1572268
      - name: validation
        num_bytes: 11383118.592627235
        num_examples: 88647
      - name: test
        num_bytes: 11756845.828945814
        num_examples: 90769
    download_size: 149698561
    dataset_size: 225372452.70179707
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
  - config_name: sentences
    data_files:
      - split: train
        path: sentences/train-*
      - split: validation
        path: sentences/validation-*
      - split: test
        path: sentences/test-*
---

# Dataset Card for "wiki40b-da-clean"

## Dataset Summary

This dataset is a slightly modified and filtered version of the Wiki40b-da dataset, which is itself a fork of the Wiki-40B dataset on the Hugging Face Hub.

The dataset contains two subsets; the original columns "wikidata_id" and "version_id" have been removed from both:

- `text`: contains the filtered text of the Wikipedia paragraphs, with formatting removed (START_ARTICLE, START_PARAGRAPH and `\n` markers stripped).
- `sentences`: contains the individual sentences from the `text` subset, filtered to include only sentences of more than 5 and fewer than 100 words. Sentences are split after punctuation (!, ?, .) that is followed by a space and a capital letter; see the sketch after this list.

The dataset is curated so that the `text` config can be used for masked next token prediction (MNTP) and the `sentences` config for SimCSE, in the context of training encoder and decoder models.

The training, validation and test splits are the original ones.
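
Both configs can be loaded with the `datasets` library. A minimal sketch; the repo ID `jealk/wiki40b-da-clean` is assumed from this card and may need adjusting:

```python
from datasets import load_dataset

# Repo ID assumed from the card; adjust if the dataset is hosted under a different namespace.
paragraphs = load_dataset("jealk/wiki40b-da-clean", "default")
sentences = load_dataset("jealk/wiki40b-da-clean", "sentences")

print(paragraphs)                     # DatasetDict with train / validation / test splits
print(sentences["train"][0]["text"])  # a single filtered sentence
```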

## Languages

The dataset is available in Danish (da).

## Dataset

### text (default)

An example from the `text` config looks as follows.

```
{
 'text': "Tekstiler havde mange forskellige formål i oldtidens Ægypten, og blev brugt af (...)",
}
```

### sentences

An example from the `sentences` config looks as follows.

```
{
 'text': "Det tog tre måneder, før hørren kunne høstes.",
}
```

## Additional Information

### Dataset Curators

Jesper Alkestrup from The Tech Collective filtered and uploaded the dataset to the Hugging Face Hub.

Thanks to Dan Saattrup Nielsen from the Alexandra Institute for uploading the original Wiki40b-da dataset.

### Licensing Information

The dataset is licensed under the CC BY-SA 4.0 license.