---
license: other
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*/*.parquet
  - config_name: lexdk
    data_files:
      - split: train
        path: data/lexdk/*.parquet
  - config_name: opensubtitles
    data_files:
      - split: train
        path: data/opensubtitles/*.parquet
  - config_name: retsinformationdk
    data_files:
      - split: train
        path: data/retsinformationdk/*.parquet
  - config_name: ep
    data_files:
      - split: train
        path: data/ep/*.parquet
  - config_name: ft
    data_files:
      - split: train
        path: data/ft/*.parquet
  - config_name: wikisource
    data_files:
      - split: train
        path: data/wikisource/*.parquet
  - config_name: spont
    data_files:
      - split: train
        path: data/spont/*.parquet
  - config_name: tv2r
    data_files:
      - split: train
        path: data/tv2r/*.parquet
  - config_name: adl
    data_files:
      - split: train
        path: data/adl/*.parquet
  - config_name: hest
    data_files:
      - split: train
        path: data/hest/*.parquet
  - config_name: skat
    data_files:
      - split: train
        path: data/skat/*.parquet
  - config_name: dannet
    data_files:
      - split: train
        path: data/dannet/*.parquet
  - config_name: retspraksis
    data_files:
      - split: train
        path: data/retspraksis/*.parquet
  - config_name: wikibooks
    data_files:
      - split: train
        path: data/wikibooks/*.parquet
  - config_name: jvj
    data_files:
      - split: train
        path: data/jvj/*.parquet
  - config_name: gutenberg
    data_files:
      - split: train
        path: data/gutenberg/*.parquet
  - config_name: botxt
    data_files:
      - split: train
        path: data/botxt/*.parquet
  - config_name: depbank
    data_files:
      - split: train
        path: data/depbank/*.parquet
  - config_name: naat
    data_files:
      - split: train
        path: data/naat/*.parquet
  - config_name: synne
    data_files:
      - split: train
        path: data/synne/*.parquet
  - config_name: wiki
    data_files:
      - split: train
        path: data/wiki/*.parquet
  - config_name: nordjyllandnews
    data_files:
      - split: train
        path: data/nordjyllandnews/*.parquet
  - config_name: relig
    data_files:
      - split: train
        path: data/relig/*.parquet
annotations_creators:
  - no-annotation
language_creators:
  - crowdsourced
language:
  - da
multilinguality:
  - monolingual
source_datasets:
  - original
task_categories:
  - text-generation
task_ids:
  - language-modeling
pretty_name: Danish Dynaword
language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
---

# 🧨 Danish Dynaword

- **Language**: dan, dansk, Danish
- **License**: Permissible; see the respective dataset sections
- **Models**: For models trained on this data, see danish-foundation-models
- **Contact**: If you have questions about this project, please open an issue here

## Dataset Description

  • Language: dan, dansk, Danish
  • Number of samples: 588.48K
  • Number of tokens (Llama 3): 1.84B
  • Average document length (characters): 9222.58

### Dataset Summary

Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains, intended to be updated with new data sources over time. If you would like to contribute a dataset, see the contributing section.

### Loading the dataset

```python
from datasets import load_dataset

name = "danish-foundation-models/danish-dynaword"
ds = load_dataset(name, split="train")
sample = ds[1]  # see "Data Instances" below
```

or load it by streaming the data:

```python
ds = load_dataset(name, split="train", streaming=True)
dataset_iter = iter(ds)
sample = next(dataset_iter)
```

You can also load a single subset at a time:

```python
ds = load_dataset(name, "adl", split="train")
```

As Danish Dynaword is continually expanded and curated, you can make sure that you get the same dataset every time by specifying the revision:

```python
ds = load_dataset(name, revision="{desired revision}")
```

### Languages

This dataset includes the following languages:

  • dan-Latn
  • dan-Latn-bornholm
  • dan-Latn-synnejyl

Language is denoted using BCP-47 tags, combining an ISO 639-3 language code with an ISO 15924 script code. The last element denotes the regional variant.
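As an illustrative sketch (this helper is not part of the dataset tooling), a tag following the language-script-variant layout described above can be split into its components:

```python
def parse_tag(tag: str) -> dict:
    """Split a BCP-47-style tag such as 'dan-Latn-bornholm' into its parts.

    Illustrative only; assumes the language-script-variant layout above.
    """
    parts = tag.split("-")
    return {
        "language": parts[0],                             # ISO 639-3, e.g. "dan"
        "script": parts[1] if len(parts) > 1 else None,   # ISO 15924, e.g. "Latn"
        "variant": parts[2] if len(parts) > 2 else None,  # e.g. "bornholm"
    }

print(parse_tag("dan-Latn-bornholm"))
```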

## Dataset Structure

The dataset contains text from different sources which are thoroughly defined in Source Data.

### Data Instances

Each entry in the dataset consists of a single text with associated metadata:

```python
{
  "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
  "source": "adl",
  "id": "adl_aakjaer06val",
  "added": "2020-09-14",
  "created": "1700-01-01, 2022-01-01",
  "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
  "domain": "Wiki & Books",
  "metadata": {
    "source-pretty": "Archive for Danish Literature"
  }
}
```

### Data Fields

An entry in the dataset consists of the following fields:

  • `text` (str): The content of the document.
  • `source` (str): The source of the document (see Source Data).
  • `id` (str): A unique identifier for each document.
  • `added` (str): The date when the document was added to this collection.
  • `created` (str): A date range for when the document was originally created.
  • `license` (str): The license of the document. The licenses vary according to the source.
  • `domain` (str): The domain of the source.
  • `metadata/source-pretty` (str): The long-form version of the short-form source name.
  • `metadata/*`: Potentially additional metadata.
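As a small illustrative sketch (using a hard-coded sample mirroring the "Data Instances" example rather than the loaded dataset), the fields above can be accessed and the comma-separated `created` date range parsed like this:

```python
from datetime import date

# Hard-coded sample mirroring the "Data Instances" example above (text truncated).
sample = {
    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR ...",
    "source": "adl",
    "id": "adl_aakjaer06val",
    "added": "2020-09-14",
    "created": "1700-01-01, 2022-01-01",
    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
    "domain": "Wiki & Books",
    "metadata": {"source-pretty": "Archive for Danish Literature"},
}

# `created` holds a comma-separated start/end pair of ISO dates.
start, end = (date.fromisoformat(s.strip()) for s in sample["created"].split(","))
print(sample["source"], start.year, end.year)
```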

### Data Splits

The entire corpus is provided in the train split.

## Dataset Creation

### Curation Rationale

These datasets were collected and curated with the intention of making large quantities of Danish text data available. While they were collected with language model development in mind, they are likely to have multiple other uses, such as examining language development and differences across domains.

### Annotations

This data generally contains no annotation besides the metadata attached to each sample, such as the domain it belongs to.

### Source Data

Below follows a brief overview of the sources in the corpus along with their individual license.

| Source | Description | N. Tokens | License |
| ------ | ----------- | --------: | ------- |
| lexdk | Permissible-use articles from lex.dk | 5.69M | CC-BY-SA 4.0 |
| opensubtitles | Danish subsection of OpenSubtitles | 271.60M | CC-0 |
| retsinformationdk | retsinformation.dk (legal-information.dk), the official legal information system of Denmark | 516.54M | Danish Copyright Law |
| ep | The Danish subsection of Europarl | 100.89M | CC-0 |
| ft | Records from all meetings of the Danish parliament (Folketinget) in the parliament hall | 114.09M | CC-0 |
| wikisource | The Danish subsection of Wikisource | 5.34M | CC-0 |
| spont | Conversational samples collected as part of research projects at Aarhus University | 1.56M | CC-0 |
| tv2r | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | CC-BY-SA 4.0 |
| adl | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | CC-0 |
| hest | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | CC-0 |
| skat | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | CC-0 |
| dannet | DanNet is a Danish WordNet | 1.52M | DanNet 1.0 License |
| retspraksis | Case law or judicial practice in Denmark derived from Retspraksis | 57.08M | CC-0 |
| wikibooks | The Danish subsection of Wikibooks | 6.24M | CC-0 |
| jvj | The works of the Danish author and poet Johannes V. Jensen | 3.55M | CC-BY-SA 4.0 |
| gutenberg | The Danish subsection of Project Gutenberg | 6.76M | Gutenberg License |
| botxt | The Bornholmsk Ordbog Dictionary Project | 847.97K | CC-0 |
| depbank | The Danish subsection of the Universal Dependencies Treebank | 185.45K | CC-BY-SA 4.0 |
| naat | Danish speeches from 1930-2022 | 286.68K | CC-0 |
| synne | Dataset collected from synnejysk forening's website, covering the Danish dialect sønderjysk | 52.51K | CC-0 |
| wiki | The Danish subsection of Wikipedia | 122.00M | CC-0 |
| nordjyllandnews | Articles from the Danish newspaper TV2 Nord | 37.91M | CC-0 |
| relig | Danish religious texts from 1700-2022 | 1.24M | CC-0 |
| **Total** | | 1.84B | |
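As a quick illustrative sketch (using a few token counts copied by hand from the table above, not computed from the data), the relative contribution of each source to the corpus can be derived like this:

```python
# Token counts (in millions) for a few sources, copied from the table above.
tokens_m = {
    "retsinformationdk": 516.54,
    "hest": 389.33,
    "opensubtitles": 271.60,
    "wiki": 122.00,
}

total_m = 1840.0  # ~1.84B tokens in the full corpus

# Share of the full corpus contributed by each source, in percent.
shares = {src: round(100 * n / total_m, 1) for src, n in tokens_m.items()}
print(shares)
```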


### Dataset Statistics

## Additional Information

### Contributing to the dataset

We welcome contributions to the dataset, such as new sources, better data filtering, and so on. To get started contributing, please see the contribution guidelines.

### Citation Information

This version expands upon existing dataset sources such as the Danish Gigaword. We recommend that you cite the source of the dataset when using these datasets.

### Disclaimer

We do not own any of the text from which the data has been extracted. We only offer files that we believe we are free to redistribute. If any doubt occurs about the legality of any of our file downloads, please contact us and we will take them down right away.

### Notice and take down policy

Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

  • Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
  • Clearly identify the copyrighted work claimed to be infringed.
  • Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

You can contact us through this channel.

Take down: We will comply with legitimate requests by removing the affected sources from the next release of the corpus.

