---
license: other
configs:
- config_name: default
data_files:
- split: train
path: 'data/*/*.parquet'
- config_name: lexdk
data_files:
- split: train
path: data/lexdk/*.parquet
- config_name: opensubtitles
data_files:
- split: train
path: data/opensubtitles/*.parquet
- config_name: retsinformationdk
data_files:
- split: train
path: data/retsinformationdk/*.parquet
- config_name: ep
data_files:
- split: train
path: data/ep/*.parquet
- config_name: ft
data_files:
- split: train
path: data/ft/*.parquet
- config_name: wikisource
data_files:
- split: train
path: data/wikisource/*.parquet
- config_name: spont
data_files:
- split: train
path: data/spont/*.parquet
- config_name: tv2r
data_files:
- split: train
path: data/tv2r/*.parquet
- config_name: adl
data_files:
- split: train
path: data/adl/*.parquet
- config_name: hest
data_files:
- split: train
path: data/hest/*.parquet
- config_name: skat
data_files:
- split: train
path: data/skat/*.parquet
- config_name: dannet
data_files:
- split: train
path: data/dannet/*.parquet
- config_name: retspraksis
data_files:
- split: train
path: data/retspraksis/*.parquet
- config_name: wikibooks
data_files:
- split: train
path: data/wikibooks/*.parquet
- config_name: jvj
data_files:
- split: train
path: data/jvj/*.parquet
- config_name: gutenberg
data_files:
- split: train
path: data/gutenberg/*.parquet
- config_name: botxt
data_files:
- split: train
path: data/botxt/*.parquet
- config_name: depbank
data_files:
- split: train
path: data/depbank/*.parquet
- config_name: naat
data_files:
- split: train
path: data/naat/*.parquet
- config_name: synne
data_files:
- split: train
path: data/synne/*.parquet
- config_name: wiki
data_files:
- split: train
path: data/wiki/*.parquet
- config_name: nordjyllandnews
data_files:
- split: train
path: data/nordjyllandnews/*.parquet
- config_name: relig
data_files:
- split: train
path: data/relig/*.parquet
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Dynaword
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
<!--
readme structure is inspired by:
https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
-->
# 🧨 Danish Dynaword
| | |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language** | dan, dansk, Danish |
| **License**  | Permissible, see the respective datasets                                                                                                                       |
| **Models**   | For models trained on this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models)                                              |
| **Contact**  | If you have questions about this project, please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
## Table of Contents
- [🧨 Danish Dynaword](#-danish-dynaword)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Loading the dataset](#loading-the-dataset)
- [Languages:](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Source Data](#source-data)
- [Dataset Statistics](#dataset-statistics)
- [Additional Information](#additional-information)
- [Contributing to the dataset](#contributing-to-the-dataset)
- [Citation Information](#citation-information)
- [Disclaimer](#disclaimer)
- [Notice and take down policy](#notice-and-take-down-policy)
## Dataset Description
<!-- START-DESC-STATS -->
- **Language**: dan, dansk, Danish
- **Number of samples**: 588.48K
- **Number of tokens (Llama 3)**: 1.84B
- **Average document length (characters)**: 9222.58
<!-- END-DESC-STATS -->
### Dataset Summary
Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains, intended to be updated with new data sources over time. If you would like to contribute a dataset, see the [contribute section](#contributing-to-the-dataset).
### Loading the dataset
```py
from datasets import load_dataset
name = "danish-foundation-models/danish-dynaword"
ds = load_dataset(name, split="train")
sample = ds[1] # see "Data Instances" below
```
or load it by streaming the data
```py
ds = load_dataset(name, split="train", streaming=True)
dataset_iter = iter(ds)
sample = next(dataset_iter)
```
You can also load a single subset at a time:
```py
ds = load_dataset(name, "adl", split="train")
```
As Danish Dynaword is continually expanded and curated, you can make sure that you get the same dataset every time by specifying the revision:
```py
ds = load_dataset(name, revision="{desired revision}")
```
### Languages:
This dataset includes the following languages:
- dan-Latn
- dan-Latn-bornholm
- dan-Latn-synnejyl
Languages are denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag) tags, combining the ISO 639-3 language code with the ISO 15924 script code. The last element denotes the regional variant.
## Dataset Structure
The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
### Data Instances
Each entry in the dataset consists of a single text along with associated metadata.
<!-- START-SAMPLE -->
```py
{
"text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
"source": "adl",
"id": "adl_aakjaer06val",
"added": "2020-09-14",
"created": "1700-01-01, 2022-01-01",
"license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
"domain": "Wiki & Books",
"metadata": {
"source-pretty": "Archive for Danish Literature"
}
}
```
### Data Fields
An entry in the dataset consists of the following fields:
- `text`(`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `id` (`str`): A unique identifier for each document.
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range during which the document was originally created.
- `license` (`str`): The license of the document. The licenses vary according to the source.
- `domain` (`str`): The domain of the source.
- `metadata/source-pretty` (`str`): The long-form version of the short-form source name.
- `metadata/*`: Potentially additional metadata.
<!-- END-SAMPLE -->
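Because licensing varies by source, it can be useful to filter the corpus down to documents whose `license` field matches your requirements. Below is a minimal sketch using the `filter` method from `datasets`; the substring check against `"CC0 1.0"` is an assumption based on the sample above, and the exact license strings may differ between sources.

```py
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

# Keep only documents whose license text mentions CC0.
# NOTE: license strings vary between sources; adjust the check to your needs.
cc0_ds = ds.filter(lambda example: "CC0 1.0" in example["license"])

# Which sources remain after filtering?
print(sorted(set(cc0_ds["source"])))
```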
### Data Splits
The entire corpus is provided in the `train` split.
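If you need a held-out set for evaluation, you can derive one yourself. A minimal sketch using the `train_test_split` method from `datasets` (the 5% test size and the seed are arbitrary choices, not part of the dataset):

```py
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

# Carve out a small held-out set; size and seed are arbitrary.
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```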
## Dataset Creation
### Curation Rationale
These datasets were collected and curated with the intention of making large quantities of Danish text data available. While this was collected with the intention of developing language models it is likely to have multiple other uses such as examining language development and differences across domains.
### Annotations
This data generally contains no annotation besides the metadata attached to each sample such as what domain it belongs to.
### Source Data
Below follows a brief overview of the sources in the corpus along with their individual licenses.
<!-- START-MAIN TABLE -->
| Source | Description | N. Tokens | License |
|:--------------------|:-----------------------------------------------------------------------------------------------------------------------------|:------------|:-----------------------|
| [lexdk] | Permissible use articles from [lex.dk](https://lex.dk) | 5.69M | [CC-BY-SA 4.0] |
| [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.60M | [CC-0] |
| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
| [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
| [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
| [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
| [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
| [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
| [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
| [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
| [botxt] | The Bornholmsk Ordbog Dictionary Project | 847.97K | [CC-0] |
| [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
| [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
| [wiki] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
| [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
| [relig] | Danish religious texts from 1700-2022 | 1.24M | [CC-0] |
| **Total** | | 1.84B | |
[lexdk]: data/lexdk/lexdk.md
[opensubtitles]: data/opensubtitles/opensubtitles.md
[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
[ep]: data/ep/ep.md
[ft]: data/ft/ft.md
[wikisource]: data/wikisource/wikisource.md
[spont]: data/spont/spont.md
[tv2r]: data/tv2r/tv2r.md
[adl]: data/adl/adl.md
[hest]: data/hest/hest.md
[skat]: data/skat/skat.md
[dannet]: data/dannet/dannet.md
[retspraksis]: data/retspraksis/retspraksis.md
[wikibooks]: data/wikibooks/wikibooks.md
[jvj]: data/jvj/jvj.md
[gutenberg]: data/gutenberg/gutenberg.md
[botxt]: data/botxt/botxt.md
[depbank]: data/depbank/depbank.md
[naat]: data/naat/naat.md
[synne]: data/synne/synne.md
[wiki]: data/wiki/wiki.md
[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
[relig]: data/relig/relig.md
[CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
[CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
[Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
[DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
[Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
<!-- END-MAIN TABLE -->
You can learn more about each dataset by following the links in the table above.
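As a quick sanity check against the table above, you can compute document counts per source directly from the loaded data. This counts documents rather than tokens, since token counts depend on the tokenizer used:

```py
from collections import Counter

from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

# Number of documents contributed by each source.
docs_per_source = Counter(ds["source"])
for source, n_docs in docs_per_source.most_common():
    print(f"{source}: {n_docs}")
```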
<!-- ### Quality Control
Dynaword performs quality checks along with each PR. These quality checks include:
- ensuring unique ids
TODO:
- checking for duplicates
-->
### Dataset Statistics
<!-- START-DATASET PLOTS -->
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
<!-- END-DATASET PLOTS -->
## Additional Information
### Contributing to the dataset
We welcome contributions to the dataset, such as new sources, better data filtering, and so on. To get started on contributing, please see [the contribution guidelines](CONTRIBUTING.md).
### Citation Information
This version expands upon existing dataset sources such as [Danish Gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the sources of the individual datasets when using them.
### Disclaimer
We do not own any of the text from which the data has been extracted.
We only offer files that we believe we are free to redistribute. If any doubt arises about the legality of any of our files, we will take them down promptly after being [contacted](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
### Notice and take down policy
Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
You can contact us through [this channel](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
Take down: We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
---
<h3 style="display: flex; align-items: center;">
<a href="https://www.foundationmodels.dk">
<img src="./docs/icon.png" width="30" style="margin-right: 10px;" />
</a>
A <a href="https://www.foundationmodels.dk">Danish Foundation Models</a> dataset
</h3> |