Datasets:
Working with the dataset locally
A Hugging Face datasets repository is a Git repository like any other. You can simply clone it like so:
git clone https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
cd danish-dynaword
You can then work with the dataset locally like so:
from datasets import load_dataset

name = "../."  # local path to the repo, instead of "danish-foundation-models/danish-dynaword"
dataset = load_dataset(name, split="train")
# make transformations here
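As a sketch of what such a transformation might look like, here is a filter that drops rows with (near-)empty text. With `datasets` this would be `dataset.filter(lambda row: len(row["text"].strip()) > 0)`; below, plain dicts stand in for dataset rows so the sketch runs without the library (the "text" and "id" column names are assumptions about the format):

```python
# a sketch of a simple transformation: drop rows whose text is empty or whitespace.
# plain dicts stand in for dataset rows so this runs without the `datasets` library.
rows = [
    {"id": "doc_0", "text": "Hej verden"},
    {"id": "doc_1", "text": "   "},
    {"id": "doc_2", "text": "Endnu et dokument"},
]

filtered = [row for row in rows if len(row["text"].strip()) > 0]
print([row["id"] for row in filtered])  # ['doc_0', 'doc_2']
```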
Note: Even when the dataset is local, Hugging Face datasets still uses a cache, so after making changes you might need to clear it to see that they took effect. You can do this by deleting the cached files, which you can locate using dataset.cache_files.
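One way to clear the cache is to remove the files listed in dataset.cache_files yourself; `datasets` also provides Dataset.cleanup_cache_files(), which does the same thing for a loaded dataset. A stdlib-only sketch of the manual approach (the temporary file is a stand-in for a real cache entry):

```python
import os
import tempfile

# dataset.cache_files returns entries like {"filename": "/path/to/cache.arrow"};
# here a temporary file stands in for a real cache entry so the sketch is runnable.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
cache_files = [{"filename": tmp.name}]

# remove every cached file so the next load_dataset call re-reads the local data
for entry in cache_files:
    if os.path.exists(entry["filename"]):
        os.remove(entry["filename"])

print(all(not os.path.exists(e["filename"]) for e in cache_files))  # True
```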
Installing dependencies
This repo comes with a few dependencies you need to install to make it run. It uses a Makefile to run commands and uv for package management. Once you have uv installed, you can install the dependencies using:
make install
Running dataset tests
This dataset is special in that it comes with a test suite, e.g. testing that the IDs are unique and that the format is consistent. You can run the suite using:
make test
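The kind of uniqueness check the suite performs can be sketched as follows (an illustration only; the real tests live in the repo, and the "id" column name is an assumption about the format):

```python
# illustrative sketch of an ID-uniqueness check like the one the test suite runs
def ids_are_unique(rows):
    """Return True if every row has a distinct 'id'."""
    ids = [row["id"] for row in rows]
    return len(ids) == len(set(ids))

rows = [
    {"id": "doc_0", "text": "Hej verden"},
    {"id": "doc_1", "text": "Endnu et dokument"},
]
print(ids_are_unique(rows))  # True for this toy sample
```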
Submitting a PR
Creating a PR on Hugging Face is a bit different from creating one on GitHub.
- Go to the community tab on Hugging Face, press "New pull request", and choose "On your machine". Specify the title of your PR. Then you can simply:
git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
git checkout pr/{PR NUMBER}
# make your changes here
# push to hub
git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
Before you open the PR, make sure you have completed the following checklist.
Checklist
- I have run the test suite using make test and all tests pass
- If I have added/changed a dataset, I have:
  - updated the descriptive statistics using make update-descriptive-statistics
  - bumped the version using make bump-version
Examples of Previous PRs
For examples of previous PRs, see the closed pull requests in the community tab.
Frequently asked questions
Do you accept synthetic datasets?
Yes, we generally accept synthetic datasets, as synthetic data is likely a promising research direction for low- to mid-resource languages. However, you should be aware that a synthetic dataset will probably require a more detailed examination and description. For instance, we will examine the quality of the synthetic subset and whether the license of the model used to create it permits resharing the synthetic data under a permissible license.
Do you accept non-Danish data?
Generally, this repository is intended for Danish text, though we define that quite broadly. For instance, we do accept data containing code-switching and historical Danish text.