Datasets:
Using language as a basis for splitting datasets
Can you divide the dataset by language, similar to https://huggingface.co/datasets/facebook/voxpopuli ? In fact, we would prefer to download only the minority languages.
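For example, voxpopuli exposes one configuration per language, so a single language can be loaded on its own. A rough sketch (assuming the "de" config name listed on that dataset's page; recent versions of datasets may also require trust_remote_code=True for script-based datasets):
>>> from datasets import load_dataset
>>> vox_de = load_dataset("facebook/voxpopuli", "de", split="train")  # the config name selects only the German subset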
Thank you for your interest in Emilia. We have in fact already divided the dataset by language: click "Files and versions" and you will find a subfolder for each language, including EN, ZH, DE, FR, JP, and KO.
Thanks for your reply. I would like to be able to run load_dataset("amphion/Emilia-Dataset", languages=['de']) and download only the DE data instead of the full dataset, by specifying the language ID in a languages parameter.
Thanks for your suggestion. I think you can use the data_files argument, as described in the Hugging Face docs, to load the data for a specific language.
E.g.
>>> from datasets import load_dataset
>>> path = "DE/*.tar"  # glob pattern matching the German (DE) tar shards inside the repo
>>> dataset = load_dataset("amphion/Emilia-Dataset", data_files={"de": path}, split="de", streaming=True)
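Continuing from the snippet above, a quick way to sanity-check that only the DE shards are streamed is to pull a single example (a rough sketch; the exact field names depend on how the tar archives are packed, so treat the keys as illustrative):
>>> sample = next(iter(dataset))  # lazily fetches one record from the first DE shard
>>> print(sorted(sample.keys()))  # lists the fields stored per record (e.g. audio bytes and JSON metadata)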
We are planning to add this to our README.md. Please let us know if this works :)
Thank you very much, and best wishes to you.