How to download in parallel?

#5
by hh2017 - opened

Hello,

In your previous update, you mentioned "chore: Parquet files are now sharded (size < 200 MB), allowing parallel downloads and processing." This is a great improvement, and I'm interested in leveraging this for faster downloads.

Currently, I use the load_dataset command to download languages, but the process seems to be serial. Could you guide me on how to download these files in parallel? Any advice or best practices would be much appreciated.

Thank you for your time and effort on this dataset!

Hello,

Thanks for reaching out and for the kind words!

If you have the latest version of Datasets, you can download in parallel by calling load_dataset with the additional argument num_proc. This is described in the docs here: https://huggingface.co/docs/datasets/loading#multiprocessing.
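For example, a minimal sketch using the dataset and config names from this repo (adjust the config to the language you need):

from datasets import load_dataset

# num_proc spreads the download and preparation work across worker processes
data = load_dataset("graelo/wikipedia", "20230601.hi", num_proc=8)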

Let me know if that solves your issue!

Best

I included the num_proc=8 parameter in my function call. However, I'm observing only a single progress bar for the download process, which is contrary to my expectation of seeing multiple progress bars, one for each process.

Are the progress bars for the other processes simply not displayed, or is it possible that multiple processes aren't being started as intended?

Also, I didn't notice a significant difference in download speed with and without the num_proc=8 parameter; the speeds were comparable in both cases.

Below is the snippet of my code and the output observed:

from datasets import load_dataset

languages = ['hi']
for lang in languages:
    data = load_dataset("graelo/wikipedia", f"20230601.{lang}", cache_dir="../data", num_proc=8)

And the output is as follows:

Downloading data files: 100%|███████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 914.59it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 130.52it/s]
Downloading data files:   0%| | 0/1 [00:00<?, ?it/s]
Downloading data:  22%|████████████ | 49.8M/230M [01:27<05:05, 591kB/s]

Any suggestions or further explanations would be greatly appreciated.

Thank you for your assistance!

Hello, sorry for the delay!

Like you, I was surprised at first, but this is actually normal: there is only one parquet file for Hindi (hi), see here: https://huggingface.co/datasets/graelo/wikipedia/tree/main/data/20230901/hi. The HF Datasets library can only parallelize downloads when there are multiple parquet files.
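If you want to check in advance how many shards a given config has (and therefore how many download workers can actually be used), here is a rough sketch using huggingface_hub; the path pattern comes from the repo layout linked above:

from huggingface_hub import HfApi

# Count the parquet shards for one language to see how many parallel
# download workers could actually be used.
api = HfApi()
files = api.list_repo_files("graelo/wikipedia", repo_type="dataset")
shards = [f for f in files if f.startswith("data/20230901/hi/") and f.endswith(".parquet")]
print(f"{len(shards)} parquet shard(s) for Hindi")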

For more context, the recommended size for parquet files with Huggingface is 500MB (source). I anticipated these issues somewhat and diverged from the recommendation with a size of 256MB, but quite a few languages still don't have enough Wikipedia data to produce multiple 256MB parquet files. In the next release (early December), I could split into 128MB parquet files, but in your case that would only be about twice as fast, not 8 times faster (as your num_proc=8 suggests you would like). So I'm not particularly inclined to lower the file size. Do you feel otherwise?
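As a back-of-the-envelope check (the ~230MB figure is taken from your progress bar output above, and the numbers are only illustrative):

dump_size_mb = 230  # approximate size of the Hindi dump, per the output above
for shard_size_mb in (256, 128):
    n_shards = -(-dump_size_mb // shard_size_mb)  # ceiling division
    print(f"{shard_size_mb} MB shards -> {n_shards} file(s), max parallel speedup ~{n_shards}x")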

Did that answer your question?

Best!

Hello,

Thank you for the clarification. I've noticed similar download patterns with larger datasets like '20230901.en'. But as I've already successfully acquired the necessary data, there are no concerns on my end; I was just curious about the process.

Looking forward to the December update, and thanks again for your support!

Best,
