Make the dataset work with a cloned dataset via Git LFS

#5 by caonv - opened

Hi all,

I recently started using HuggingFace and was excited to try out the HF datasets. However, when I used datasets.load_dataset to load the “ImageNet-1K” dataset with the default configuration, one of the archive files could not be downloaded due to a connection error. When I tried to load the dataset again, the corrupted files were skipped and the next step (generating the train split) ran anyway, resulting in errors.
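For reference, a minimal sketch of one way to retry after a failed download, using the download_mode argument that load_dataset accepts ("imagenet-1k" is the dataset's Hub repo id):

from datasets import load_dataset

# "force_redownload" discards any partially downloaded (corrupted)
# archives in the cache and fetches them again, instead of skipping
# ahead to generating the train split:
ds = load_dataset("imagenet-1k", download_mode="force_redownload")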

As a workaround, I cloned the dataset from HuggingFace with Git LFS and used the local path to load it. However, I ran into another issue: the loading script “imagenet-1k.py” always checked for the online data files instead of looking for the downloaded files in ./data. To resolve this, I modified _DATA_URL to force the loader to look in ./data. This changed the checksum of the dataset and caused it to be cached in a different folder than the default path, so I had to specify the path to “imagenet-1k.py” every time I used the dataset.
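A sketch of that workaround; the clone URL is the repository's public one, the local path is illustrative, and the _DATA_URL edit is the one described above:

# Clone the repository first (shell steps shown as comments):
#   git lfs install
#   git clone https://huggingface.co/datasets/imagenet-1k
from datasets import load_dataset

# With _DATA_URL in imagenet-1k.py edited to point at ./data, the modified
# script's checksum no longer matches the cached default, so its path has
# to be passed explicitly on every load:
ds = load_dataset("path/to/imagenet-1k/imagenet-1k.py")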

Is there a way to use a cloned dataset the same way as one loaded by load_dataset with an internet connection? Any help would be appreciated. Thanks!

I have the same issue. I have a system (FPGA-based computer) without internet access.

You can download this repository locally and use load_dataset("path/to/local/imagenet-1k") (we fixed the _DATA_URL issue, so you don't have to change anything like the OP did).

To download the repository you can use Git LFS or simply the huggingface_hub Python client: https://huggingface.co/docs/huggingface_hub/guides/download#download-an-entire-repository
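A minimal sketch of that route, assuming gated access to the dataset has already been granted (via huggingface-cli login); snapshot_download is the huggingface_hub function the guide above describes, and the offline environment variables at the end are only needed on machines without internet access, like the FPGA system mentioned earlier:

import os

from huggingface_hub import snapshot_download

# Download the entire dataset repository into the local cache and
# get back the folder it was stored in:
local_path = snapshot_download(repo_id="imagenet-1k", repo_type="dataset")

# On an offline machine, copy that folder over and disable network
# lookups before importing datasets:
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from datasets import load_dataset

# Load entirely from the local copy:
ds = load_dataset(local_path)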
