Dataset loading issues

#1
by mbkv - opened

Dear all,
I tried to load this dataset using the code snippet provided.
The line dataset = fouh.load_from_hub("Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set", use_auth_token=True) times out. Also, the number of media files it tries to download seems strange: there are 65986 samples, but only 7665 media files? Please see the screenshot below.
Am I doing something wrong? Also, where are the media files getting downloaded to?
HFloading..JPG

The data was downloaded to C:\Users\MyUserName\fiftyone\huggingface\hub\Voxel51 on my Windows machine.
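In case it helps, this is roughly how I checked where FiftyOne puts downloads; the paths below are just what I saw on my machine, so treat them as an example:

```python
import os

import fiftyone as fo

# FiftyOne stores downloaded datasets under its configured dataset directory
# (defaults to ~/fiftyone); on my machine the Hub loader put the media
# under a huggingface/hub subfolder of that directory
print(fo.config.default_dataset_dir)
print(os.path.join(fo.config.default_dataset_dir, "huggingface", "hub"))
```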

I had some trouble getting the dataset downloaded fully, but eventually it completed successfully.
Setting overwrite=True helped me avoid a name collision any time I wanted to retry the download.

I don't know exactly what kind of timeout you are experiencing, so this might not be very helpful...
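In case it's useful, this is roughly what I ran on the retries (same dataset name as in your snippet):

```python
import fiftyone.utils.huggingface as fouh

# Retry the download; overwrite=True avoids the name collision with the
# partially created dataset from the previous attempt
dataset = fouh.load_from_hub(
    "Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set",
    use_auth_token=True,
    overwrite=True,
)
```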

Hello, thanks for the overwrite tip. I will try this again then. How many media/image files do you have in the download path? Normally this should be equal to the number of samples.
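For reference, this is how I'm counting the image files on my side; the path is just where they landed for me, so adjust it to your setup:

```python
from pathlib import Path

# Count the downloaded media files; adjust the path to wherever the
# dataset landed on your machine
download_dir = Path.home() / "fiftyone" / "huggingface" / "hub" / "Voxel51"
num_images = sum(1 for _ in download_dir.rglob("*.jpg"))
print(f"{num_images} .jpg files under {download_dir}")
```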

Hm, I'm not too familiar with how voxel51/huggingface downloads work, so I'm not sure if this is going to be helpful, but I tried again on my secondary machine and got a timeout, perhaps the same one you are getting?

At first it looks good
image.png
but then I get
ReadTimeout: (ReadTimeoutError("HTTPSConnectionPool(host='cdn-lfs-us-1.huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: 02557b63-77a3-4b66-9989-fa0483669bd2)')
Then if I try to restart without overwrite=True, I get ValueError: Dataset name 'Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set' is not available
When I set overwrite=True, the download seems to continue from where it left off.

The total number of media files should be around 66k, so the 7665 you are getting is a bit strange. Or it could mean that ~58k images have already been downloaded and it is just continuing? It seems to me that the download actually continues where it left off when re-running after an unsuccessful download; I think that explains the lower number of media files. It seems the overwrite argument refers to the dataset name in the V51 database, not the files on disk.

Were you able to get the full dataset downloaded eventually?

No, I was not able to download the full dataset. Worse, I had deleted all the downloaded image files in the default location ~/fiftyone/huggingface/hub during the previous run.
I re-launched fresh with dataset = fouh.load_from_hub("Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set", use_auth_token=True, overwrite=True).
It failed at 91%! I have attached the full stack trace below in case the fiftyone team would like to look at it.

errfinal.JPG

carbon.png

mbkv changed discussion title from Potential Loading inconsistency to Dataset loading issues

Yeah, I'm getting the same now:
LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

Running the command again and again until it completes seems to do the trick. Perhaps increasing the internal timeout from 10 seconds would also help.
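For what it's worth, this is the sort of retry loop I ended up with; the 60-second timeout and 10 retries are arbitrary values, and whether recent huggingface_hub versions honour the HF_HUB_DOWNLOAD_TIMEOUT variable is my assumption rather than something confirmed in this thread:

```python
import os

# Bump the Hub read timeout before huggingface_hub gets imported
# (the library appears to read this env var at import time)
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"

import fiftyone.utils.huggingface as fouh

dataset = None
for attempt in range(10):  # arbitrary retry budget
    try:
        dataset = fouh.load_from_hub(
            "Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set",
            use_auth_token=True,
            overwrite=True,
        )
        break
    except Exception as e:
        print(f"Attempt {attempt + 1} failed: {e}")

if dataset is None:
    raise RuntimeError("Download did not complete after 10 attempts")
```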

Voxel51 org

I've been observing some slowness in downloads as well. Unfortunately, there seem to be some issues with larger files lately. I suggest passing in the following argument, which may help: batch_size=1000, or some lower number if you still face issues.
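For example (an illustrative call; lower batch_size further if you still hit timeouts):

```python
import fiftyone.utils.huggingface as fouh

# Smaller batches make each request lighter, which can help with the
# read timeouts on larger files
dataset = fouh.load_from_hub(
    "Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set",
    use_auth_token=True,
    overwrite=True,  # needed when retrying after a partial download
    batch_size=1000,
)
```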

Hello Harpreet - thanks for the heads-up. Also, despite repeated runs, the final number of media files (.jpg) on disk is 60608, while the number of samples in the loaded dataset is 65986. Perhaps this is because of duplicates, if the media filenames are hashes. If not, we have to look into this.
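This is roughly how I'm comparing the two counts; the dataset name matches the repo ID here, going by the "name is not available" error earlier in this thread:

```python
import fiftyone as fo

# Reload the dataset that load_from_hub registered locally
dataset = fo.load_dataset("Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set")

# If there are fewer distinct filepaths than samples, several samples
# point at the same media file on disk
print(dataset.count(), "samples")
print(len(dataset.distinct("filepath")), "distinct filepaths")
```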

Dataset downloading is very slow. Is there a workaround for downloading the dataset without running "load_from_hub"?
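For instance, would pre-fetching the raw repo files directly with huggingface_hub work? This is just a guess on my part; I don't know whether FiftyOne's loader would pick the files up from that location:

```python
from huggingface_hub import snapshot_download

# Download the raw dataset repo files directly; whether the FiftyOne
# loader can then reuse them is an open question
local_dir = snapshot_download(
    repo_id="Voxel51/Data-Centric-Visual-AI-Challenge-Train-Set",
    repo_type="dataset",
)
print(local_dir)
```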

Are there any updates regarding this issue? I am still facing problems with very slow download speed. Thanks!

Any issues with dataset download speeds are not in our control, as they're on HF's side. Apologies for the trouble this causes. I am considering reducing the size of the dataset to mitigate this issue, and I will let you know once I reach a decision.

harpreetsahota changed discussion status to closed
