Datasets:
Error When Downloading - Happened Twice So Far
Hi there, I'm just trying to download the dataset locally so I can work with it. It seems that using the Hugging Face datasets library is the only way to do this, so I'm running load_dataset on bigcode/the-stack-dedup.
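Concretely, this is roughly the call I'm running (from memory, so treat the exact arguments as an example rather than the literal command):
from datasets import load_dataset

# Roughly the call in question; cache_dir matches the ./inputs path that
# shows up in the logs below, but any writable directory should work.
ds = load_dataset("bigcode/the-stack-dedup", cache_dir="./inputs")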
I have gotten this error both times:
Traceback (most recent call last):
File "/home/paperspace/.local/lib/python3.9/site-packages/urllib3/response.py", line 444, in _error_catcher
yield
File "/home/paperspace/.local/lib/python3.9/site-packages/urllib3/response.py", line 567, in read
data = self._fp_read(amt) if not fp_closed else b""
File "/home/paperspace/.local/lib/python3.9/site-packages/urllib3/response.py", line 533, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
File "/usr/lib/python3.9/http/client.py", line 463, in read
n = self.readinto(b)
File "/usr/lib/python3.9/http/client.py", line 507, in readinto
n = self.fp.readinto(b)
File "/usr/lib/python3.9/socket.py", line 704, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.9/ssl.py", line 1242, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.9/ssl.py", line 1100, in read
return self._sslobj.read(len, buffer)
ConnectionResetError: [Errno 104] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/paperspace/.local/lib/python3.9/site-packages/requests/models.py", line 816, in generate
yield from self.raw.stream(chunk_size, decode_content=True)
File "/home/paperspace/.local/lib/python3.9/site-packages/urllib3/response.py", line 628, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/home/paperspace/.local/lib/python3.9/site-packages/urllib3/response.py", line 593, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "/usr/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/paperspace/.local/lib/python3.9/site-packages/urllib3/response.py", line 461, in _error_catcher
raise ProtocolError("Connection broken: %r" % e, e)
urllib3.exceptions.ProtocolError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 34, in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 309, in download
downloaded_path_or_paths = map_nested(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested
mapped = [
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 348, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 348, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 335, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 185, in cached_path
output_path = get_from_cache(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 577, in get_from_cache
http_get(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 380, in http_get
for chunk in response.iter_content(chunk_size=1024):
File "/home/paperspace/.local/lib/python3.9/site-packages/requests/models.py", line 818, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))
If I feed this into ChatGPT it responds with:
It looks like you're encountering a ConnectionResetError while downloading a dataset. This error occurs when the connection is interrupted or reset by the remote server (in this case, the server hosting the dataset).
This shouldn't be happening, as I'm on a stable connection on a dedicated server (an A100-80GB instance with 8 cores).
Any advice? I'm retrying the download a third time; I've increased the number of retries from 1 to 5 and set resume_download to True.
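For reference, this is roughly how I'm passing those options (I'm assuming max_retries and resume_download go through DownloadConfig in this version of datasets):
from datasets import load_dataset, DownloadConfig

# Sketch of the retry settings mentioned above; the exact option names
# may depend on the installed datasets version.
dl_config = DownloadConfig(resume_download=True, max_retries=5)
ds = load_dataset("bigcode/the-stack-dedup", download_config=dl_config)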
Feeling very frustrated. It continuously fails, and I've now wasted a day of server time trying to solve this issue. I even tried accessing the URLs directly, but that's taking much longer. Any help would be greatly appreciated.
Hi, if you’re using a large number of workers for data loading (num_proc), can you try reducing it?
You could also try downloading the languages separately:
from datasets import load_dataset

# languages: the per-language folder names under data/ in the repo
for lang in languages:
    ds = load_dataset("bigcode/the-stack-dedup", data_dir=f"data/{lang}")
    ...
And you have the option of cloning the repository with git lfs.
Hi there, I tried with 1 worker and with the recommended os.cpu_count() number of workers. Both failed in the same way.
I'll try the per-language approach and see how that goes.
What's the command to clone a repository with git lfs?
If you have git lfs installed you can do:
git clone https://huggingface.co/datasets/bigcode/the-stack-dedup/
Let us know if that works. You can also open an issue in datasets if this persists: https://github.com/huggingface/datasets/issues
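If git lfs gives you trouble, another option could be huggingface_hub's snapshot_download, which fetches the repo file by file (the local_dir argument is only available in newer huggingface_hub versions, so treat this as a sketch):
from huggingface_hub import snapshot_download

# Download the whole dataset repo to a local folder. repo_type="dataset"
# is required for dataset repos; local_dir support depends on your
# huggingface_hub version.
snapshot_download(
    repo_id="bigcode/the-stack-dedup",
    repo_type="dataset",
    local_dir="./the-stack-dedup",
)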
I tried by language (I had to lowercase the language names, since the data_dir paths are case-sensitive). However, I immediately encountered this issue:
Using custom data configuration bigcode--the-stack-dedup-2532087ebd2bd269
Downloading and preparing dataset parquet/bigcode--the-stack-dedup to ./inputs/bigcode___parquet/bigcode--the-stack-dedup-2532087ebd2bd269/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26.6M/26.6M [00:01<00:00, 20.1MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.93s/it]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1064.54it/s]
0%| | 0/358 [00:06<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/arrow_writer.py", line 523, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/arrow_writer.py", line 351, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/features/features.py", line 1568, in from_arrow_schema
return Features.from_dict(metadata["info"]["features"])
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/features/features.py", line 1597, in from_dict
obj = generate_from_dict(dic)
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/features/features.py", line 1280, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/features/features.py", line 1280, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/home/paperspace/.local/lib/python3.9/site-packages/datasets/features/features.py", line 1284, in generate_from_dict
return Sequence(feature=generate_from_dict(obj["feature"]), length=obj["length"])
KeyError: 'length'
I'm not certain, but it looks like it's expecting a 'length' key in the feature metadata that isn't there. I'll try with git for now. Thanks!
Seems to be working for now with git lfs... very slow (around 25 MB/s vs. the usual 250-500 MB/s). If this finishes I'll close out the ticket; otherwise I'll migrate this to datasets as an open issue.
So the whole thing downloaded. A couple of additional things I noticed / had questions about:
- It's considerably smaller than expected: about 900 GB on disk (after deleting the .git folder, as I no longer need it).
- There are two "languages" with parquet files in the repo that are not in the programming-languages.json file (see the paths and the sketch below):
'..../the-stack-dedup/data/public-key/data-00000-of-00001.parquet',
'..../the-stack-dedup/data/raw-token-data/data-00001-of-00002.parquet',
'..../the-stack-dedup/data/raw-token-data/data-00000-of-00002.parquet'
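For context, this is roughly how I spotted them (just a sketch; the repo path is wherever you cloned it):
import glob

# List the per-language directories that actually contain parquet shards;
# public-key and raw-token-data show up here even though they are not
# listed in programming-languages.json.
shard_dirs = sorted({p.split("/")[-2] for p in glob.glob("the-stack-dedup/data/*/*.parquet")})
print(shard_dirs)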
This seems like a more up to date version:
The public-key and raw-token-data languages are indeed not part of the dataset. Regarding the size, it's expected to be smaller than the actual text size, since the data is compressed in parquet files.
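If you want to sanity-check that, something like this compares compressed vs. uncompressed column sizes via the parquet metadata (the shard path below is just an example; point it at any real shard):
import pyarrow.parquet as pq

# Compare on-disk (compressed) vs. in-memory (uncompressed) column sizes
# for one example shard; sum over all shards for the full picture.
shard_path = "the-stack-dedup/data/python/data-00000-of-00005.parquet"  # adjust to a real shard
meta = pq.ParquetFile(shard_path).metadata
compressed = sum(
    meta.row_group(g).column(c).total_compressed_size
    for g in range(meta.num_row_groups)
    for c in range(meta.row_group(g).num_columns)
)
uncompressed = sum(
    meta.row_group(g).column(c).total_uncompressed_size
    for g in range(meta.num_row_groups)
    for c in range(meta.row_group(g).num_columns)
)
print(f"compressed: {compressed/1e9:.2f} GB, uncompressed: {uncompressed/1e9:.2f} GB")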
Makes sense.
Why are they found as parquet files in the repo if they aren’t part of the dataset?
It was a mistake, we're updating the dataset. Thanks for pointing it out!