html_url | title | comments | body | comment_length | text
---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2945 | Protect master branch | @lhoestq now the two protections are implemented.
Please note that for the second protection, I have finally chosen to protect the master branch only from **merge commits** (see updated comment above), so there is no need to disable/re-enable the protection on each release (direct commits, unlike merge commits, can still be pushed to the remote master branch and, if necessary, reverted without messing up the repo history). | After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
- Currently, simple merge commits are already disabled
- I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidentally pushing merge commits)~~
- ~~This protection would reject direct pushes to master branch~~
- ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
- GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).
- No need to disable/re-enable this protection on each release
The purpose of this issue is to open a discussion about this problem and to agree on a solution. | 64 | Protect master branch
After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
- Currently, simple merge commits are already disabled
- I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidentally pushing merge commits)~~
- ~~This protection would reject direct pushes to master branch~~
- ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
- GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).
- No need to disable/re-enable this protection on each release
The purpose of this issue is to open a discussion about this problem and to agree on a solution.
@lhoestq now the two protections are implemented.
Please note that for the second protection, I have finally chosen to protect the master branch only from **merge commits** (see updated comment above), so there is no need to disable/re-enable the protection on each release (direct commits, unlike merge commits, can still be pushed to the remote master branch and, if necessary, reverted without messing up the repo history). |
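A minimal sketch of how these two protections could be set programmatically through the GitHub REST API. This is illustrative only (the maintainers most likely used the repository settings UI); the token placeholder is hypothetical and the field values are assumptions based on the public API docs:

```python
# Sketch: apply the two protections discussed above via the GitHub REST API.
# Assumes a token with admin rights on the repository and the `requests` library.
import requests

OWNER, REPO, BRANCH = "huggingface", "datasets", "master"
HEADERS = {
    "Authorization": "token <YOUR_GITHUB_TOKEN>",  # hypothetical placeholder
    "Accept": "application/vnd.github.v3+json",
}

# 1) For Pull Requests, allow only squash merging (disable merge and rebase merging).
requests.patch(
    f"https://api.github.com/repos/{OWNER}/{REPO}",
    headers=HEADERS,
    json={"allow_squash_merge": True, "allow_merge_commit": False, "allow_rebase_merge": False},
)

# 2) Protect the master branch from merge commits by requiring a linear history;
#    direct (non-merge) commits can still be pushed.
requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers=HEADERS,
    json={
        "required_status_checks": None,
        "enforce_admins": False,
        "required_pull_request_reviews": None,
        "restrictions": None,
        "required_linear_history": True,
    },
)
```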
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and not use cached results from the old `filter`.
To prevent other users from running into this issue, we could make the caching differentiate between the two. What do you think? | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 50 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and not use cached results from the old `filter`.
To prevent other users from running into this issue, we could make the caching differentiate between the two. What do you think? |
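A small, self-contained illustration of what making the caching differentiate between the two could look like: salting the cache key with the library version and the transform's own code. This is only a sketch of the general idea, not the actual `datasets` fingerprinting implementation:

```python
# Sketch: derive a cache key that changes whenever the library version or the
# transform implementation changes, so results cached by an older `filter`
# are not silently reused by a newer, incompatible one.
import hashlib
import inspect


def cache_key(previous_fingerprint: str, transform, library_version: str) -> str:
    payload = "\n".join([
        previous_fingerprint,
        library_version,               # e.g. "1.12.1": new releases invalidate old entries
        inspect.getsource(transform),  # the transform's own code is part of the key
    ])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]


def keep_positive(batch):
    return [x > 0 for x in batch["value"]]


old_key = cache_key("abc123", keep_positive, "1.11.0")
new_key = cache_key("abc123", keep_positive, "1.12.1")
assert old_key != new_key  # a cache file written by 1.11.0 would not be picked up by 1.12.1
```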
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 28 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Well, it can cause issues for anyone who updates `datasets` and re-runs some code that uses filter, so I'm creating a PR | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 22 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
Well, it can cause issues for anyone who updates `datasets` and re-runs some code that uses filter, so I'm creating a PR |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | I just merged a fix, let me know if you're still having this kind of issue :)
We'll do a release soon to make this fix available | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 27 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
I just merged a fix, let me know if you're still having this kind of issue :)
We'll do a release soon to make this fix available |
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\
1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Downloading: 334MB [01:17, 4.32MB/s]
Dataset wiki_bio downloaded and prepared to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi
s data.
```
This kind of error message usually happens because:
- Your running Python script doesn't have write access to that directory
- You have another program (the File Explorer?) already browsing inside that directory | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, if it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| 109 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, if it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\
1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Downloading: 334MB [01:17, 4.32MB/s]
Dataset wiki_bio downloaded and prepared to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi
s data.
```
This kind of error message usually happens because:
- Your running Python script doesn't have write access to that directory
- You have another program (the File Explorer?) already browsing inside that directory |
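A quick way to check the first hypothesis (whether the running Python process can actually create and rename files in the datasets cache); this is a hedged diagnostic sketch, and the path below assumes the default cache location:

```python
# Sketch: verify that the current process can create and rename files inside
# the datasets cache directory (the two operations involved in the traceback).
import os
import tempfile

cache_dir = os.path.expanduser(os.path.join("~", ".cache", "huggingface", "datasets"))
os.makedirs(cache_dir, exist_ok=True)

fd, tmp_path = tempfile.mkstemp(dir=cache_dir)
os.close(fd)
renamed_path = tmp_path + ".renamed"
os.rename(tmp_path, renamed_path)  # the same call that raises WinError 5 in the issue
os.remove(renamed_path)
print("cache dir is writable and renames work:", cache_dir)
```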
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine, it consistently hits this error. It's not a write access issue, because the `.incomplete` directory is written just fine; it just won't rename, and then the directory is deleted in the `finally` step. Also, the zip file is written and extracted fine in the downloads directory.
That leaves another program that might be interfering, and there are plenty of those on my work machine ... (full antivirus, data loss prevention, etc.). So the question remains: why not extend the `try` block to catch the error and circle back to the rename after the unknown program has finished doing its 'stuff'? This is the approach I read about in the linked repo (see my comments above).
If it's not high priority, that's fine. However, if someone were to write a PR that solved this issue in our environment with an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue. | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, if it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| 194 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, if it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine, it consistently hits this error. It's not a write access issue, because the `.incomplete` directory is written just fine; it just won't rename, and then the directory is deleted in the `finally` step. Also, the zip file is written and extracted fine in the downloads directory.
That leaves another program that might be interfering, and there are plenty of those on my work machine ... (full antivirus, data loss prevention, etc.). So the question remains: why not extend the `try` block to catch the error and circle back to the rename after the unknown program has finished doing its 'stuff'? This is the approach I read about in the linked repo (see my comments above).
If it's not high priority, that's fine. However, if someone were to write a PR that solved this issue in our environment with an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue. |
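A rough sketch of the retry approach proposed above: catch the PermissionError and retry the rename after a short delay, so a transient lock held by an antivirus or indexing process does not abort the whole download. The retry count and delay are arbitrary assumptions, and this is not the actual `datasets` implementation:

```python
# Sketch: retry os.rename when another process (antivirus, indexer, DLP agent)
# briefly holds a handle on the directory. Retry count and delay are arbitrary.
import os
import time


def rename_with_retry(tmp_dir: str, dirname: str, retries: int = 5, delay: float = 1.0) -> None:
    for attempt in range(retries):
        try:
            os.rename(tmp_dir, dirname)
            return
        except PermissionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # give the interfering process time to release its handle


# Usage inside an `incomplete_dir`-style context manager (simplified):
#     yield tmp_dir
#     rename_with_retry(tmp_dir, dirname)
```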
https://github.com/huggingface/datasets/issues/2934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | I did some investigation, and it seems the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset` objects. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of dataset returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) a custom `__del__` that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon! | To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one reference left"
```
This causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards.
Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this.
cc @Rocketknight1 | 99 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one reference left"
```
This causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards.
Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this.
cc @Rocketknight1
I did some investigation, and it seems the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset` objects. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of dataset returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) a custom `__del__` that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon! |
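A small, library-free illustration of the reference-keeping problem and of the `weakref.proxy` direction described above. The stand-in classes below are placeholders, not real `pyarrow` or TensorFlow objects, and the actual fix in `datasets` may differ:

```python
# Sketch: why a strong reference kept on the returned tf.data.Dataset prevents
# the Arrow table (and its open file handle) from being released, and how a
# weak proxy avoids that. Stand-in classes are used instead of pyarrow/TF.
import gc
import weakref


class ArrowTableStandIn:   # stands in for the pyarrow table holding the open .arrow file
    pass


class TFDatasetStandIn:    # stands in for the object returned by tf.data.Dataset.from_tensor_slices
    pass


table = ArrowTableStandIn()
ref = weakref.ref(table)

tf_ds = TFDatasetStandIn()
tf_ds._hf_table = table    # strong reference: ties the table's lifetime to tf_ds
del table
gc.collect()
assert ref() is not None   # the table (and the open file) stays alive through tf_ds

tf_ds._hf_table = weakref.proxy(ref())  # proposed direction: keep only a weak proxy
gc.collect()
assert ref() is None       # nothing keeps the table alive anymore; the file can be closed
print("weak proxy lets the table be released")
```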
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Hi, the filename here is less than 255
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so I'm not sure why it's considered too long for your filesystem.
(also note that the lock files we use always have filenames shorter than 255 characters)
https://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135 | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| 39 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
Hi, the filename here is less than 255
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so not sure why it's considered too long for your filesystem.
(also note that the lock files we use always have filenames shorter than 255 characters)
https://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135 |
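When debugging reports like this, it can also help to check what maximum filename length the filesystem backing the cache directory actually allows — some setups (for example encrypted home directories) advertise a limit well below the usual 255 bytes. A small Unix-only sketch (the cache path below is the default location; adjust it if yours differs):
```python
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")

# maximum length of a single path component on the filesystem at this path
print(os.pathconf(cache_dir, "PC_NAME_MAX"))
print(os.statvfs(cache_dir).f_namemax)
```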
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Yes, you're right! I need to get you more info here. Either there's something going on with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info. | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| 67 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
Yes, you're right! I need to get you more info here. Either there's something going on with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info.
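To separate the two hypotheses above (the name itself vs. the whole path, and character count vs. encoded size), a quick check like the following can help. The kernel's NAME_MAX limit applies to the encoded bytes of a single path component, not to the full path; the lock filename is the one from the traceback, and the cache path is assumed to be the default one:
```python
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
name = (
    "_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_"
    "9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock"
)

print(len(name))                           # characters in the single component
print(len(name.encode("utf-8")))           # bytes, which is what NAME_MAX counts
print(len(os.path.join(cache_dir, name)))  # full path length (PATH_MAX applies here)
```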
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| 26 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... |
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`
```python
In [1]: import fsspec
In [2]: import json
In [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding="utf-8") as f:
...: for row in f:
...: data = json.loads(row)
...:
---------------------------------------------------------------------------
ClientPayloadError Traceback (most recent call last)
``` | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| 46 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`
```python
In [1]: import fsspec
In [2]: import json
In [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding="utf-8") as f:
...: for row in f:
...: data = json.loads(row)
...:
---------------------------------------------------------------------------
ClientPayloadError Traceback (most recent call last)
``` |
https://github.com/huggingface/datasets/issues/2917 | windows download abnormal | Hi! Is there some kind of proxy configured in your browser that gives you access to the internet? If that's the case, it could explain why it doesn't work in the code, since the proxy wouldn't be used | ## Describe the bug
The script clearly exists (it is accessible from the browser), but the script download fails on Windows. I then tried again on Linux and it downloaded normally. Why?
## Steps to reproduce the bug
```python3.7 + windows
![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)
# Sample code to reproduce the bug
```
## Expected results
It can be downloaded normally.
## Actual results
It can't.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.11.0
- Platform:windows
- Python version:3.7
- PyArrow version:
| 41 | windows download abnormal
## Describe the bug
The script clearly exists (it is accessible from the browser), but the script download fails on Windows. I then tried again on Linux and it downloaded normally. Why?
## Steps to reproduce the bug
```python3.7 + windows
![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)
# Sample code to reproduce the bug
```
## Expected results
It can be downloaded normally.
## Actual results
It can't.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.11.0
- Platform:windows
- Python version:3.7
- PyArrow version:
Hi! Is there some kind of proxy configured in your browser that gives you access to the internet? If that's the case, it could explain why it doesn't work in the code, since the proxy wouldn't be used
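If a proxy really is the difference between the browser and the script, it can be made visible to the library as well: `datasets` downloads files through `requests`, which honors the standard proxy environment variables. A hedged sketch — the proxy address is a placeholder and the dataset name is just an example:
```python
import os
from datasets import load_dataset

# placeholder address: use the proxy your browser is configured with
os.environ["HTTP_PROXY"] = "http://127.0.0.1:8080"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"

dataset = load_dataset("squad", split="train")  # any dataset script, shown here with a public one
```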
https://github.com/huggingface/datasets/issues/2913 | timit_asr dataset only includes one text phrase | Hi @margotwagner,
This bug was fixed in #1995. Upgrading the `datasets` library should work (ideally to at least v1.8.0)
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
| 16 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
Hi @margotwagner,
This bug was fixed in #1995. Upgrading the `datasets` library should work (ideally to at least v1.8.0)
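A concrete way to apply that advice is to upgrade the library and then force the dataset to be re-prepared, so that the copy cached by the old, buggy script is not reused. This is a suggestion sketch using the same `GenerateMode` enum that appears elsewhere in these issues:
```python
# first upgrade, e.g. in a notebook cell, then restart the runtime:
# !pip install --upgrade "datasets>=1.8.0"

from datasets import load_dataset, GenerateMode

# re-download and re-prepare instead of reusing the stale cached copy
timit = load_dataset("timit_asr", download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(timit["train"][0]["text"])
```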
https://github.com/huggingface/datasets/issues/2913 | timit_asr dataset only includes one text phrase | Hi @margotwagner,
Yes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:
> Environment info
> - `datasets` version: 1.4.1 | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
| 34 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
Hi @margotwagner,
Yes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:
> Environment info
> - `datasets` version: 1.4.1 |
https://github.com/huggingface/datasets/issues/2904 | FORCE_REDOWNLOAD does not work | Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.
The second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.
If we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.
Currently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue. | ## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |
+------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
+------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD` | Fresh | Fresh |
+------------------------------------+-----------+---------+
However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset, GenerateMode
pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
```
## Expected results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numerals'],
num_rows: 10
})
## Actual results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numbers'],
num_rows: 5
})
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyArrow version: 3.0.0
| 99 | FORCE_REDOWNLOAD does not work
## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |
+------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
+------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD` | Fresh | Fresh |
+------------------------------------+-----------+---------+
However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset, GenerateMode
pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
```
## Expected results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numerals'],
num_rows: 10
})
## Actual results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numbers'],
num_rows: 5
})
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyArrow version: 3.0.0
Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.
The second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.
If we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.
Currently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue. |
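A sketch of the direction hinted at in that last sentence: key the extraction cache on the file's content (or, more cheaply, its size and mtime) rather than on its path alone. This is only an illustration of the idea, not the actual implementation in `datasets`:
```python
import hashlib
import os

def extraction_cache_key(path: str) -> str:
    """Key that changes whenever the local compressed file changes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    # cheaper alternative: f"{path}-{os.path.getsize(path)}-{os.path.getmtime(path)}"
    return f"{os.path.basename(path)}-{h.hexdigest()}"
```
With a key like this, replacing `/tmp/test.tsv.gz` with new content would map to a new entry in the extraction cache instead of reusing the stale uncompressed file.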
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | WikiMedia is now hosting the pixel values directly which should make it a lot easier!
The files can be found here:
https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/
https://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/ | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| 23 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
WikiMedia is now hosting the pixel values directly which should make it a lot easier!
The files can be found here:
https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/
https://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/ |
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | > @hassiahk is working on it #2810
Thank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| 28 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
> @hassiahk is working on it #2810
Thank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. |
https://github.com/huggingface/datasets/issues/2901 | Incompatibility with pytest | Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it! | ## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pytest test.py
```
## Expected results
It should give something like:
```
collected 1 item
test.py . [100%]
======= 1 passed in 3.15s =======
```
## Actual results
```
============================================================================================================================= test session starts ==============================================================================================================================
platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml
plugins: anyio-3.3.1
collected 1 item
tests/queries/test_rows.py . [100%]Traceback (most recent call last):
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module>
raise SystemExit(pytest.console_main())
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish
config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set
f = path.open("w")
TypeError: xpathopen() takes 1 positional argument but 2 were given
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 19 | Incompatibility with pytest
## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pytest test.py
```
## Expected results
It should give something like:
```
collected 1 item
test.py . [100%]
======= 1 passed in 3.15s =======
```
## Actual results
```
============================================================================================================================= test session starts ==============================================================================================================================
platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml
plugins: anyio-3.3.1
collected 1 item
tests/queries/test_rows.py . [100%]Traceback (most recent call last):
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module>
raise SystemExit(pytest.console_main())
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish
config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set
f = path.open("w")
TypeError: xpathopen() takes 1 positional argument but 2 were given
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it! |
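For context, the crash happens because pytest's cache provider calls `path.open("w")` while `xpathopen` only accepts the path itself. A minimal sketch of the kind of signature fix the comment implies — not the actual patched function in `datasets`, which also has to handle streaming paths:
```python
from pathlib import Path

def xpathopen(path: Path, *args, **kwargs):
    # forward the mode (e.g. "w") and any other arguments instead of
    # assuming the file is always opened for reading
    return open(str(path), *args, **kwargs)
```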
https://github.com/huggingface/datasets/issues/2892 | Error when encoding a dataset with None objects with a Sequence feature | This has been fixed by https://github.com/huggingface/datasets/pull/2900
We're doing a new release 1.12 today to make the fix available :) | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
Instead, it should run without error, as if the `features` were not passed
There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
Instead, it should run without error, as if the `features` were not passed
This has been fixed by https://github.com/huggingface/datasets/pull/2900
We're doing a new release 1.12 today to make the fix available :) |
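For readers on older versions, the shape of the fix is a null guard before the Sequence branch iterates the value (a simplified sketch, not the exact change in the linked PR):
```python
def encode_sequence_sketch(encode_value, obj):
    # Pass a null example through instead of trying to iterate it.
    if obj is None:
        return None
    if isinstance(obj, str):  # don't interpret a string as a list
        raise ValueError(f"Got a string but expected a list instead: '{obj}'")
    return [encode_value(o) for o in obj]


assert encode_sequence_sketch(int, [0]) == [0]
assert encode_sequence_sketch(int, None) is None
```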
https://github.com/huggingface/datasets/issues/2888 | v1.11.1 release date | @albertvillanova I think this issue is still valid and should not be closed till `>1.11.0` is published :) | Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release in the last 2 months.
When do you plan to publish the v1.11.1 release? | 18 | v1.11.1 release date
Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release in the last 2 months.
When do you plan to publish the v1.11.1 release?
@albertvillanova I think this issue is still valid and should not be closed till `>1.11.0` is published :)
https://github.com/huggingface/datasets/issues/2885 | Adding an Elastic Search index to a Dataset | Hi, is this bug deterministic in your poetry env? I mean, does it always stop at 90% or is it random?
Also, can you try using another version of Elasticsearch? Maybe there's an issue with the one in your poetry env. | ## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s]
No error is thrown, but the indexing breaks at ~90%.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
from elasticsearch import Elasticsearch
es = Elasticsearch()
squad = load_dataset('squad', split='validation')
index_name = "corpus"
es_config = {
"settings": {
"number_of_shards": 1,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {
"properties": {
"idx" : {"type" : "keyword"},
"title" : {"type" : "keyword"},
"text": {
"type": "text",
"analyzer": "standard",
"similarity": "BM25"
},
}
},
}
class IndexBuilder:
"""
Elastic search indexing of a corpus
"""
def __init__(
self,
*args,
#corpus : None,
dataset : squad,
index_name = str,
query = str,
config = dict,
**kwargs,
):
#instantiate HuggingFace dataset
self.dataset = dataset
#instantiate ElasticSearch config
self.config = config
self.es = Elasticsearch()
self.index_name = index_name
self.query = query
def elastic_index(self):
print(self.es.info)
self.es.indices.delete(index=self.index_name, ignore=[400, 404])
search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)
return search_index
def exact_match_method(self, index):
scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)
return scores, retrieved_examples
if __name__ == "__main__":
print(type(squad))
Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)
search_index = Index.elastic_index()
scores, examples = Index.exact_match_method(search_index)
print(scores, examples)
for name in squad.column_names:
print(type(squad[name]))
```
## Environment info
We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.
Poetry:
- Python version: 3.8
- PyArrow: 4.0.1
- Elasticsearch: 7.13.4
- datasets: 1.10.2
Local:
- Python version: 3.8
- PyArrow: 3.0.0
- Elasticsearch: 7.7.1
- datasets: 1.7.0
| 44 | Adding an Elastic Search index to a Dataset
## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s]
No error is thrown, but the indexing breaks at ~90%.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
from elasticsearch import Elasticsearch
es = Elasticsearch()
squad = load_dataset('squad', split='validation')
index_name = "corpus"
es_config = {
"settings": {
"number_of_shards": 1,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {
"properties": {
"idx" : {"type" : "keyword"},
"title" : {"type" : "keyword"},
"text": {
"type": "text",
"analyzer": "standard",
"similarity": "BM25"
},
}
},
}
class IndexBuilder:
"""
Elastic search indexing of a corpus
"""
def __init__(
self,
*args,
#corpus : None,
dataset : squad,
index_name = str,
query = str,
config = dict,
**kwargs,
):
#instantiate HuggingFace dataset
self.dataset = dataset
#instantiate ElasticSearch config
self.config = config
self.es = Elasticsearch()
self.index_name = index_name
self.query = query
def elastic_index(self):
print(self.es.info)
self.es.indices.delete(index=self.index_name, ignore=[400, 404])
search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)
return search_index
def exact_match_method(self, index):
scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)
return scores, retrieved_examples
if __name__ == "__main__":
print(type(squad))
Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)
search_index = Index.elastic_index()
scores, examples = Index.exact_match_method(search_index)
print(scores, examples)
for name in squad.column_names:
print(type(squad[name]))
```
## Environment info
We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.
Poetry:
- Python version: 3.8
- PyArrow: 4.0.1
- Elasticsearch: 7.13.4
- datasets: 1.10.2
Local:
- Python version: 3.8
- PyArrow: 3.0.0
- Elasticsearch: 7.7.1
- datasets: 1.7.0
Hi, is this bug deterministic in your poetry env? I mean, does it always stop at 90% or is it random?
Also, can you try using another version of Elasticsearch? Maybe there's an issue with the one in your poetry env.
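One way to narrow this down (a debugging sketch, assuming the same local Elasticsearch instance and `squad` split as in the report) is to index the column manually with `elasticsearch.helpers.streaming_bulk`, which reports per-document failures instead of stopping silently:
```python
from datasets import load_dataset
from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk

es = Elasticsearch()
squad = load_dataset("squad", split="validation")

# "corpus_debug" is just a scratch index name for this check
actions = ({"_index": "corpus_debug", "_source": {"context": c}} for c in squad["context"])

failed = 0
for ok, result in streaming_bulk(es, actions, raise_on_error=False):
    if not ok:
        failed += 1
        print(result)  # the rejection reason returned by Elasticsearch
print(f"{failed} documents failed to index")
```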
https://github.com/huggingface/datasets/issues/2882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | Hi @tmpr, thanks for reporting.
Two weeks ago (23rd Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).
Therefore, the checksum needs to be updated.
Normally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.
I'm fixing all this.
| ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is quasi only this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific. | 69 | `load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is quasi only this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.
Hi @tmpr, thanks for reporting.
Two weeks ago (23rd Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).
Therefore, the checksum needs to be updated.
Normally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.
I'm fixing all this.
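For reference, the general workaround mentioned above looks like this (it does not help in this particular case while the old Google Drive link is dead):
```python
from datasets import load_dataset

dataset = load_dataset("docred", ignore_verifications=True)
```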
|
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | Hi @rcgale, thanks for reporting.
Please note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878
If you update `datasets` version, that should work.
On the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) with the correct version of `datasets`? | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
-
| 46 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
-
Hi @rcgale, thanks for reporting.
Please note that this bug was fixed in `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878
If you update your `datasets` version, that should work.
On the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) with the correct version of `datasets`? |
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | I just proposed a change in the blog post.
I had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.
I still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem ("Error: the requested data set requires `datasets>=1.5.0`."). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data. | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
-
| 134 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
-
I just proposed a change in the blog post.
I had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.
I still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem ("Error: the requested data set requires `datasets>=1.5.0`."). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data. |
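A client-side guard along the lines suggested here could look as follows (purely illustrative; the minimum version is an assumption for this example, and this is not how `datasets` actually checks compatibility):
```python
from packaging import version

import datasets

MIN_DATASETS_VERSION = "1.5.0"  # hypothetical requirement for this dataset

if version.parse(datasets.__version__) < version.parse(MIN_DATASETS_VERSION):
    raise RuntimeError(
        f"Error: the requested dataset requires datasets>={MIN_DATASETS_VERSION}, "
        f"but {datasets.__version__} is installed."
    )
```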
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
I'm sorry but I'm not able to reproduce your bug.
Please note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:
- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`
- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists
| In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__`=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| 47 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__`=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
Hi @bwang482,
I'm sorry but I'm not able to reproduce your bug.
Please note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:
- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`
- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists
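A version-agnostic way to write such a check in user code (a sketch: on released versions at the time, `PYARROW_VERSION` is a plain string, while on master after that commit it is a parsed version object, so parsing it explicitly covers both cases):
```python
from packaging import version

import datasets

# Works whether PYARROW_VERSION is a string ("4.0.1") or an already-parsed version object.
pyarrow_major = version.parse(str(datasets.config.PYARROW_VERSION)).major
print(pyarrow_major)
```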
|
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Reopening this, although the `test_dataset_common.py` script works fine now.
Has this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?
https://github.com/huggingface/datasets/pull/2873 | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__`=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| 25 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested this on both datasets.__version_=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
Reopening this, although the `test_dataset_common.py` script works fine now.
Has this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?
https://github.com/huggingface/datasets/pull/2873 |
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
If you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.
For example, for ["ci/circleci: run_dataset_script_tests_pyarrow_1" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`
```
=========================== short test summary info ============================
FAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]
= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =
```
Therefore, your PR non-passing test has nothing to do with this issue. | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__`=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| 95 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested this on both datasets.__version_=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
Hi @bwang482,
If you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.
For example, for ["ci/circleci: run_dataset_script_tests_pyarrow_1" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`
```
=========================== short test summary info ============================
FAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]
= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =
```
Therefore, your PR non-passing test has nothing to do with this issue. |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | Hi, @Chenfei-Kang.
I'm sorry, but I'm not able to reproduce your bug:
```python
from datasets import load_dataset
ds = load_dataset("glue", 'cola')
ds
```
```
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 8551
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1043
})
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1063
})
})
```
Could you please give more details and environment info (platform, PyArrow version)? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 66 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
Hi, @Chenfei-Kang.
I'm sorry, but I'm not able to reproduce your bug:
```python
from datasets import load_dataset
ds = load_dataset("glue", 'cola')
ds
```
```
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 8551
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1043
})
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1063
})
})
```
Could you please give more details and environment info (platform, PyArrow version)? |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > Hi, @Chenfei-Kang.
>
> I'm sorry, but I'm not able to reproduce your bug:
>
> ```python
> from datasets import load_dataset
>
> ds = load_dataset("glue", 'cola')
> ds
> ```
>
> ```
> DatasetDict({
> train: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 8551
> })
> validation: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1043
> })
> test: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1063
> })
> })
> ```
>
> Could you please give more details and environment info (platform, PyArrow version)?
Sorry for the late reply.
platform: pycharm 2021 + anaconda with python 3.7
PyArrow version: 5.0.0
huggingface-hub: 0.0.16
datasets: 1.9.0
| ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 116 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
> Hi, @Chenfei-Kang.
>
> I'm sorry, but I'm not able to reproduce your bug:
>
> ```python
> from datasets import load_dataset
>
> ds = load_dataset("glue", 'cola')
> ds
> ```
>
> ```
> DatasetDict({
> train: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 8551
> })
> validation: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1043
> })
> test: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1063
> })
> })
> ```
>
> Could you please give more details and environment info (platform, PyArrow version)?
Sorry for the late reply.
platform: pycharm 2021 + anaconda with python 3.7
PyArrow version: 5.0.0
huggingface-hub: 0.0.16
datasets: 1.9.0
|
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | - For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 69 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error? |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?
1. For the platform, here is the output:
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 5.0.0
2. For the code and error:
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", "cola")
```
```python
Traceback (most recent call last):
....
....
File "my_file.py", line 2, in <module>
dataset = load_dataset("glue", "cola")
File "My environments\lib\site-packages\datasets\load.py", line 830, in load_dataset
**config_kwargs,
File "My environments\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
**config_kwargs,
TypeError: 'NoneType' object is not callable
```
Thank you! | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 154 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?
1. For the platform, here is the output:
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 5.0.0
2. For the code and error:
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", "cola")
```
```python
Traceback (most recent call last):
....
....
File "my_file.py", line 2, in <module>
dataset = load_dataset("glue", "cola")
File "My environments\lib\site-packages\datasets\load.py", line 830, in load_dataset
**config_kwargs,
File "My environments\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
**config_kwargs,
TypeError: 'NoneType' object is not callable
```
Thank you! |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem. | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 20 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem. |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | One naive question: do you have internet access from the machine where you execute the code? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 16 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
One naive question: do you have internet access from the machine where you execute the code? |
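A quick way to answer that from the affected machine (a sketch; the exact hosts `datasets` contacts may differ by version, these are just common endpoints to probe):
```python
import requests

for url in ("https://huggingface.co", "https://raw.githubusercontent.com"):
    try:
        print(url, requests.head(url, timeout=5).status_code)
    except Exception as err:
        print(url, "unreachable:", err)
```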
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.
But I can download other task datasets, such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much! | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 43 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.
But I can download other task datasets, such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, thanks for reporting.
Just note that currently not all canonical datasets support streaming mode: this is one case!
All datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 41 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Hi @severo, thanks for reporting.
Just note that currently not all canonical datasets support streaming mode: this is one case!
All datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet. |
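To illustrate the difference this refers to (illustrative only; the actual path patching lives inside `datasets`' streaming code): in streaming mode, extracted archives are addressed by URL-like strings, which `os.path.join` keeps as plain strings, whereas `pathlib` turns them into `Path` objects whose `open()` bypasses the patched functions (and, on POSIX, even collapses the `//` in the scheme):
```python
import os
from pathlib import Path

base = "https://host/extracted/COUNTER"  # hypothetical URL standing in for the extracted dir

print(os.path.join(base, "0032p.xml"))  # stays a plain string the patched open() can handle
print(Path(base) / "0032p.xml")         # a Path object; Path.open() skips the patch, '//' is collapsed
```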
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)? | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 19 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)? |
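Until the library raises such an exception itself, a consumer-side guard can at least surface the silent-empty symptom; this is a hypothetical helper, not part of the `datasets` API:
```python
import itertools

from datasets import load_dataset


def load_streaming_or_fail(path, split="train", **kwargs):
    """Hypothetical guard: fail loudly if a streamed split yields no examples at all."""
    streamed = load_dataset(path, split=split, streaming=True, **kwargs)
    iterator = iter(streamed)
    try:
        first = next(iterator)
    except StopIteration:
        # An empty stream here is indistinguishable from a broken loading script,
        # which is exactly the problem described above.
        raise NotImplementedError(f"Streaming '{path}' produced no examples; the script may not support streaming.")
    # Hand back the first example followed by the rest of the stream.
    return itertools.chain([first], iterator)
```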
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | We should definitely support datasets using `pathlib` in streaming mode...
For non-supported datasets in streaming mode, we already have a request to raise an error/warning: see #2654. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 27 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
We should definitely support datasets using `pathlib` in streaming mode...
For non-supported datasets in streaming mode, we already have a request to raise an error/warning: see #2654. |
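On the `pathlib` point: streaming mode works by substituting remote-aware versions of helpers such as the built-in `open` and `os.path.join` inside the loading script, and `pathlib.Path.open` is a method on the `Path` object that this substitution does not reach. A loading script written along the following lines stays streamable; this is a simplified sketch, not the actual `counter.py` code:
```python
import os


def _generate_examples(data_dir, filenames):
    # Keep paths as plain strings and call the built-in open(), which streaming
    # mode can patch, instead of pathlib.Path.open(), which it cannot.
    for filename in filenames:
        file_path = os.path.join(data_dir, filename)
        with open(file_path, encoding="utf-8") as f:
            yield filename, {"text": f.read()}
```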
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, please note that the "counter" dataset will be streamable (at least until it reaches the missing file, which already raises an error in normal mode) once these PRs are merged:
- #2874
- #2876
- #2880
I have tested it. 😉 | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 40 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Hi @severo, please note that the "counter" dataset will be streamable (at least until it reaches the missing file, which already raises an error in normal mode) once these PRs are merged:
- #2874
- #2876
- #2880
I have tested it. 😉 |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Now (on master), we get:
```
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
```
```
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets/src/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
The error is now the same with or without streaming. I'm closing the issue. Thanks @albertvillanova and @lhoestq!
| ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 191 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Now (on master), we get:
```
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
```
```
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets/src/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
The error is now the same with or without streaming. I'm closing the issue. Thanks @albertvillanova and @lhoestq!
|
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Note that we might want to open an issue to fix the "counter" dataset by itself, but I leave it up to you. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 23 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Note that we might want to open an issue to fix the "counter" dataset by itself, but I leave it up to you. |
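If someone does open that follow-up issue, a defensive change in the loading script could be to skip (and log) derived files missing from the archive, such as `COUNTER/0032p.xml`, instead of crashing; a rough sketch, not the actual `counter.py` code:
```python
import logging
import os

logger = logging.getLogger(__name__)


def _generate_examples(derived_file_paths):
    # Skip derived files that are absent from the extracted archive
    # instead of raising FileNotFoundError for the whole split.
    for key, derived_path in derived_file_paths:
        if not os.path.exists(derived_path):
            logger.warning("Skipping missing file: %s", derived_path)
            continue
        with open(derived_path, encoding="utf-8") as f:
            yield key, {"text": f.read()}
```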
https://github.com/huggingface/datasets/issues/2860 | Cannot download TOTTO dataset | Hi @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| 20 | Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
Hi @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. |
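Until the loading script is updated, a quick probe shows which archive location still resolves; the second URL is only an assumption based on the linked ToTTo commit, so check the repository's README for the authoritative link:
```python
import requests

candidates = [
    "https://storage.googleapis.com/totto/totto_data.zip",         # old location, reported as missing above
    "https://storage.googleapis.com/totto-public/totto_data.zip",  # assumed new location, verify against the ToTTo README
]
for url in candidates:
    status = requests.head(url, allow_redirects=True).status_code
    print(status, url)
```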
https://github.com/huggingface/datasets/issues/2945 | Protect master branch | @lhoestq now both protections are implemented.
Please note that for the second protection, I have finally chosen to protect the master branch only from **merge commits** (see the updated comment above), so there is no need to disable/re-enable the protection on each release (direct commits, unlike merge commits, can be pushed to the remote master branch and reverted if needed without messing up the repo history). | After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
- Currently, simple merge commits are already disabled
- I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~
- ~~This protection would reject direct pushes to master branch~~
- ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
- GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).
- No need to disable/re-enable this protection on each release
The purpose of this Issue is to open a discussion about this problem and to agree on a solution. | 64 | Protect master branch
After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
- Currently, simple merge commits are already disabled
- I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~
- ~~This protection would reject direct pushes to master branch~~
- ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
- GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).
- No need to disable/re-enable this protection on each release
The purpose of this Issue is to open a discussion about this problem and to agree on a solution.
@lhoestq now both protections are implemented.
Please note that for the second protection, I have finally chosen to protect the master branch only from **merge commits** (see the updated comment above), so there is no need to disable/re-enable the protection on each release (direct commits, unlike merge commits, can be pushed to the remote master branch and reverted if needed without messing up the repo history). |
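On top of the GitHub-side settings, a small local check can stop a merge commit before it is pushed; this is a sketch of a pre-push style guard, not something that exists in the repository:
```python
import subprocess
import sys

# List merge commits that sit on top of origin/master and would be pushed.
result = subprocess.run(
    ["git", "rev-list", "--merges", "origin/master..HEAD"],
    capture_output=True,
    text=True,
    check=True,
)
merge_commits = [line for line in result.stdout.splitlines() if line.strip()]
if merge_commits:
    print("Refusing to push, merge commits detected:")
    for sha in merge_commits:
        print(" ", sha)
    sys.exit(1)
```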
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and not use cached results from the old `filter`.
To prevent other users from hitting this issue, we could make the caching differentiate the two. What do you think? | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 50 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and not use cached results from the old `filter`.
To prevent other users from hitting this issue, we could make the caching differentiate the two. What do you think? |
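"Making the caching differentiate the two" essentially means folding an identifier of the transform's implementation into the cache fingerprint, so results written by the old `filter` are never matched by the new one. A toy illustration of the principle, not the actual `datasets` fingerprinting code:
```python
import hashlib


def transform_fingerprint(previous_fingerprint: str, transform: str, impl_version: str) -> str:
    """Toy fingerprint: bumping the transform's implementation version invalidates old cache files."""
    payload = f"{previous_fingerprint}:{transform}:{impl_version}".encode()
    return hashlib.sha256(payload).hexdigest()[:16]


# A cache file written by the 1.11.0 filter gets a different name than the 1.12.x one,
# so the new code simply recomputes instead of loading the stale indices-only table.
print(transform_fingerprint("abc123", "filter", "1.11.0"))
print(transform_fingerprint("abc123", "filter", "1.12.1"))
```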
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 28 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. |
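For completeness, here is the same cache-clearing workaround as a small Python sketch instead of the shell one-liner above; the path assumes the default cache location (no `HF_DATASETS_CACHE`/`HF_HOME` override) and the `librispeech_asr` directory name from the error message.
```python
import shutil
from pathlib import Path

# Default HF datasets cache location; adjust if HF_DATASETS_CACHE or HF_HOME is set.
stale_cache = Path.home() / ".cache" / "huggingface" / "datasets" / "librispeech_asr"

if stale_cache.exists():
    shutil.rmtree(stale_cache)  # the next load_dataset()/filter() call rebuilds the cache
```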
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Well, it can cause issues for anyone who updates `datasets` and re-runs some code that uses `filter`, so I'm creating a PR | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 22 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
Well, it can cause issues for anyone who updates `datasets` and re-runs some code that uses `filter`, so I'm creating a PR
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | I just merged a fix, let me know if you're still having this kind of issue :)
We'll do a release soon to make this fix available | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| 27 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
I just merged a fix, let me know if you're still having this kind of issue :)
We'll do a release soon to make this fix available |
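Once the fixed version is installed (the release mentioned above, or an install from the repository's master branch), re-running the original snippet works as a quick regression check; this is just the reproduction code from the report, repeated here as a sketch.
```python
from datasets import load_dataset

# With the fix in place, reusing a cache produced by datasets 1.11.0 should no
# longer raise the "Keys mismatch" ValueError on filter().
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
print(len(ds))
```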
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\
1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Downloading: 334MB [01:17, 4.32MB/s]
Dataset wiki_bio downloaded and prepared to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi
s data.
```
This kind of error message usually happens because:
- Your running Python script doesn't have write access to that directory
- You have another program (the File Explorer?) already browsing inside that directory | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, in case it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| 109 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, in case it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\
1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Downloading: 334MB [01:17, 4.32MB/s]
Dataset wiki_bio downloaded and prepared to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi
s data.
```
This kind of error message usually happens because:
- Your running Python script doesn't have write access to that directory
- You have another program (the File Explorer?) already browsing inside that directory |
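A quick way to rule the first hypothesis in or out is to ask the OS directly whether the process can write to the cache directory; this is only a diagnostic sketch, and the path assumes the default cache location.
```python
import os
from pathlib import Path

cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"

# W_OK: can create/rename entries; X_OK: can traverse the directory.
# False here would point to a permissions problem rather than another
# program holding the directory open.
print(os.access(cache_dir, os.W_OK | os.X_OK))
```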
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine, it consistently hits this error. It's not a write-access issue, because the `.incomplete` directory is written just fine. It just won't rename, and then the directory gets deleted in the `finally` step. Also, the zip file is written and extracted fine in the downloads directory.
That leaves another program that might be interfering, and there are plenty of those on my work machine ... (full antivirus, data loss prevention, etc.). So the question remains: why not extend the `try` block to catch the error and circle back to the rename after the unknown program has finished doing its 'stuff'? This is the approach described in the linked repo (see my comments above).
If it's not high priority, that's fine. However, if someone were to write a PR that solved this issue in our environment with an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue. | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, in case it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| 194 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, in case it helps debug this one.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine, it consistently hits this error. It's not a write-access issue, because the `.incomplete` directory is written just fine. It just won't rename, and then the directory gets deleted in the `finally` step. Also, the zip file is written and extracted fine in the downloads directory.
That leaves another program that might be interfering, and there are plenty of those on my work machine ... (full antivirus, data loss prevention, etc.). So the question remains: why not extend the `try` block to catch the error and circle back to the rename after the unknown program has finished doing its 'stuff'? This is the approach described in the linked repo (see my comments above).
If it's not high priority, that's fine. However, if someone were to write a PR that solved this issue in our environment with an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue.
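To make the `except`-clause idea above concrete, here is a hedged sketch of a retrying rename; the function name and retry parameters are made up for illustration, and this is not the library's actual code.
```python
import os
import time

def rename_with_retry(src, dst, attempts=5, delay=1.0):
    """Retry os.rename when another process (antivirus, indexer, File Explorer)
    transiently holds the directory open on Windows (PermissionError / WinError 5)."""
    for attempt in range(attempts):
        try:
            os.rename(src, dst)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```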
https://github.com/huggingface/datasets/issues/2934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon! | To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one reference left"
```
This causes issues because the table holds a reference to an open Arrow file that should be closed. So on Windows it's not possible to delete or move the Arrow file afterwards.
Moreover, the CI test of the `to_tf_dataset` method isn't able to clean up the temporary Arrow files because of this.
cc @Rocketknight1 | 99 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one reference left"
```
This causes issues because the table holds a reference to an open Arrow file that should be closed. So on Windows it's not possible to delete or move the Arrow file afterwards.
Moreover, the CI test of the `to_tf_dataset` method isn't able to clean up the temporary Arrow files because of this.
cc @Rocketknight1
I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon! |
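Below is a generic, self-contained sketch of the mechanism described in that comment, using dummy classes instead of the real Arrow-backed dataset and `tf.data` objects (those names are stand-ins, not the actual patch): closures only get a weak proxy, while the object returned to the user carries the strong reference, so deleting it releases the underlying file.
```python
import gc
import weakref

class TableHolder:
    """Stand-in for the datasets.Dataset object that keeps the Arrow file open."""

holder = TableHolder()
tracker = weakref.ref(holder)

# What the generator handed to tf.data would capture: a weak proxy, which does
# not keep `holder` alive on its own.
proxy = weakref.proxy(holder)

class FakeTFDataset:
    """Stand-in for the returned tf.data.Dataset, pinning the holder explicitly."""
    def __init__(self, source):
        self._hf_dataset = source

tf_ds = FakeTFDataset(holder)

del holder, tf_ds, proxy
gc.collect()
assert tracker() is None  # nothing is left holding the Arrow table open
```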
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Hi, the filename here is less than 255
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so I'm not sure why it's considered too long for your filesystem.
(also note that the lock files we use always have filenames shorter than 255 characters)
https://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135 | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| 39 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
Hi, the filename here is less than 255
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so I'm not sure why it's considered too long for your filesystem.
(also note that the lock files we use always have filenames shorter than 255 characters)
https://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135 |
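For readers hitting a stricter per-component limit anyway (e.g. encrypted home directories), the usual mitigation is to hash the over-long part of the lock name; the sketch below is illustrative only and is not the exact logic in `filelock.py`.
```python
import hashlib
import os

def bounded_lock_path(path, max_len=255):
    """Keep a lock filename under the filesystem limit by replacing its tail
    with a sha256 digest while preserving uniqueness."""
    dirname, filename = os.path.split(path)
    if len(filename) <= max_len:
        return path
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()
    keep = max_len - len(digest) - len(".lock") - 1
    return os.path.join(dirname, f"{filename[:keep]}-{digest}.lock")
```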
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Yes, you're right! I need to get you more info here. Either there's something going on with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info. | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| 67 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
Yes, you're right! I need to get you more info here. Either there's something going on with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info.
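One thing worth checking (a hedged suggestion, since the exact setup isn't known): ask the filesystem that backs the cache what its real per-filename limit is; encrypted home directories often allow far fewer than 255 characters, which would make even a 154-character lock name 'too long'.
```python
import os

cache_root = os.path.expanduser("~/.cache/huggingface/datasets")
# Maximum length of a single filename component on the filesystem backing the
# cache: 255 on plain ext4; eCryptfs-encrypted home dirs only allow roughly
# 143-character names (which pathconf may or may not reflect).
print(os.pathconf(cache_root, "PC_NAME_MAX"))
```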
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| 26 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... |
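In the meantime, a quick way to look at the server behaviour behind this error (a debugging sketch, using `requests` for simplicity): streaming reads the JSONL in byte ranges, and checking how raw.githubusercontent.com answers a ranged request shows whether partial content is being combined with `Content-Encoding: gzip`, which appears to be the combination aiohttp then fails to decode; the URL is the same one used in the reproduction below.
```python
import requests

url = (
    "https://raw.githubusercontent.com/allenai/scitldr/master/"
    "SciTLDR-Data/SciTLDR-FullText/test.jsonl"
)
# Ask for the first kilobyte only, while advertising gzip support.
r = requests.get(url, headers={"Range": "bytes=0-1023", "Accept-Encoding": "gzip"})
print(r.status_code, r.headers.get("Content-Encoding"), r.headers.get("Content-Range"))
```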
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`
```python
In [1]: import fsspec
In [2]: import json
In [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding="utf-8") as f:
...: for row in f:
...: data = json.loads(row)
...:
---------------------------------------------------------------------------
ClientPayloadError Traceback (most recent call last)
``` | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| 46 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`
```python
In [1]: import fsspec
In [2]: import json
In [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding="utf-8") as f:
...: for row in f:
...: data = json.loads(row)
...:
---------------------------------------------------------------------------
ClientPayloadError Traceback (most recent call last)
``` |
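As a side note on the traceback above: a quick check outside of `datasets`/`fsspec` can help narrow down whether the server mishandles range requests on gzip-encoded content (this is only a hypothesis suggested by the 206 / `Content-Encoding` code path in the traceback, not a confirmed root cause). A minimal sketch using `requests`:
```python
import requests

url = (
    "https://raw.githubusercontent.com/allenai/scitldr/master/"
    "SciTLDR-Data/SciTLDR-FullText/test.jsonl"
)

# Plain GET: the whole file is downloaded and decoded in one response.
full = requests.get(url)
print("full GET:", full.status_code, len(full.content), "bytes")

# Range request, mimicking fsspec's random-access reads on the same URL.
partial = requests.get(url, headers={"Range": "bytes=0-1023"})
print("range GET:", partial.status_code, partial.headers.get("Content-Encoding"))
```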
https://github.com/huggingface/datasets/issues/2917 | windows download abnormal | Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used | ## Describe the bug
The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. I tried again on Linux and it downloads normally there. Why?
## Steps to reproduce the bug
```python3.7 + windows
![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)
# Sample code to reproduce the bug
```
## Expected results
It can be downloaded normally.
## Actual results
It can't.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.11.0
- Platform:windows
- Python version:3.7
- PyArrow version:
| 41 | windows download abnormal
## Describe the bug
The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. I tried again on Linux and it downloads normally there. Why?
## Steps to reproduce the bug
```python3.7 + windows
![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)
# Sample code to reproduce the bug
```
## Expected results
It can be downloaded normally.
## Actual results
It can't.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.11.0
- Platform:windows
- Python version:3.7
- PyArrow version:
Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used |
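If a proxy is indeed in play, one way to expose it to the library is to set the standard proxy environment variables that `requests` honours before loading anything. This is only a sketch with a placeholder proxy address; the real address has to be taken from the browser/system settings, and the dataset name is just an example:
```python
import os

# Placeholder proxy URL - replace with the proxy configured in your browser/system.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

from datasets import load_dataset

# Any dataset script works here; "squad" is just an example.
dataset = load_dataset("squad")
```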
https://github.com/huggingface/datasets/issues/2913 | timit_asr dataset only includes one text phrase | Hi @margotwagner,
This bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally) | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
| 16 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
Hi @margotwagner,
This bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally) |
https://github.com/huggingface/datasets/issues/2913 | timit_asr dataset only includes one text phrase | Hi @margotwagner,
Yes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:
> Environment info
> - `datasets` version: 1.4.1 | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
| 34 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
Hi @margotwagner,
Yes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:
> Environment info
> - `datasets` version: 1.4.1 |
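After upgrading as suggested above, a quick sanity check mirroring the reproduction code in the report (a sketch; the exact count does not matter, it just should not be 1):
```python
from datasets import load_dataset

timit = load_dataset("timit_asr")

# With the fixed datasets version, the transcripts should no longer collapse
# to a single repeated phrase.
unique_transcripts = set(timit["train"]["text"])
print(len(unique_transcripts))
assert len(unique_transcripts) > 1
```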
https://github.com/huggingface/datasets/issues/2904 | FORCE_REDOWNLOAD does not work | Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.
The second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.
If we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.
Currently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue. | ## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |
+------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
+------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD` | Fresh | Fresh |
+------------------------------------+-----------+---------+
However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset, GenerateMode
pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
```
## Expected results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numerals'],
num_rows: 10
})
## Actual results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numbers'],
num_rows: 5
})
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyArrow version: 3.0.0
| 99 | FORCE_REDOWNLOAD does not work
## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |
+------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
+------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD` | Fresh | Fresh |
+------------------------------------+-----------+---------+
However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset, GenerateMode
pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
```
## Expected results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numerals'],
num_rows: 10
})
## Actual results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numbers'],
num_rows: 5
})
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyArrow version: 3.0.0
Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.
The second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.
If we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.
Currently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue. |
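A sketch of the kind of content-aware key that would avoid reusing a stale extraction (a hypothetical helper for illustration, not the library's actual implementation):
```python
import hashlib
import os


def extraction_cache_key(path: str) -> str:
    """Key the extraction cache on the file path *and* a fingerprint of its content."""
    stat = os.stat(path)
    fingerprint = f"{path}:{stat.st_size}:{stat.st_mtime_ns}"
    return hashlib.sha256(fingerprint.encode("utf-8")).hexdigest()


# Rewriting /tmp/test.tsv.gz with new content changes the key,
# so the previously extracted file would no longer be picked up.
```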
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | WikiMedia is now hosting the pixel values directly which should make it a lot easier!
The files can be found here:
https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/
https://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/ | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| 23 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
WikiMedia is now hosting the pixel values directly which should make it a lot easier!
The files can be found here:
https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/
https://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/ |
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | > @hassiahk is working on it #2810
Thank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| 28 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
> @hassiahk is working on it #2810
Thank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. |
https://github.com/huggingface/datasets/issues/2901 | Incompatibility with pytest | Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it! | ## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pytest test.py
```
## Expected results
It should give something like:
```
collected 1 item
test.py . [100%]
======= 1 passed in 3.15s =======
```
## Actual results
```
============================================================================================================================= test session starts ==============================================================================================================================
platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml
plugins: anyio-3.3.1
collected 1 item
tests/queries/test_rows.py . [100%]Traceback (most recent call last):
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module>
raise SystemExit(pytest.console_main())
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish
config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set
f = path.open("w")
TypeError: xpathopen() takes 1 positional argument but 2 were given
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 19 | Incompatibility with pytest
## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pytest test.py
```
## Expected results
It should give something like:
```
collected 1 item
test.py . [100%]
======= 1 passed in 3.15s =======
```
## Actual results
```
============================================================================================================================= test session starts ==============================================================================================================================
platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml
plugins: anyio-3.3.1
collected 1 item
tests/queries/test_rows.py . [100%]Traceback (most recent call last):
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module>
raise SystemExit(pytest.console_main())
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish
config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set
f = path.open("w")
TypeError: xpathopen() takes 1 positional argument but 2 were given
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it! |
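For reference, a sketch of the kind of signature change that resolves the clash with pytest's `path.open("w")` call (a hypothetical simplification for illustration, not the actual patch, which lives in the streaming utilities):
```python
from pathlib import Path


def xpathopen(path: Path, *args, **kwargs):
    # Forward the mode and any other arguments instead of accepting only the path,
    # so callers such as pytest's cache provider can do path.open("w").
    return open(str(path), *args, **kwargs)
```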
https://github.com/huggingface/datasets/issues/2892 | Error when encoding a dataset with None objects with a Sequence feature | This has been fixed by https://github.com/huggingface/datasets/pull/2900
We're doing a new release 1.12 today to make the fix available :) | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
Instead, it should run without error, as if the `features` were not passed | 19 | Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
Instead, it should run without error, as if the `features` were not passed
This has been fixed by https://github.com/huggingface/datasets/pull/2900
We're doing a new release 1.12 today to make the fix available :) |
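With the fix in place (datasets >= 1.12.0), the snippet from the report should run without raising; a quick check (a sketch — the exact representation of the None row may vary between versions):
```python
from datasets import Dataset, Features, Value, Sequence

data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})

dataset = Dataset.from_dict(data, features=features)  # no TypeError anymore
print(dataset[0])
print(dataset[1])  # the row that used to trigger the error
```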
https://github.com/huggingface/datasets/issues/2888 | v1.11.1 release date | @albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :) | Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release for two months.
When do you plan to publish the v1.11.1 release? | 18 | v1.11.1 release date
Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release for two months.
When do you plan to publish the v1.11.1 release?
@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :) |
https://github.com/huggingface/datasets/issues/2885 | Adding an Elastic Search index to a Dataset | Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?
Also, can you try using another version of Elasticsearch ? Maybe there's an issue with the one in your poetry env | ## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s]
No error is thrown, but the indexing breaks at ~90%.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
from elasticsearch import Elasticsearch
es = Elasticsearch()
squad = load_dataset('squad', split='validation')
index_name = "corpus"
es_config = {
"settings": {
"number_of_shards": 1,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {
"properties": {
"idx" : {"type" : "keyword"},
"title" : {"type" : "keyword"},
"text": {
"type": "text",
"analyzer": "standard",
"similarity": "BM25"
},
}
},
}
class IndexBuilder:
"""
Elastic search indexing of a corpus
"""
def __init__(
self,
*args,
#corpus : None,
dataset : squad,
index_name = str,
query = str,
config = dict,
**kwargs,
):
#instantiate HuggingFace dataset
self.dataset = dataset
#instantiate ElasticSearch config
self.config = config
self.es = Elasticsearch()
self.index_name = index_name
self.query = query
def elastic_index(self):
print(self.es.info)
self.es.indices.delete(index=self.index_name, ignore=[400, 404])
search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)
return search_index
def exact_match_method(self, index):
scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)
return scores, retrieved_examples
if __name__ == "__main__":
print(type(squad))
Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)
search_index = Index.elastic_index()
scores, examples = Index.exact_match_method(search_index)
print(scores, examples)
for name in squad.column_names:
print(type(squad[name]))
```
## Environment info
We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.
Poetry:
- Python version: 3.8
- PyArrow: 4.0.1
- Elasticsearch: 7.13.4
- datasets: 1.10.2
Local:
- Python version: 3.8
- PyArrow: 3.0.0
- Elasticsearch: 7.7.1
- datasets: 1.7.0
| 44 | Adding an Elastic Search index to a Dataset
## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s]
No error is thrown, but the indexing breaks at ~90%.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
from elasticsearch import Elasticsearch
es = Elasticsearch()
squad = load_dataset('squad', split='validation')
index_name = "corpus"
es_config = {
"settings": {
"number_of_shards": 1,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {
"properties": {
"idx" : {"type" : "keyword"},
"title" : {"type" : "keyword"},
"text": {
"type": "text",
"analyzer": "standard",
"similarity": "BM25"
},
}
},
}
class IndexBuilder:
"""
Elastic search indexing of a corpus
"""
def __init__(
self,
*args,
#corpus : None,
dataset : squad,
index_name = str,
query = str,
config = dict,
**kwargs,
):
#instantiate HuggingFace dataset
self.dataset = dataset
#instantiate ElasticSearch config
self.config = config
self.es = Elasticsearch()
self.index_name = index_name
self.query = query
def elastic_index(self):
print(self.es.info)
self.es.indices.delete(index=self.index_name, ignore=[400, 404])
search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)
return search_index
def exact_match_method(self, index):
scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)
return scores, retrieved_examples
if __name__ == "__main__":
print(type(squad))
Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)
search_index = Index.elastic_index()
scores, examples = Index.exact_match_method(search_index)
print(scores, examples)
for name in squad.column_names:
print(type(squad[name]))
```
## Environment info
We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.
Poetry:
- Python version: 3.8
- PyArrow: 4.0.1
- Elasticsearch: 7.13.4
- datasets: 1.10.2
Local:
- Python version: 3.8
- PyArrow: 3.0.0
- Elasticsearch: 7.7.1
- datasets: 1.7.0
Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?
Also, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env |
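One quick thing to check before digging further (a sketch; a client/cluster version mismatch is only a guess, not a confirmed cause): print both the Python client version and the cluster version from inside the poetry environment and compare them with the local setup.
```python
import elasticsearch
from elasticsearch import Elasticsearch

es = Elasticsearch()

print("python client:", elasticsearch.__version__)
print("cluster:", es.info()["version"]["number"])
```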
https://github.com/huggingface/datasets/issues/2882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | Hi @tmpr, thanks for reporting.
Two weeks ago (23rd Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).
Therefore, the checksum needs to be updated.
Normally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.
I'm fixing all this.
| ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is quasi only this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific. | 69 | `load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is quasi only this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.
Hi @tmpr, thanks for reporting.
Two weeks ago (23rd Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).
Therefore, the checksum needs to be updated.
Normally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.
I'm fixing all this.
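For reference, the temporary workaround mentioned above looks like this once the underlying link is valid again (a sketch; it skips checksum verification, so only use it knowingly):
```python
from datasets import load_dataset

# Skips the failing checksum check; this only helps once the Google Drive URL
# actually points to an existing file again.
data = load_dataset("docred", ignore_verifications=True)
```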
|
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | Hi @rcgale, thanks for reporting.
Please note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878
If you update `datasets` version, that should work.
On the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) with the correct version of `datasets`? | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
| 46 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
Hi @rcgale, thanks for reporting.
Please note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878
If you update `datasets` version, that should work.
On the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) with the correct version of `datasets`? |
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | I just proposed a change in the blog post.
I had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.
I still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem ("Error: the requested data set requires `datasets>=1.5.0`."). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data. | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
-
| 134 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
-
I just proposed a change in the blog post.
I had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.
I still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem ("Error: the requested data set requires `datasets>=1.5.0`."). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data. |
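Until something like that exists server-side, a client-side guard along these lines is cheap to add to a tutorial or script (a sketch, assuming the `packaging` package is installed):
```python
from packaging import version

import datasets

MIN_VERSION = "1.5.0"  # first release known to contain the timit_asr fix

if version.parse(datasets.__version__) < version.parse(MIN_VERSION):
    raise RuntimeError(
        f"This example requires datasets>={MIN_VERSION}, "
        f"but version {datasets.__version__} is installed."
    )
```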
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
I'm sorry but I'm not able to reproduce your bug.
Please note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:
- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`
- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists
 | In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| 47 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
Hi @bwang482,
I'm sorry but I'm not able to reproduce your bug.
Please note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:
- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`
- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists
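In the meantime, a defensive check along these lines should work on both older and newer `datasets` releases (just a sketch, not the library's own code):
```python
# Handle both the old string form and the newer version-object form of
# datasets.config.PYARROW_VERSION before comparing the major version.
from packaging import version

import datasets

pyarrow_version = datasets.config.PYARROW_VERSION
if isinstance(pyarrow_version, str):  # older releases exposed a plain string
    pyarrow_version = version.parse(pyarrow_version)

if pyarrow_version.major < 3:
    print("Parquet-based tests would be skipped for this PyArrow version")
```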
|
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Reopening this. Although the `test_dataset_common.py` script works fine now.
Has this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?
https://github.com/huggingface/datasets/pull/2873 | In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| 25 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
Reopening this. Although the `test_dataset_common.py` script works fine now.
Has this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?
https://github.com/huggingface/datasets/pull/2873 |
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
If you click on `Details` (to the right of your non-passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can find more information about the non-passing tests.
For example, for ["ci/circleci: run_dataset_script_tests_pyarrow_1" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`
```
=========================== short test summary info ============================
FAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]
= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =
```
Therefore, your PR's non-passing test has nothing to do with this issue. | In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| 95 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
Hi @bwang482,
If you click on `Details` (to the right of your non-passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can find more information about the non-passing tests.
For example, for ["ci/circleci: run_dataset_script_tests_pyarrow_1" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`
```
=========================== short test summary info ============================
FAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]
= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =
```
Therefore, your PR's non-passing test has nothing to do with this issue.
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | Hi, @Chenfei-Kang.
I'm sorry, but I'm not able to reproduce your bug:
```python
from datasets import load_dataset
ds = load_dataset("glue", 'cola')
ds
```
```
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 8551
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1043
})
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1063
})
})
```
Could you please give more details and environment info (platform, PyArrow version)? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 66 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
Hi, @Chenfei-Kang.
I'm sorry, but I'm not able to reproduce your bug:
```python
from datasets import load_dataset
ds = load_dataset("glue", 'cola')
ds
```
```
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 8551
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1043
})
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1063
})
})
```
Could you please give more details and environment info (platform, PyArrow version)? |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > Hi, @Chenfei-Kang.
>
> I'm sorry, but I'm not able to reproduce your bug:
>
> ```python
> from datasets import load_dataset
>
> ds = load_dataset("glue", 'cola')
> ds
> ```
>
> ```
> DatasetDict({
> train: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 8551
> })
> validation: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1043
> })
> test: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1063
> })
> })
> ```
>
> Could you please give more details and environment info (platform, PyArrow version)?
Sorry to reply to you so late.
platform: pycharm 2021 + anaconda with python 3.7
PyArrow version: 5.0.0
huggingface-hub: 0.0.16
datasets: 1.9.0
| ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 116 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
> Hi, @Chenfei-Kang.
>
> I'm sorry, but I'm not able to reproduce your bug:
>
> ```python
> from datasets import load_dataset
>
> ds = load_dataset("glue", 'cola')
> ds
> ```
>
> ```
> DatasetDict({
> train: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 8551
> })
> validation: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1043
> })
> test: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 1063
> })
> })
> ```
>
> Could you please give more details and environment info (platform, PyArrow version)?
Sorry to reply to you so late.
platform: pycharm 2021 + anaconda with python 3.7
PyArrow version: 5.0.0
huggingface-hub: 0.0.16
datasets: 1.9.0
|
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | - For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
- In relation to the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 69 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
- In relation to the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
> * In relation to the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?
1. For the platform, here is the output:
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 5.0.0
2. For the code and error:
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", "cola")
```
```python
Traceback (most recent call last):
....
....
File "my_file.py", line 2, in <module>
dataset = load_dataset("glue", "cola")
File "My environments\lib\site-packages\datasets\load.py", line 830, in load_dataset
**config_kwargs,
File "My environments\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
**config_kwargs,
TypeError: 'NoneType' object is not callable
```
Thank you! | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 154 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
> * In relation to the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?
1. For the platform, here is the output:
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 5.0.0
2. For the code and error:
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", "cola")
```
```python
Traceback (most recent call last):
....
....
File "my_file.py", line 2, in <module>
dataset = load_dataset("glue", "cola")
File "My environments\lib\site-packages\datasets\load.py", line 830, in load_dataset
**config_kwargs,
File "My environments\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
**config_kwargs,
TypeError: 'NoneType' object is not callable
```
Thank you! |
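For context, this error type just means that something which was expected to be callable turned out to be `None`; a standalone illustration (an assumption about the failure mode, unrelated to any specific `datasets` internals) is:
```python
# The same TypeError reproduced in isolation: calling a name that resolved
# to None (here a hypothetical builder class) raises exactly this message.
builder_cls = None
builder_cls()  # TypeError: 'NoneType' object is not callable
```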
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem. | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 20 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem. |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | One naive question: do you have internet access from the machine where you execute the code? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 16 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
One naive question: do you have internet access from the machine where you execute the code? |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.
But I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much! | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| 43 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.
But I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much! |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, thanks for reporting.
Just note that currently not all canonical datasets support streaming mode: this is one case!
All datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 41 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Hi @severo, thanks for reporting.
Just note that currently not all canonical datasets support streaming mode: this is one case!
All datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet. |
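To make the distinction concrete, here is a simplified illustration (a sketch with a hypothetical URL, not actual streaming code): string joins leave a remote URL intact, while `pathlib`'s `/` operator normalizes it.
```python
# Simplified illustration with a hypothetical remote location.
import os
from pathlib import PurePosixPath

base = "https://example.com/extracted/COUNTER"

print(os.path.join(base, "0032p.xml"))
# https://example.com/extracted/COUNTER/0032p.xml  (URL left intact on POSIX)

print(PurePosixPath(base) / "0032p.xml")
# https:/example.com/extracted/COUNTER/0032p.xml   (pathlib collapses the '//')
```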
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)? | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 19 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)? |
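Something along these lines is what I have in mind (hypothetical names and detection logic, not an existing `datasets` API):
```python
# Sketch of the proposed behaviour: fail loudly instead of yielding nothing.
class StreamingError(NotImplementedError):
    """The requested dataset script cannot be used in streaming mode."""

def ensure_streamable(script_source: str) -> None:
    # Very rough heuristic: scripts joining paths with pathlib were not
    # streamable at the time of writing.
    if "pathlib" in script_source:
        raise StreamingError(
            "This dataset does not support streaming mode yet; "
            "load it with streaming=False instead."
        )
```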
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | We should definitely support datasets using `pathlib` in streaming mode...
For datasets not yet supported in streaming mode, we already have a request to raise an error/warning: see #2654.
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 27 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
We should definitely support datasets using `pathlib` in streaming mode...
For datasets not yet supported in streaming mode, we already have a request to raise an error/warning: see #2654.
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, please note that the "counter" dataset will be streamable (at least until it reaches the missing file, an error that already occurs in normal mode) once these PRs are merged:
- #2874
- #2876
- #2880
I have tested it. 😉 | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 40 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Hi @severo, please note that the "counter" dataset will be streamable (at least until it reaches the missing file, an error that already occurs in normal mode) once these PRs are merged:
- #2874
- #2876
- #2880
I have tested it. 😉 |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Now (on master), we get:
```
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
```
```
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets/src/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
The error is now the same with or without streaming. I'm closing the issue; thanks @albertvillanova and @lhoestq!
| ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 191 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Now (on master), we get:
```
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
```
```
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets/src/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets/src/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
The error is now the same with or without streaming. I close the issue, thanks @albertvillanova and @lhoestq!
|
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Note that we might want to open an issue to fix the "counter" dataset by itself, but I leave it up to you. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| 23 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
Note that we might want to open an issue to fix the "counter" dataset by itself, but I leave it up to you. |
https://github.com/huggingface/datasets/issues/2860 | Cannot download TOTTO dataset | Hola @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| 20 | Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
Hola @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. |
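In the meantime, once the fix is merged to `master`, something like the sketch below should pick up the updated loading script without waiting for a release (this assumes the `script_version` argument of `load_dataset` accepts a branch name such as `"master"`):
```python
# Hypothetical workaround sketch: load the TOTTO loading script from the master
# branch once the URL fix lands there, instead of using the released script.
from datasets import load_dataset

dataset = load_dataset("totto", script_version="master")  # "master" here is an assumption
print(dataset)
```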
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix? | Another person and I have now been bitten by `datasets`' non-strictness in requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and all was good until we published the software and things broke, since there is no `openwebtext-10k`.
So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it.
The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.
Thank you! | 30 | always requiring the username in the dataset name when there is one
Another person and I have now been bitten by `datasets`' non-strictness in requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and all was good until we published the software and things broke, since there is no `openwebtext-10k`.
So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it.
The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.
Thank you!
From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix? |
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | I don't think the user cares how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Moreover, if someone were to create `openwebtext-10k` w/o the prefix, they will now get the wrong dataset, if they previously downloaded `stas/openwebtext-10k`.
And if there are 2 users with the same dataset name `foo/ds` and `bar/ds` - currently this won't work to get the correct dataset.
So really there are 3 unrelated issues hiding in the current behavior. | Another person and I have now been bitten by `datasets`' non-strictness in requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and all was good until we published the software and things broke, since there is no `openwebtext-10k`.
So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it.
The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.
Thank you! | 115 | always requiring the username in the dataset name when there is one
Another person and I have now been bitten by `datasets`' non-strictness in requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and all was good until we published the software and things broke, since there is no `openwebtext-10k`.
So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it.
The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.
Thank you!
I don't think the user cares how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Moreover, if someone were to create `openwebtext-10k` w/o the prefix, they will now get the wrong dataset, if they previously downloaded `stas/openwebtext-10k`.
And if there are 2 users with the same dataset name `foo/ds` and `bar/ds` - currently this won't work to get the correct dataset.
So really there are 3 unrelated issues hiding in the current behavior. |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.
Can you try to delete your cache ( by default at `~/.cache/huggingface/datasets`) and try again please ?
Also, on which platform are you (linux/macos/windows) ? | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggest the total size is 12.9GB, whereas the one documented mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
| 62 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggest the total size is 12.9GB, whereas the one documented mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.
Can you try to delete your cache ( by default at `~/.cache/huggingface/datasets`) and try again please ?
Also, on which platform are you (linux/macos/windows) ? |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode="force_redownload"` would bypass cache.
Sorry, the platform should be linux (Redhat version 8.1) | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggest the total size is 12.9GB, whereas the one documented mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
| 31 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggest the total size is 12.9GB, whereas the one documented mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode="force_redownload"` would bypass cache.
Sorry, the platform should be linux (Redhat version 8.1) |
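For reference, a minimal sketch for clearing only the prepared `openwebtext` entries instead of the whole cache (it assumes the default cache location mentioned above; the hashed raw files under `downloads/` are left untouched):
```python
# Cleanup sketch: remove only the prepared "openwebtext" cache folder,
# keeping every other dataset in ~/.cache/huggingface/datasets intact.
# Adjust cache_root if HF_DATASETS_CACHE points somewhere else.
import shutil
from pathlib import Path

cache_root = Path.home() / ".cache" / "huggingface" / "datasets"
openwebtext_dir = cache_root / "openwebtext"

if openwebtext_dir.exists():
    shutil.rmtree(openwebtext_dir)
    print(f"Removed {openwebtext_dir}")
else:
    print(f"Nothing to remove at {openwebtext_dir}")
```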
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. Thanks for taking a look ! | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggest the total size is 12.9GB, whereas the one documented mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
| 29 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggest the total size is 12.9GB, whereas the one documented mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. Thanks for taking a look ! |
https://github.com/huggingface/datasets/issues/2831 | ArrowInvalid when mapping dataset with missing values | Hi ! It fails because of the feature type inference.
Because the first 1000 examples all have null values in the "match" field, then it infers that the type for this field is `null` type before writing the data on disk. But as soon as it tries to map an example with a non-null "match" field, then it fails.
To fix that you can either:
- increase the writer_batch_size to >2000 (default is 1000) so that some non-null values will be in the first batch written to disk
```python
datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], writer_batch_size=2000)
```
- OR force the feature type with:
```python
from datasets import Features, Value
features = Features({
'conflict': Value('int64'),
'date': Value('string'),
'headline': Value('string'),
'match': Value('float64'),
'label': Value('float64')
})
datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], features=features)
``` | ## Describe the bug
I encountered an `ArrowInvalid` when mapping dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown).
[data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv)
[data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("csv", data_files=['data_small.csv'])
datasets = datasets.map(lambda e: {'labels': e['match']},
remove_columns=['id'])
```
## Expected results
No error
## Actual results
```
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Invalid null value
```
## Environment info
- `datasets` version: 1.5.0
- Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 134 | ArrowInvalid when mapping dataset with missing values
## Describe the bug
I encountered an `ArrowInvalid` when mapping dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown).
[data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv)
[data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("csv", data_files=['data_small.csv'])
datasets = datasets.map(lambda e: {'labels': e['match']},
remove_columns=['id'])
```
## Expected results
No error
## Actual results
```
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Invalid null value
```
## Environment info
- `datasets` version: 1.5.0
- Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Hi ! It fails because of the feature type inference.
Because the first 1000 examples all have null values in the "match" field, then it infers that the type for this field is `null` type before writing the data on disk. But as soon as it tries to map an example with a non-null "match" field, then it fails.
To fix that you can either:
- increase the writer_batch_size to >2000 (default is 1000) so that some non-null values will be in the first batch written to disk
```python
datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], writer_batch_size=2000)
```
- OR force the feature type with:
```python
from datasets import Features, Value
features = Features({
'conflict': Value('int64'),
'date': Value('string'),
'headline': Value('string'),
'match': Value('float64'),
'label': Value('float64')
})
datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], features=features)
``` |
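Either way, printing the schema that was inferred for the CSV before mapping makes this kind of issue easier to spot (a minimal sketch reusing `data_small.csv` from the report):
```python
# Minimal sketch: inspect the inferred schema of the loaded CSV so an
# unexpected type for the sparse "match" column is visible before .map().
from datasets import load_dataset

datasets = load_dataset("csv", data_files=["data_small.csv"])
print(datasets["train"].features)
```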
https://github.com/huggingface/datasets/issues/2826 | Add a Text Classification dataset: KanHope | Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech,`, not KanHope.
Moreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that the data don't have missing labels, and that your dataset script parses the labels correctly ? | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages*
- I tried following the steps as per the instructions. However, I could not resolve an error. Any help would be appreciated.
- The dataset card and the scripts for the dataset *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval*
```
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762...
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-114-4a9cdb519e4c> in <module>()
1 from datasets import load_dataset
2
----> 3 data = load_dataset('/content/bn')
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
850 ignore_verifications=ignore_verifications,
851 try_from_hf_gcs=try_from_hf_gcs,
--> 852 use_auth_token=use_auth_token,
853 )
854
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
614 if not downloaded_from_gcs:
615 self._download_and_prepare(
--> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
618 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
691 try:
692 # Prepare split will record examples associated to the split
--> 693 self._prepare_split(split_generator, **prepare_split_kwargs)
694 except OSError as e:
695 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
1107 disable=bool(logging.get_verbosity() == logging.NOTSET),
1108 ):
-> 1109 example = self.info.features.encode_example(record)
1110 writer.write(example, key)
1111 finally:
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example)
1015 """
1016 example = cast_to_python_objects(example)
-> 1017 return encode_nested_example(self, example)
1018
1019 def encode_batch(self, batch):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
863 if isinstance(schema, dict):
864 return {
--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
866 }
867 elif isinstance(schema, (list, tuple)):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0)
863 if isinstance(schema, dict):
864 return {
--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
866 }
867 elif isinstance(schema, (list, tuple)):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 892 return schema.encode_example(obj)
893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
894 return obj
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data)
665 # If a string is given, convert to associated integer
666 if isinstance(example_data, str):
--> 667 example_data = self.str2int(example_data)
668
669 # Allowing -1 to mean no label.
/usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values)
623 if value not in self._str2int:
624 value = str(value).strip()
--> 625 output.append(self._str2int[str(value)])
626 else:
627 # No names provided, try to integerize
KeyError: ' '
``` | 75 | Add a Text Classification dataset: KanHope
## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages*
- I tried following the steps as per the instructions. However, I could not resolve an error. Any help would be appreciated.
- The dataset card and the scripts for the dataset *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval*
```
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762...
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-114-4a9cdb519e4c> in <module>()
1 from datasets import load_dataset
2
----> 3 data = load_dataset('/content/bn')
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
850 ignore_verifications=ignore_verifications,
851 try_from_hf_gcs=try_from_hf_gcs,
--> 852 use_auth_token=use_auth_token,
853 )
854
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
614 if not downloaded_from_gcs:
615 self._download_and_prepare(
--> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
618 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
691 try:
692 # Prepare split will record examples associated to the split
--> 693 self._prepare_split(split_generator, **prepare_split_kwargs)
694 except OSError as e:
695 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
1107 disable=bool(logging.get_verbosity() == logging.NOTSET),
1108 ):
-> 1109 example = self.info.features.encode_example(record)
1110 writer.write(example, key)
1111 finally:
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example)
1015 """
1016 example = cast_to_python_objects(example)
-> 1017 return encode_nested_example(self, example)
1018
1019 def encode_batch(self, batch):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
863 if isinstance(schema, dict):
864 return {
--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
866 }
867 elif isinstance(schema, (list, tuple)):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0)
863 if isinstance(schema, dict):
864 return {
--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
866 }
867 elif isinstance(schema, (list, tuple)):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 892 return schema.encode_example(obj)
893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
894 return obj
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data)
665 # If a string is given, convert to associated integer
666 if isinstance(example_data, str):
--> 667 example_data = self.str2int(example_data)
668
669 # Allowing -1 to mean no label.
/usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values)
623 if value not in self._str2int:
624 value = str(value).strip()
--> 625 output.append(self._str2int[str(value)])
626 else:
627 # No names provided, try to integerize
KeyError: ' '
```
Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech,`, not KanHope.
Moreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that the data don't have missing labels, and that your dataset script parses the labels correctly ? |
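As a quick pre-check on the data side, a sketch like the one below lists rows whose label field is empty or whitespace-only, which is what trips `ClassLabel.str2int` in the traceback above (the file name and label column index are placeholders, not the actual KanHope layout):
```python
# Hypothetical pre-check sketch: report rows with a missing or blank label,
# since ClassLabel.str2int raises KeyError on labels like " ".
import csv

DATA_FILE = "train.csv"   # placeholder path to the data file
LABEL_COLUMN = 1          # placeholder index of the label column

with open(DATA_FILE, encoding="utf-8", newline="") as f:
    for line_number, row in enumerate(csv.reader(f), start=1):
        if len(row) <= LABEL_COLUMN or not row[LABEL_COLUMN].strip():
            print(f"line {line_number}: missing or blank label -> {row}")
```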
https://github.com/huggingface/datasets/issues/2825 | The datasets.map function does not load cached dataset after moving python script | This also happened to me on COLAB.
Details:
I ran the `run_mlm.py` in two different notebooks.
In the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.
In the second notebook, I copy the cache folder from drive and re-run the run_mlm.py script (this time I uncomment the trainer code which happens after the tokenization)
Note: I didn't change anything in the arguments, not even the preprocessing_num_workers
| ## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the same data processing in different tasks, yet the datasets are processed again; the only difference is that I run them from different files.
## Steps to reproduce the bug
Just run the following codes in different .py files.
```python
if __name__ == '__main__':
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
## Expected results
The map function should reload data in the second or any later runs.
## Actual results
The processing happens in each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: linux
- Python version: 3.7.6
- PyArrow version: 3.0.0
This is the first time I report a bug. If there is any problem or confusing description, please let me know 😄.
| 85 | The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the same data processing in different tasks, yet the datasets are processed again; the only difference is that I run them from different files.
## Steps to reproduce the bug
Just run the following codes in different .py files.
```python
if __name__ == '__main__':
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
## Expected results
The map function should reload data in the second or any later runs.
## Actual results
The processing happens in each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: linux
- Python version: 3.7.6
- PyArrow version: 3.0.0
This is the first time I report a bug. If there is any problem or confusing description, please let me know 😄.
This also happened to me on COLAB.
Details:
I ran the `run_mlm.py` in two different notebooks.
In the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.
In the second notebook, I copy the cache folder from drive and re-run the run_mlm.py script (this time I uncomment the trainer code which happens after the tokenization)
Note: I didn't change anything in the arguments, not even the preprocessing_num_workers
|
https://github.com/huggingface/datasets/issues/2825 | The datasets.map function does not load cached dataset after moving python script | #2854 fixed the issue :)
We'll do a new release of `datasets` soon to make the fix available.
In the meantime, feel free to try it out by installing `datasets` from source
If you have other issues or any question, feel free to re-open the issue :) | ## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the same data processing in different tasks, yet the datasets are processed again; the only difference is that I run them from different files.
## Steps to reproduce the bug
Just run the following codes in different .py files.
```python
if __name__ == '__main__':
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
## Expected results
The map function should reload data in the second or any later runs.
## Actual results
The processing happens in each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: linux
- Python version: 3.7.6
- PyArrow version: 3.0.0
This is the first time I report a bug. If there is any problem or confusing description, please let me know 😄.
| 47 | The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the same data processing in different tasks, yet the datasets are processed again; the only difference is that I run them from different files.
## Steps to reproduce the bug
Just run the following codes in different .py files.
```python
if __name__ == '__main__':
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
## Expected results
The map function should reload data in the second or any later runs.
## Actual results
The processing happens in each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: linux
- Python version: 3.7.6
- PyArrow version: 3.0.0
This is the first time I report a bug. If there is any problem or confusing description, please let me know 😄.
#2854 fixed the issue :)
We'll do a new release of `datasets` soon to make the fix available.
In the meantime, feel free to try it out by installing `datasets` from source
If you have other issues or any question, feel free to re-open the issue :) |
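Until the release, another option is to pin an explicit cache file for the mapped dataset so a later run can pick it up even if the automatic fingerprinting misses (a sketch under the assumption that `cache_file_name` is honored this way; the path is arbitrary):
```python
# Hypothetical workaround sketch: write the tokenized split to a fixed cache
# file so subsequent runs reload it instead of re-tokenizing.
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_train = raw_datasets["train"].map(
    tokenize_function,
    batched=True,
    cache_file_name="/tmp/wikitext2_train_tokenized.arrow",  # arbitrary writable path
    load_from_cache_file=True,
)
```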
https://github.com/huggingface/datasets/issues/2823 | HF_DATASETS_CACHE variable in Windows | Agh - I'm a muppet. No quote marks are needed.
set HF_DATASETS_CACHE = C:\Datasets
works as intended. | I can't seem to use a custom Cache directory in Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"
In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset | 17 | HF_DATASETS_CACHE variable in Windows
I can't seem to use a custom Cache directory in Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"
In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset
Agh - I'm a muppet. No quote marks are needed.
set HF_DATASETS_CACHE = C:\Datasets
works as intended. |
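For anyone double-checking the value from Python, a minimal sketch (it assumes `datasets.config.HF_DATASETS_CACHE` mirrors the environment variable, which has to be set before `datasets` is imported):
```python
# Minimal check sketch: set the cache directory via the environment variable
# before importing datasets, then confirm which path the library picked up.
import os

os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"  # no quote marks around the path

import datasets

print(datasets.config.HF_DATASETS_CACHE)  # assumed to mirror the variable
```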
https://github.com/huggingface/datasets/issues/2821 | Cannot load linnaeus dataset | Thanks for reporting ! #2852 fixed this error
We'll do a new release of `datasets` soon :) | ## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-4-7ef3a88f6276> in <module>()
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("linnaeus")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
603 raise FileNotFoundError("Couldn't find file at {}".format(url))
604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 605 raise ConnectionError("Couldn't reach {}".format(url))
606
607 # Try a second time
ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/
``` | 17 | Cannot load linnaeus dataset
## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-4-7ef3a88f6276> in <module>()
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("linnaeus")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
603 raise FileNotFoundError("Couldn't find file at {}".format(url))
604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 605 raise ConnectionError("Couldn't reach {}".format(url))
606
607 # Try a second time
ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/
```
Thanks for reporting ! #2852 fixed this error
We'll do a new release of `datasets` soon :) |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | ```
Using custom data configuration default
Downloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...
Downloading: 13%
403M/3.14G [44:39<2:27:09, 310kB/s]
---------------------------------------------------------------------------
timeout Traceback (most recent call last)
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)
437 try:
--> 438 yield
439
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)
518 cache_content = False
--> 519 data = self._fp.read(amt) if not fp_closed else b""
520 if (
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in read(self, amt)
458 b = bytearray(amt)
--> 459 n = self.readinto(b)
460 return memoryview(b)[:n].tobytes()
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in readinto(self, b)
502 # (for example, reading in 1k chunks)
--> 503 n = self.fp.readinto(b)
504 if not n and b:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/socket.py in readinto(self, b)
703 try:
--> 704 return self._sock.recv_into(b)
705 except timeout:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in recv_into(self, buffer, nbytes, flags)
1240 self.__class__)
-> 1241 return self.read(nbytes, buffer)
1242 else:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in read(self, len, buffer)
1098 if buffer is not None:
-> 1099 return self._sslobj.read(len, buffer)
1100 else:
timeout: The read operation timed out
During handling of the above exception, another exception occurred:
ReadTimeoutError Traceback (most recent call last)
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()
757 try:
--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):
759 yield chunk
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in stream(self, amt, decode_content)
575 while not is_fp_closed(self._fp):
--> 576 data = self.read(amt=amt, decode_content=decode_content)
577
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)
540 # Content-Length are caught.
--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
542
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/contextlib.py in __exit__(self, type, value, traceback)
134 try:
--> 135 self.gen.throw(type, value, traceback)
136 except StopIteration as exc:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)
442 # there is yet no clean way to get at it from this context.
--> 443 raise ReadTimeoutError(self._pool, None, "Read timed out.")
444
ReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
/var/folders/3f/md0t9sgj6rz8xy01fskttqdc0000gn/T/ipykernel_89016/1133441872.py in <module>
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
669 split_dict = SplitDict(dataset_name=self.name)
670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
672
673 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/reddit/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969/reddit.py in _split_generators(self, dl_manager)
73 def _split_generators(self, dl_manager):
74 """Returns SplitGenerators."""
---> 75 dl_path = dl_manager.download_and_extract(_URL)
76 return [
77 datasets.SplitGenerator(
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)
287 extracted_path(s): `str`, extracted paths of given URL(s).
288 """
--> 289 return self.extract(self.download(url_or_urls))
290
291 def get_recorded_sizes_checksums(self):
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download(self, url_or_urls)
195
196 start_time = datetime.now()
--> 197 downloaded_path_or_paths = map_nested(
198 download_func,
199 url_or_urls,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
194 # Singleton
195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196 return function(data_struct)
197
198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in _download(self, url_or_filename, download_config)
218 # append the relative path to the base_path
219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 220 return cached_path(url_or_filename, download_config=download_config)
221
222 def iter_archive(self, path):
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
286 if is_remote_url(url_or_filename):
287 # URL, so get it from the cache (downloading if necessary)
--> 288 output_path = get_from_cache(
289 url_or_filename,
290 cache_dir=cache_dir,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
643 ftp_get(url, temp_file)
644 else:
--> 645 http_get(
646 url,
647 temp_file,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)
451 disable=bool(logging.get_verbosity() == logging.NOTSET),
452 )
--> 453 for chunk in response.iter_content(chunk_size=1024):
454 if chunk: # filter out keep-alive new chunks
455 progress.update(len(chunk))
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()
763 raise ContentDecodingError(e)
764 except ReadTimeoutError as e:
--> 765 raise ConnectionError(e)
766 else:
767 # Standard file-like object.
ConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.
``` | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
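A workaround I would try in the meantime is to make the download resumable and retried via the `download_config` argument visible in the `load_dataset` signature (a sketch only: `DownloadConfig` and its `resume_download`/`max_retries` fields are assumed from the `get_from_cache` signature in the accompanying traceback, and the import path may differ between versions):
```python
# Sketch of a retry/resume workaround (assumptions: DownloadConfig exposes
# resume_download and max_retries, mirroring the get_from_cache signature in
# the traceback, and load_dataset forwards it via download_config).
try:
    from datasets import DownloadConfig  # recent releases
except ImportError:
    from datasets.utils.file_utils import DownloadConfig  # older layouts

from datasets import load_dataset

download_config = DownloadConfig(resume_download=True, max_retries=10)
dataset = load_dataset(
    "reddit",
    ignore_verifications=True,
    cache_dir="/Volumes/My Passport for Mac/og-chat-data",
    download_config=download_config,
)
```
With `resume_download=True` a failed attempt should be picked up from the partial file on the next run instead of starting from zero; whether the read timeout itself is long enough depends on the installed version (a maintainer comment further down mentions it was raised from 10 to 100 seconds).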
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
| 646 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
```
Using custom data configuration default
Downloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...
Downloading: 13%
403M/3.14G [44:39<2:27:09, 310kB/s]
---------------------------------------------------------------------------
timeout Traceback (most recent call last)
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)
437 try:
--> 438 yield
439
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)
518 cache_content = False
--> 519 data = self._fp.read(amt) if not fp_closed else b""
520 if (
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in read(self, amt)
458 b = bytearray(amt)
--> 459 n = self.readinto(b)
460 return memoryview(b)[:n].tobytes()
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in readinto(self, b)
502 # (for example, reading in 1k chunks)
--> 503 n = self.fp.readinto(b)
504 if not n and b:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/socket.py in readinto(self, b)
703 try:
--> 704 return self._sock.recv_into(b)
705 except timeout:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in recv_into(self, buffer, nbytes, flags)
1240 self.__class__)
-> 1241 return self.read(nbytes, buffer)
1242 else:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in read(self, len, buffer)
1098 if buffer is not None:
-> 1099 return self._sslobj.read(len, buffer)
1100 else:
timeout: The read operation timed out
During handling of the above exception, another exception occurred:
ReadTimeoutError Traceback (most recent call last)
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()
757 try:
--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):
759 yield chunk
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in stream(self, amt, decode_content)
575 while not is_fp_closed(self._fp):
--> 576 data = self.read(amt=amt, decode_content=decode_content)
577
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)
540 # Content-Length are caught.
--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
542
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/contextlib.py in __exit__(self, type, value, traceback)
134 try:
--> 135 self.gen.throw(type, value, traceback)
136 except StopIteration as exc:
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)
442 # there is yet no clean way to get at it from this context.
--> 443 raise ReadTimeoutError(self._pool, None, "Read timed out.")
444
ReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
/var/folders/3f/md0t9sgj6rz8xy01fskttqdc0000gn/T/ipykernel_89016/1133441872.py in <module>
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
669 split_dict = SplitDict(dataset_name=self.name)
670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
672
673 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/reddit/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969/reddit.py in _split_generators(self, dl_manager)
73 def _split_generators(self, dl_manager):
74 """Returns SplitGenerators."""
---> 75 dl_path = dl_manager.download_and_extract(_URL)
76 return [
77 datasets.SplitGenerator(
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)
287 extracted_path(s): `str`, extracted paths of given URL(s).
288 """
--> 289 return self.extract(self.download(url_or_urls))
290
291 def get_recorded_sizes_checksums(self):
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download(self, url_or_urls)
195
196 start_time = datetime.now()
--> 197 downloaded_path_or_paths = map_nested(
198 download_func,
199 url_or_urls,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
194 # Singleton
195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196 return function(data_struct)
197
198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in _download(self, url_or_filename, download_config)
218 # append the relative path to the base_path
219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 220 return cached_path(url_or_filename, download_config=download_config)
221
222 def iter_archive(self, path):
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
286 if is_remote_url(url_or_filename):
287 # URL, so get it from the cache (downloading if necessary)
--> 288 output_path = get_from_cache(
289 url_or_filename,
290 cache_dir=cache_dir,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
643 ftp_get(url, temp_file)
644 else:
--> 645 http_get(
646 url,
647 temp_file,
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)
451 disable=bool(logging.get_verbosity() == logging.NOTSET),
452 )
--> 453 for chunk in response.iter_content(chunk_size=1024):
454 if chunk: # filter out keep-alive new chunks
455 progress.update(len(chunk))
/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()
763 raise ContentDecodingError(e)
764 except ReadTimeoutError as e:
--> 765 raise ConnectionError(e)
766 else:
767 # Standard file-like object.
ConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.
``` |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | It also doesn't seem to be "smart caching" and I received an error about a file not being found... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
| 19 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
It also doesn't seem to be "smart caching" and I received an error about a file not being found... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | To be clear, the error I get when I try to "re-instantiate" the download after failure is:
```
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'
``` | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
| 32 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
To be clear, the error I get when I try to "re-instantiate" the download after failure is:
```
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'
``` |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.
This should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source
```
pip install git+https://github.com/huggingface/datasets.git
```
When re-running your code you said you get an `OSError`; could you try deleting the file at the path returned by the error (the one after `[Errno 20] Not a directory:`)? Ideally, when a download fails you should be able to re-run it without error; there might be an issue here.
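For example, a cleanup along these lines should clear the stale entry (a sketch: the hash is the one from the error above and the base path is the default cache location, so adjust both to match your setup):
```python
# Remove the stale download entry plus its .json / .lock metadata files so the
# next load_dataset call fetches the archive again. The hash and the default
# cache location below are taken from the error message; adjust them if you
# used a custom cache_dir.
import glob
import os

downloads = os.path.expanduser("~/.cache/huggingface/datasets/downloads")
stale = os.path.join(
    downloads, "1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c"
)
for path in glob.glob(stale + "*"):
    if os.path.isfile(path):
        print("removing", path)
        os.remove(path)
```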
Finally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again
| ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
| 111 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.
This should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source
```
pip install git+https://github.com/huggingface/datasets.git
```
When re-running your code you said you get an `OSError`; could you try deleting the file at the path returned by the error (the one after `[Errno 20] Not a directory:`)? Ideally, when a download fails you should be able to re-run it without error; there might be an issue here.
Finally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again
|
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | @lhoestq thanks for the update. The directory specified by the OSError ie.
```
1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json
```
was not actually in that directory so I can't delete it. | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
| 26 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
## Expected results
A clear and concise description of the expected results.
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Specify the actual results or traceback.
Shown below in error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
@lhoestq thanks for the update. The directory specified by the OSError, i.e.
```
1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json
```
was not actually present in that directory, so I can't delete it.
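One way to see what the downloads cache actually contains at that point (a standard-library sketch; the path below is the default cache location and would need to be swapped for the custom `cache_dir` passed to `load_dataset` above):
```python
# List the entries in the default downloads cache to locate the one the
# OSError points at; "[Errno 20] Not a directory" suggests the hash exists as
# a plain file rather than as a directory.
import os

downloads = os.path.expanduser("~/.cache/huggingface/datasets/downloads")
if os.path.isdir(downloads):
    for name in sorted(os.listdir(downloads)):
        kind = "dir " if os.path.isdir(os.path.join(downloads, name)) else "file"
        print(kind, name)
else:
    print("no downloads cache at", downloads)
```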