url: stringlengths, 61 to 61
repository_url: stringclasses, 1 value
labels_url: stringlengths, 75 to 75
comments_url: stringlengths, 70 to 70
events_url: stringlengths, 68 to 68
html_url: stringlengths, 49 to 51
id: int64, 1.92B to 2.68B
node_id: stringlengths, 18 to 19
number: int64, 6.27k to 7.3k
title: stringlengths, 2 to 159
user: dict
labels: listlengths, 0 to 2
state: stringclasses, 2 values
locked: bool, 1 class
assignee: dict
assignees: listlengths, 0 to 1
milestone: dict
comments: int64, 0 to 24
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: stringclasses, 4 values
active_lock_reason: null
draft: bool, 2 classes
pull_request: dict
body: stringlengths, 3 to 47.9k
closed_by: dict
reactions: dict
timeline_url: stringlengths, 70 to 70
performed_via_github_app: null
state_reason: stringclasses, 3 values
is_pull_request: bool, 2 classes
time_to_close: float64, 0 to 7.99k
https://api.github.com/repos/huggingface/datasets/issues/7296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7296/comments
https://api.github.com/repos/huggingface/datasets/issues/7296/events
https://github.com/huggingface/datasets/pull/7296
2,675,573,974
PR_kwDODunzps6ChJIJ
7,296
Remove upper version limit of fsspec[http]
{ "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4", "events_url": "https://api.github.com/users/cyyever/events{/privacy}", "followers_url": "https://api.github.com/users/cyyever/followers", "following_url": "https://api.github.com/users/cyyever/following{/other_user}", "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyyever", "id": 17618148, "login": "cyyever", "node_id": "MDQ6VXNlcjE3NjE4MTQ4", "organizations_url": "https://api.github.com/users/cyyever/orgs", "received_events_url": "https://api.github.com/users/cyyever/received_events", "repos_url": "https://api.github.com/users/cyyever/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions", "type": "User", "url": "https://api.github.com/users/cyyever", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-20T11:29:16
2024-11-20T11:29:16
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7296.diff", "html_url": "https://github.com/huggingface/datasets/pull/7296", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7296.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7296" }
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7296/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7296/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7295/comments
https://api.github.com/repos/huggingface/datasets/issues/7295/events
https://github.com/huggingface/datasets/issues/7295
2,672,003,384
I_kwDODunzps6fQ4k4
7,295
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
{ "avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4", "events_url": "https://api.github.com/users/casper-hansen/events{/privacy}", "followers_url": "https://api.github.com/users/casper-hansen/followers", "following_url": "https://api.github.com/users/casper-hansen/following{/other_user}", "gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/casper-hansen", "id": 27340033, "login": "casper-hansen", "node_id": "MDQ6VXNlcjI3MzQwMDMz", "organizations_url": "https://api.github.com/users/casper-hansen/orgs", "received_events_url": "https://api.github.com/users/casper-hansen/received_events", "repos_url": "https://api.github.com/users/casper-hansen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions", "type": "User", "url": "https://api.github.com/users/casper-hansen", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-19T12:23:36
2024-11-19T13:01:53
null
NONE
null
null
null
### Describe the bug

Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.

Analysis of what's happening:

1. `datasets` passes the `client_kwargs` through `fsspec`.
2. `fsspec` passes the `client_kwargs` through `s3fs`.
3. `s3fs` passes the `client_kwargs` to `aiobotocore`, which uses `aiohttp`:

```
s3creator = self.session.create_client(
    "s3", config=conf, **init_kwargs, **client_kwargs
)
```

4. The `session` tries to create an `aiohttp` session, but the `**kwargs` are not kept as unfolded `**kwargs`; they are passed in as individual variables (`requote_redirect_url` and `trust_env`).

Error:

```
Traceback (most recent call last):
  File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
    batch = next(iter(ds))
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
    for key, example in ex_iterable:
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
    for key, pa_table in self.generate_tables_fn(**self.kwargs):
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
    for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
    yield from self.generator(*self.args, **self.kwargs)
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
    elif xisdir(urlpath, download_config=download_config):
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
    return fs.isdir(inner_path)
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
    return self.info(path)["type"] == "directory"
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
    self._get_dirs()
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
    f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
    return sync(self.loop, func, *args, **kwargs)
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
    raise return_result
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
    result[0] = await coro
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
    out = await self._call_s3(
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
    await self.set_session()
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
    s3creator = self.session.create_client(
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
    return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```

### Steps to reproduce the bug

1. Install the necessary libraries (`datasets` must be at least 2.19.0):

```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```

2. Run this code:

```
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="s3://your_path/*.jsonl.gz",
    streaming=True,
    split="train",
)
batch = next(iter(ds))
print(batch)
```

3. You get the `unexpected keyword argument 'requote_redirect_url'` error.

### Expected behavior

`datasets` is able to load a batch from the dataset stored on S3 without triggering the `requote_redirect_url` error.

Fix: I could fix this by directly removing the `requote_redirect_url` and `trust_env` kwargs; then it loads properly.

<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">

### Environment info

- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7295/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7294/comments
https://api.github.com/repos/huggingface/datasets/issues/7294/events
https://github.com/huggingface/datasets/pull/7294
2,668,663,130
PR_kwDODunzps6CQKTy
7,294
Remove `aiohttp` from direct dependencies
{ "avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4", "events_url": "https://api.github.com/users/akx/events{/privacy}", "followers_url": "https://api.github.com/users/akx/followers", "following_url": "https://api.github.com/users/akx/following{/other_user}", "gists_url": "https://api.github.com/users/akx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akx", "id": 58669, "login": "akx", "node_id": "MDQ6VXNlcjU4NjY5", "organizations_url": "https://api.github.com/users/akx/orgs", "received_events_url": "https://api.github.com/users/akx/received_events", "repos_url": "https://api.github.com/users/akx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akx/subscriptions", "type": "User", "url": "https://api.github.com/users/akx", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-18T14:00:59
2024-11-18T14:00:59
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7294.diff", "html_url": "https://github.com/huggingface/datasets/pull/7294", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7294.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7294" }
The dependency is only used for catching an exception from other code. That can be done with an import guard.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7294/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7294/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7293/comments
https://api.github.com/repos/huggingface/datasets/issues/7293/events
https://github.com/huggingface/datasets/pull/7293
2,664,592,054
PR_kwDODunzps6CIjS-
7,293
Updated inconsistent output in documentation examples for `ClassLabel`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sergiopaniego", "id": 17179696, "login": "sergiopaniego", "node_id": "MDQ6VXNlcjE3MTc5Njk2", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "type": "User", "url": "https://api.github.com/users/sergiopaniego", "user_view_type": "public" }
[]
open
false
null
[]
null
3
2024-11-16T16:20:57
2024-11-18T18:34:37
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7293.diff", "html_url": "https://github.com/huggingface/datasets/pull/7293", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7293" }
fix #7129 @stevhliu
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7293/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7292/comments
https://api.github.com/repos/huggingface/datasets/issues/7292/events
https://github.com/huggingface/datasets/issues/7292
2,664,250,855
I_kwDODunzps6ezT3n
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xnuohz", "id": 17878022, "login": "xnuohz", "node_id": "MDQ6VXNlcjE3ODc4MDIy", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "repos_url": "https://api.github.com/users/xnuohz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "type": "User", "url": "https://api.github.com/users/xnuohz", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2024-11-16T11:54:31
2024-11-19T00:53:00
2024-11-19T00:52:59
NONE
null
null
null
### Describe the bug

Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT

### Steps to reproduce the bug

```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```

### Expected behavior

```
---------------------------------------------------------------------------
DataFilesNotFoundError                    Traceback (most recent call last)
Cell In[7], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset('OpenMol/PubChemSFT')

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2582 verification_mode = VerificationMode(
   2583     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2584 )
   2586 # Create a dataset builder
-> 2587 builder_instance = load_dataset_builder(
   2588     path=path,
   2589     name=name,
   2590     data_dir=data_dir,
   2591     data_files=data_files,
   2592     cache_dir=cache_dir,
   2593     features=features,
   2594     download_config=download_config,
   2595     download_mode=download_mode,
   2596     revision=revision,
   2597     token=token,
   2598     storage_options=storage_options,
   2599     trust_remote_code=trust_remote_code,
   2600     _require_default_config_name=name is None,
   2601     **config_kwargs,
   2602 )
   2604 # Return iterable dataset in case of streaming
   2605 if streaming:

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
   2257 download_config = download_config.copy() if download_config else DownloadConfig()
   2258 download_config.storage_options.update(storage_options)
-> 2259 dataset_module = dataset_module_factory(
   2260     path,
   2261     revision=revision,
   2262     download_config=download_config,
   2263     download_mode=download_mode,
   2264     data_dir=data_dir,
   2265     data_files=data_files,
   2266     cache_dir=cache_dir,
   2267     trust_remote_code=trust_remote_code,
   2268     _require_default_config_name=_require_default_config_name,
   2269     _require_custom_configs=bool(config_kwargs),
   2270 )
   2271 # Get dataset builder class from the processing script
   2272 builder_kwargs = dataset_module.builder_kwargs

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1902     raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
   1903 if isinstance(e1, (DataFilesNotFoundError, DatasetNotFoundError, EmptyDatasetError)):
-> 1904     raise e1 from None
   1905 if isinstance(e1, FileNotFoundError):
   1906     raise FileNotFoundError(
   1907         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1908         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1909     ) from None

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1876     return HubDatasetModuleFactoryWithScript(
   1877         path,
   1878         revision=revision,
   (...)
   1882         trust_remote_code=trust_remote_code,
   1883     ).get_module()
   1884 else:
-> 1885     return HubDatasetModuleFactoryWithoutScript(
   1886         path,
   1887         revision=revision,
   1888         data_dir=data_dir,
   1889         data_files=data_files,
   1890         download_config=download_config,
   1891         download_mode=download_mode,
   1892     ).get_module()
   1893 except Exception as e1:
   1894     # All the attempts failed, before raising the error we should check if the module is already cached
   1895     try:

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270, in HubDatasetModuleFactoryWithoutScript.get_module(self)
   1263 patterns = get_data_patterns(base_path, download_config=self.download_config)
   1264 data_files = DataFilesDict.from_patterns(
   1265     patterns,
   1266     base_path=base_path,
   1267     allowed_extensions=ALL_ALLOWED_EXTENSIONS,
   1268     download_config=self.download_config,
   1269 )
-> 1270 module_name, default_builder_kwargs = infer_module_for_data_files(
   1271     data_files=data_files,
   1272     path=self.name,
   1273     download_config=self.download_config,
   1274 )
   1275 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
   1276 # Collect metadata files if the module supports them

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
    595     raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
    596 if not module_name:
--> 597     raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
    598 return module_name, default_builder_kwargs

DataFilesNotFoundError: No (supported) data files found in OpenMol/PubChemSFT
```

### Environment info

```
- `datasets` version: 3.1.0
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- `huggingface_hub` version: 0.25.2
- PyArrow version: 18.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xnuohz", "id": 17878022, "login": "xnuohz", "node_id": "MDQ6VXNlcjE3ODc4MDIy", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "repos_url": "https://api.github.com/users/xnuohz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "type": "User", "url": "https://api.github.com/users/xnuohz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7292/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7292/timeline
null
completed
false
60.974444
https://api.github.com/repos/huggingface/datasets/issues/7291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7291/comments
https://api.github.com/repos/huggingface/datasets/issues/7291/events
https://github.com/huggingface/datasets/issues/7291
2,662,244,643
I_kwDODunzps6erqEj
7,291
Why doesn't return_tensors='pt' work?
{ "avatar_url": "https://avatars.githubusercontent.com/u/86752851?v=4", "events_url": "https://api.github.com/users/bw-wang19/events{/privacy}", "followers_url": "https://api.github.com/users/bw-wang19/followers", "following_url": "https://api.github.com/users/bw-wang19/following{/other_user}", "gists_url": "https://api.github.com/users/bw-wang19/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bw-wang19", "id": 86752851, "login": "bw-wang19", "node_id": "MDQ6VXNlcjg2NzUyODUx", "organizations_url": "https://api.github.com/users/bw-wang19/orgs", "received_events_url": "https://api.github.com/users/bw-wang19/received_events", "repos_url": "https://api.github.com/users/bw-wang19/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bw-wang19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bw-wang19/subscriptions", "type": "User", "url": "https://api.github.com/users/bw-wang19", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2024-11-15T15:01:23
2024-11-18T13:47:08
null
NONE
null
null
null
### Describe the bug I tried to add input_ids to a dataset with map(), using return_tensors='pt', but the result came back as a Python List instead of a tensor. ![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c) ### Steps to reproduce the bug ![image](https://github.com/user-attachments/assets/5d504d4c-22c7-4742-99a1-9cab78739b17) ### Expected behavior Sorry for this silly question, I'm a noob with this tool. But I think it should return a tensor value since I passed that option? When I tokenize a single sentence with tokenized_input = tokenizer(input, return_tensors='pt'), it does return a tensor. Why doesn't it work in map()? ### Environment info transformers>=4.41.2,<=4.45.0 datasets>=2.16.0,<=2.21.0 accelerate>=0.30.1,<=0.34.2 peft>=0.11.1,<=0.12.0 trl>=0.8.6,<=0.9.6 gradio>=4.0.0 pandas>=2.0.0 scipy einops sentencepiece tiktoken protobuf uvicorn pydantic fastapi sse-starlette matplotlib>=3.7.0 fire packaging pyyaml numpy<2.0.0
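A likely explanation (not confirmed in this thread): `map()` serializes the function's output into the dataset's columnar Arrow storage, so framework tensors are flattened into plain Python lists; `dataset.set_format("torch")` then converts columns back to tensors on access. A minimal stdlib-only sketch of that round-trip effect, using `json` as a stand-in for the storage layer (the token IDs below are hard-coded placeholders, not real tokenizer output):

```python
import json

# map()-style processing: whatever the mapped function returns is written
# to columnar storage, so any tensor is flattened into a plain list.
def add_input_ids(example):
    # stand-in for tokenizer(example["text"], return_tensors="pt")["input_ids"]
    return {**example, "input_ids": [101, 2023, 102]}

row = add_input_ids({"text": "hello"})
stored = json.loads(json.dumps(row))  # stand-in for the write/read round trip
print(type(stored["input_ids"]).__name__)  # prints "list" - the tensor type is gone
```

With the real library, calling `dataset.set_format("torch")` after `map()` is the usual way to get tensors back when rows are accessed.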
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7291/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7291/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7290/comments
https://api.github.com/repos/huggingface/datasets/issues/7290/events
https://github.com/huggingface/datasets/issues/7290
2,657,620,816
I_kwDODunzps6eaBNQ
7,290
`Dataset.save_to_disk` hangs when using num_proc > 1
{ "avatar_url": "https://avatars.githubusercontent.com/u/22243463?v=4", "events_url": "https://api.github.com/users/JohannesAck/events{/privacy}", "followers_url": "https://api.github.com/users/JohannesAck/followers", "following_url": "https://api.github.com/users/JohannesAck/following{/other_user}", "gists_url": "https://api.github.com/users/JohannesAck/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JohannesAck", "id": 22243463, "login": "JohannesAck", "node_id": "MDQ6VXNlcjIyMjQzNDYz", "organizations_url": "https://api.github.com/users/JohannesAck/orgs", "received_events_url": "https://api.github.com/users/JohannesAck/received_events", "repos_url": "https://api.github.com/users/JohannesAck/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JohannesAck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohannesAck/subscriptions", "type": "User", "url": "https://api.github.com/users/JohannesAck", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-14T05:25:13
2024-11-14T05:25:13
null
NONE
null
null
null
### Describe the bug Hi, I encountered a small issue when saving datasets that can make saving take up to multiple hours. Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`. The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation of how to enable it. ### Steps to reproduce the bug ``` import numpy as np from datasets import Dataset n_samples = int(4e6) n_tokens_sample = 100 data_dict = { 'tokens' : np.random.randint(0, 100, (n_samples, n_tokens_sample)), } dataset = Dataset.from_dict(data_dict) dataset.save_to_disk('test_dataset', num_proc=1) dataset.save_to_disk('test_dataset', num_proc=4) dataset.save_to_disk('test_dataset', num_proc=8) ``` This results in: ``` >>> dataset.save_to_disk('test_dataset', num_proc=1) Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s] >>> dataset.save_to_disk('test_dataset', num_proc=4) Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s] >>> dataset.save_to_disk('test_dataset', num_proc=8) Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s] ``` With larger datasets it can take hours, but I didn't benchmark that for this bug report. ### Expected behavior I would expect using `num_proc>1` to be faster, not slower, than `num_proc=1`. ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.26.2 - PyArrow version: 18.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7290/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7290/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7289/comments
https://api.github.com/repos/huggingface/datasets/issues/7289/events
https://github.com/huggingface/datasets/issues/7289
2,648,019,507
I_kwDODunzps6d1ZIz
7,289
Dataset viewer displays wrong statistics
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-11-11T03:29:27
2024-11-13T13:02:25
2024-11-13T13:02:25
NONE
null
null
null
### Describe the bug In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue only arises in the `train` split. The total number of values is also 94 in the `test` and `dev` splits, and the viewer reports the correct number for them. <img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c"> ### Steps to reproduce the bug ```python3 from datasets import load_dataset ds = load_dataset('speedcell4/opus-unigram2').unique('lang2') for key, lang2 in ds.items(): print(key, len(lang2)) ``` This script prints the following, showing that the `train` split has 94 values in the `lang2` column. ``` train 94 dev 94 test 94 zero 5 ``` ### Expected behavior 94 in the viewer. ### Environment info Collecting environment information... PyTorch version: 2.4.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: CentOS Linux release 8.2.2004 (Core) (x86_64) GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5) Clang version: Could not collect CMake version: version 3.11.4 Libc version: glibc-2.28 Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 525.85.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s) 
list: 0-63 Thread(s) per core: 1 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 4 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7542 32-Core Processor Stepping: 0 CPU MHz: 3389.114 BogoMIPS: 5789.40 Virtualization: AMD-V L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 NUMA node2 CPU(s): 32-47 NUMA node3 CPU(s): 48-63 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] torch==2.4.1+cu121 [pip3] torchaudio==2.4.1+cu121 [pip3] torchdevice==0.1.1 [pip3] torchglyph==0.3.2 [pip3] torchmetrics==1.5.0 [pip3] torchrua==0.5.1 [pip3] torchvision==0.19.1+cu121 [pip3] triton==3.0.0 [pip3] datasets==3.0.1 [conda] numpy 1.26.4 pypi_0 pypi [conda] torch 2.4.1+cu121 pypi_0 pypi [conda] torchaudio 2.4.1+cu121 pypi_0 pypi [conda] torchdevice 0.1.1 pypi_0 pypi [conda] torchglyph 0.3.2 pypi_0 pypi [conda] torchmetrics 1.5.0 pypi_0 pypi [conda] torchrua 0.5.1 pypi_0 pypi [conda] torchvision 0.19.1+cu121 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7289/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7289/timeline
null
completed
false
57.549444
https://api.github.com/repos/huggingface/datasets/issues/7288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7288/comments
https://api.github.com/repos/huggingface/datasets/issues/7288/events
https://github.com/huggingface/datasets/pull/7288
2,647,052,280
PR_kwDODunzps6BbIpz
7,288
Release v3.1.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2024-11-10T09:38:15
2024-11-10T09:38:48
2024-11-10T09:38:48
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7288.diff", "html_url": "https://github.com/huggingface/datasets/pull/7288", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7288.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7288" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7288/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7288/timeline
null
null
true
0.009167
https://api.github.com/repos/huggingface/datasets/issues/7287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7287/comments
https://api.github.com/repos/huggingface/datasets/issues/7287/events
https://github.com/huggingface/datasets/issues/7287
2,646,958,393
I_kwDODunzps6dxWE5
7,287
Support for identifier-based automated split construction
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
3
2024-11-10T07:45:19
2024-11-19T14:37:02
null
CONTRIBUTOR
null
null
null
### Feature request As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure)). It would be pretty useful to also allow splits to be based on identifiers of individual examples. This could be configured like {"split_name": {"column_name": [column values in split]}}. (This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user.) I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from the hub? ### Motivation The main motivation is that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits. This could all be configured via the README.md YAML configs ### Your contribution May be able to contribute if it seems like a good idea
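One hypothetical shape for such a config in the README.md YAML (the `splits_by_id` key and every value below are invented for illustration; no such key exists in the current `datasets` config format):

```yaml
configs:
- config_name: scaffold_split        # one of several split sets over the same files
  data_files: data/*.parquet
  splits_by_id:                      # hypothetical key proposed in this issue
    column_name: example_id          # assumed-unique index column
    train: [ex-0001, ex-0002, ex-0003]
    test: [ex-0004]
```

A second `config_name` entry could then define a different partition of the same `data_files` without duplicating any data on disk.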
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7287/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7287/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7286/comments
https://api.github.com/repos/huggingface/datasets/issues/7286/events
https://github.com/huggingface/datasets/issues/7286
2,645,350,151
I_kwDODunzps6drNcH
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
{ "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "events_url": "https://api.github.com/users/unography/events{/privacy}", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/unography", "id": 5240449, "login": "unography", "node_id": "MDQ6VXNlcjUyNDA0NDk=", "organizations_url": "https://api.github.com/users/unography/orgs", "received_events_url": "https://api.github.com/users/unography/received_events", "repos_url": "https://api.github.com/users/unography/repos", "site_admin": false, "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "type": "User", "url": "https://api.github.com/users/unography", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
0
2024-11-08T23:21:40
2024-11-09T16:14:37
2024-11-09T16:14:37
NONE
null
null
null
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param while loading dataset from disk, but can't find that in the documentation and code anywhere ### Motivation Make loading large datasets from disk faster ### Your contribution Happy to contribute if given pointers
{ "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "events_url": "https://api.github.com/users/unography/events{/privacy}", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/unography", "id": 5240449, "login": "unography", "node_id": "MDQ6VXNlcjUyNDA0NDk=", "organizations_url": "https://api.github.com/users/unography/orgs", "received_events_url": "https://api.github.com/users/unography/received_events", "repos_url": "https://api.github.com/users/unography/repos", "site_admin": false, "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "type": "User", "url": "https://api.github.com/users/unography", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7286/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7286/timeline
null
not_planned
false
16.8825
https://api.github.com/repos/huggingface/datasets/issues/7285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7285/comments
https://api.github.com/repos/huggingface/datasets/issues/7285/events
https://github.com/huggingface/datasets/pull/7285
2,644,488,598
PR_kwDODunzps6BV3Gu
7,285
Release v3.1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2024-11-08T16:17:58
2024-11-08T16:18:05
2024-11-08T16:18:05
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7285.diff", "html_url": "https://github.com/huggingface/datasets/pull/7285", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7285.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7285" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7285/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7285/timeline
null
null
true
0.001944
https://api.github.com/repos/huggingface/datasets/issues/7284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7284/comments
https://api.github.com/repos/huggingface/datasets/issues/7284/events
https://github.com/huggingface/datasets/pull/7284
2,644,302,386
PR_kwDODunzps6BVUSh
7,284
support for custom feature encoding/decoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-08T15:04:08
2024-11-08T15:06:40
null
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7284.diff", "html_url": "https://github.com/huggingface/datasets/pull/7284", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7284.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7284" }
Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in the discussion, in preference to #7221 (the only concern would be the effect on type checking with custom feature types that aren't covered by FeatureType?)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7284/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7284/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7283/comments
https://api.github.com/repos/huggingface/datasets/issues/7283/events
https://github.com/huggingface/datasets/pull/7283
2,642,537,708
PR_kwDODunzps6BQUgH
7,283
Allow for variation in metadata file names as per issue #7123
{ "avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4", "events_url": "https://api.github.com/users/egrace479/events{/privacy}", "followers_url": "https://api.github.com/users/egrace479/followers", "following_url": "https://api.github.com/users/egrace479/following{/other_user}", "gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/egrace479", "id": 38985481, "login": "egrace479", "node_id": "MDQ6VXNlcjM4OTg1NDgx", "organizations_url": "https://api.github.com/users/egrace479/orgs", "received_events_url": "https://api.github.com/users/egrace479/received_events", "repos_url": "https://api.github.com/users/egrace479/repos", "site_admin": false, "starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egrace479/subscriptions", "type": "User", "url": "https://api.github.com/users/egrace479", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-08T00:44:47
2024-11-08T00:44:47
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7283.diff", "html_url": "https://github.com/huggingface/datasets/pull/7283", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7283.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7283" }
Allow metadata files to have an identifying preface. Specifically, it will recognize files with `-metadata.csv` or `_metadata.csv` as metadata files for the purposes of the dataset viewer functionality. Resolves #7123.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7283/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7283/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7282/comments
https://api.github.com/repos/huggingface/datasets/issues/7282/events
https://github.com/huggingface/datasets/issues/7282
2,642,075,491
I_kwDODunzps6det9j
7,282
Faulty datasets.exceptions.ExpectedMoreSplitsError
{ "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-07T20:15:01
2024-11-07T20:15:42
null
CONTRIBUTOR
null
null
null
### Describe the bug Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`. Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`. Here is the Traceback: ``` Traceback (most recent call last): File "/home/user/app/app.py", line 12, in <module> ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True) File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits))) datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'} ``` Note: I am using the `data_files` argument only because I am trying to specify that I only want the 'validation' split, and the whole dataset will be downloaded even when the `split='validation'` argument is specified, unless you also specify `data_files`, as described here: https://discuss.huggingface.co/t/how-can-i-download-a-specific-split-of-a-dataset/79027 ### Steps to reproduce the bug 1. Create a Space with the default blank 'gradio' SDK https://huggingface.co/new-space 2. 
Create a file 'app.py' that loads a dataset to only extract a 'validation' split: `ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)` ### Expected behavior Downloading validation split. ### Environment info Default environment for creating a new Space. Relevant to this bug, that is: ``` FROM docker.io/library/python:3.10@sha256:fd0fa50d997eb56ce560c6e5ca6a1f5cf8fdff87572a16ac07fb1f5ca01eb608 --> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1" ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7282/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7282/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7281/comments
https://api.github.com/repos/huggingface/datasets/issues/7281/events
https://github.com/huggingface/datasets/issues/7281
2,640,346,339
I_kwDODunzps6dYHzj
7,281
File not found error
{ "avatar_url": "https://avatars.githubusercontent.com/u/37507786?v=4", "events_url": "https://api.github.com/users/MichielBontenbal/events{/privacy}", "followers_url": "https://api.github.com/users/MichielBontenbal/followers", "following_url": "https://api.github.com/users/MichielBontenbal/following{/other_user}", "gists_url": "https://api.github.com/users/MichielBontenbal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichielBontenbal", "id": 37507786, "login": "MichielBontenbal", "node_id": "MDQ6VXNlcjM3NTA3Nzg2", "organizations_url": "https://api.github.com/users/MichielBontenbal/orgs", "received_events_url": "https://api.github.com/users/MichielBontenbal/received_events", "repos_url": "https://api.github.com/users/MichielBontenbal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichielBontenbal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichielBontenbal/subscriptions", "type": "User", "url": "https://api.github.com/users/MichielBontenbal", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2024-11-07T09:04:49
2024-11-07T09:22:43
null
NONE
null
null
null
### Describe the bug I get a FileNotFoundError: <img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87"> ### Steps to reproduce the bug See screenshot. ### Expected behavior I want to load one audiofile from the dataset. ### Environment info MacOs Intel 14.6.1 (23G93) Python 3.10.9 Numpy 1.23 Datasets latest version
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7281/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7281/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7280/comments
https://api.github.com/repos/huggingface/datasets/issues/7280/events
https://github.com/huggingface/datasets/issues/7280
2,639,977,077
I_kwDODunzps6dWtp1
7,280
Add filename in error message when ReadError or similar occur
{ "avatar_url": "https://avatars.githubusercontent.com/u/37046039?v=4", "events_url": "https://api.github.com/users/elisa-aleman/events{/privacy}", "followers_url": "https://api.github.com/users/elisa-aleman/followers", "following_url": "https://api.github.com/users/elisa-aleman/following{/other_user}", "gists_url": "https://api.github.com/users/elisa-aleman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elisa-aleman", "id": 37046039, "login": "elisa-aleman", "node_id": "MDQ6VXNlcjM3MDQ2MDM5", "organizations_url": "https://api.github.com/users/elisa-aleman/orgs", "received_events_url": "https://api.github.com/users/elisa-aleman/received_events", "repos_url": "https://api.github.com/users/elisa-aleman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elisa-aleman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisa-aleman/subscriptions", "type": "User", "url": "https://api.github.com/users/elisa-aleman", "user_view_type": "public" }
[]
open
false
null
[]
null
5
2024-11-07T06:00:53
2024-11-20T13:23:12
null
NONE
null
null
null
Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files. Whenever downloading a full dataset, some files might be corrupted (either at the source or from downloading corruption). However the errors often only let me know it was a tar file if `tarfile.ReadError` appears on the traceback, and I imagine similarly for other file types. This makes it really hard to debug which file is corrupted, and when dealing with very large datasets, it shouldn't be necessary to force download everything again.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7280/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7279/comments
https://api.github.com/repos/huggingface/datasets/issues/7279/events
https://github.com/huggingface/datasets/pull/7279
2,635,813,932
PR_kwDODunzps6A8pTJ
7,279
Feature proposal: Stacking, potentially heterogeneous, datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/96243987?v=4", "events_url": "https://api.github.com/users/TimCares/events{/privacy}", "followers_url": "https://api.github.com/users/TimCares/followers", "following_url": "https://api.github.com/users/TimCares/following{/other_user}", "gists_url": "https://api.github.com/users/TimCares/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TimCares", "id": 96243987, "login": "TimCares", "node_id": "U_kgDOBbyREw", "organizations_url": "https://api.github.com/users/TimCares/orgs", "received_events_url": "https://api.github.com/users/TimCares/received_events", "repos_url": "https://api.github.com/users/TimCares/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TimCares/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TimCares/subscriptions", "type": "User", "url": "https://api.github.com/users/TimCares", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-05T15:40:50
2024-11-05T15:40:50
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7279.diff", "html_url": "https://github.com/huggingface/datasets/pull/7279", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7279.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7279" }
### Introduction Hello there, I noticed that there are two ways to combine multiple datasets: Either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong) both approaches require the datasets that are combined to have the same features. I think it would be a great idea to add support for combining multiple datasets that might not follow the same schema (i.e. have different features), for example an image and text dataset. That is why I propose a third function of the `datasets.combine` module called `stack_datasets`, which can be used to combine a list of datasets with (potentially) different features. This would look as follows: ```python >>> from datasets import stack_datasets >>> image_dataset = ... >>> next(iter(image_dataset)) {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> } >>> text_dataset = ... >>> next(iter(text_dataset)) {'text': "This is a test."} >>> stacked = stack_datasets(datasets={'i_ds': image_dataset, 't_ds': text_dataset}, stopping_strategy='all_exhausted') >>> next(iter(stacked)) { 'i_ds': {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> } 't_ds': {'text': "This is a test."} } ``` <br /> ### Motivation I motivate this by: **A**: The fact that Pytorch offers a similar functionality under `torch.utils.data.StackDataset` ([link](https://pytorch.org/docs/stable/data.html#torch.utils.data.StackDataset)). **B**: In settings where one would like to e.g. train a Vision-Language model using an image-text dataset, an image dataset, and a text dataset, this functionality would offer a clean and intuitive solution to create multimodal datasets. I am aware that the aforementioned is also feasible without my proposed function, but I believe this offers a nice approach that aligns with existing functionality and is directly provided within the `datasets` package. 
### API `stack_datasets` has two arguments: `datasets` and `stopping_strategy `. <br /> `datasets` is a dictionary of either type `Dict[str, Dataset]` or `Dict[str, IterableDatasets]`, a mixture is not allowed. It contains the names of the datasets (the keys) and the datasets themselves (the values) that should be stacked. Each item returned is a dictionary with one key-value pair for each dataset. The keys are the names of the datasets as provided in the argument `datasets`, and the values are the respective examples from the datasets. <br /> `stopping_strategy` is the same as for `interleave_datasets`. If it is `first_exhausted` we stop if the smallest dataset runs out of examples, if it is `all_exhausted` we stop if all datasets ran out of examples at least once. For `all_exhausted` that means that we may visit examples from datasets multiple times. ### Docs I saw that there are multiple documentations and guides on the HuggingFace website that introduce `concatenate_datasets` and `interleave_datasets`, for example [here](https://huggingface.co/docs/datasets/process#concatenate). If this request is merged I would be willing to add the new functionality at the appropriate points in the documentation (if desired). ### Tests I also added some tests to ensure correctness. Some tests I wrote in [tests/test_iterable_dataset.py](https://github.com/TimCares/datasets/blob/fadc1159debf2a65d44e40cbf7758f2bd2cc8b08/tests/test_iterable_dataset.py#L2169) run for both `Dataset` and `IterableDataset` even though tests for `Dataset` technically do not belong in this script, but I found that this was a nice way to cover more cases with mostly the same code. ### Additional information I tried to write the code in a way so that it is similar to that of `concatenate_datasets` and `interleave_datasets`. I’m open to feedback and willing to make adjustments based on your suggestions, so feel free to give me your take. :)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7279/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7279/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7278/comments
https://api.github.com/repos/huggingface/datasets/issues/7278/events
https://github.com/huggingface/datasets/pull/7278
2,633,436,151
PR_kwDODunzps6A1ORG
7,278
Let soundfile directly read local audio files
{ "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fawazahmed0", "id": 20347013, "login": "fawazahmed0", "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "type": "User", "url": "https://api.github.com/users/fawazahmed0", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-04T17:41:13
2024-11-18T14:01:25
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7278.diff", "html_url": "https://github.com/huggingface/datasets/pull/7278", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7278.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7278" }
- [x] Fixes #7276
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7278/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7278/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7277/comments
https://api.github.com/repos/huggingface/datasets/issues/7277/events
https://github.com/huggingface/datasets/pull/7277
2,632,459,184
PR_kwDODunzps6AyB7O
7,277
Add link to video dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-11-04T10:45:12
2024-11-04T17:05:06
2024-11-04T17:05:06
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7277.diff", "html_url": "https://github.com/huggingface/datasets/pull/7277", "merged_at": "2024-11-04T17:05:06", "patch_url": "https://github.com/huggingface/datasets/pull/7277.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7277" }
This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs. cc @mfarre
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7277/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7277/timeline
null
null
true
6.331667
https://api.github.com/repos/huggingface/datasets/issues/7276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7276/comments
https://api.github.com/repos/huggingface/datasets/issues/7276/events
https://github.com/huggingface/datasets/issues/7276
2,631,917,431
I_kwDODunzps6c3993
7,276
Accessing audio dataset value throws Format not recognised error
{ "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fawazahmed0", "id": 20347013, "login": "fawazahmed0", "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "type": "User", "url": "https://api.github.com/users/fawazahmed0", "user_view_type": "public" }
[]
open
false
null
[]
null
3
2024-11-04T05:59:13
2024-11-09T18:51:52
null
NONE
null
null
null
### Describe the bug Accessing audio dataset value throws `Format not recognised error` ### Steps to reproduce the bug **code:** ```py from datasets import load_dataset dataset = load_dataset("fawazahmed0/bug-audio") for data in dataset["train"]: print(data) ``` **output:** ```bash (mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py [C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed! {'audio': {'path': 'C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037135.mp3', 'array': array([ 0.00000000e+00, -2.86519935e-22, -2.56504911e-21, ..., -1.94239747e-02, -2.42924765e-02, -2.99104657e-02]), 'sampling_rate': 22050}, 'reciter': 'Ghamadi', 'transcription': 'الا عجوز ا في الغبرين', 'line': 3923, 'chapter': 37, 'verse': 135, 'text': 'إِلَّا عَجُوزࣰ ا فِي ٱلۡغَٰبِرِينَ'} Traceback (most recent call last): File "C:\Users\Nawaz-Server\Documents\ml\myest.py", line 5, in <module> for data in dataset["train"]: ~~~~~~~^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\arrow_dataset.py", line 2372, in __iter__ formatted_output = format_table( ^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 639, in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 403, in __call__ return self.format_row(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 444, in format_row row = self.python_features_decoder.decode_row(row) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", 
line 222, in decode_row return self.features.decode_example(row) if self.features else row ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 2042, in decode_example column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 1403, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\audio.py", line 184, in decode_example array, sampling_rate = sf.read(f) ^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 285, in read with SoundFile(file, 'r', samplerate, channels, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 658, in __init__ self._file = self._open(file, mode_int, closefd) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 1216, in _open raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'>: Format not recognised. 
``` ### Expected behavior Everything should work fine, as loading the problematic audio file directly with soundfile package works fine **code:** ``` import soundfile as sf print(sf.read('C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3')) ``` **output:** ```bash (mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py [C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed! (array([ 0.00000000e+00, -8.43723821e-22, -2.45370628e-22, ..., -7.71464454e-03, -6.90496899e-03, -8.63333419e-03]), 22050) ``` ### Environment info - `datasets` version: 3.0.2 - Platform: Windows-11-10.0.22621-SP0 - Python version: 3.12.7 - `huggingface_hub` version: 0.26.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.10.0 - soundfile: 0.12.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7276/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7275/comments
https://api.github.com/repos/huggingface/datasets/issues/7275/events
https://github.com/huggingface/datasets/issues/7275
2,631,713,397
I_kwDODunzps6c3MJ1
7,275
load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46941974?v=4", "events_url": "https://api.github.com/users/santiagobp99/events{/privacy}", "followers_url": "https://api.github.com/users/santiagobp99/followers", "following_url": "https://api.github.com/users/santiagobp99/following{/other_user}", "gists_url": "https://api.github.com/users/santiagobp99/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/santiagobp99", "id": 46941974, "login": "santiagobp99", "node_id": "MDQ6VXNlcjQ2OTQxOTc0", "organizations_url": "https://api.github.com/users/santiagobp99/orgs", "received_events_url": "https://api.github.com/users/santiagobp99/received_events", "repos_url": "https://api.github.com/users/santiagobp99/repos", "site_admin": false, "starred_url": "https://api.github.com/users/santiagobp99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santiagobp99/subscriptions", "type": "User", "url": "https://api.github.com/users/santiagobp99", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-04T03:01:44
2024-11-04T03:01:44
null
NONE
null
null
null
### Describe the bug I am performing two operations I see on a hugging face tutorial (Fine-tune a language model), and I am defining every aspect inside the mapped functions, also some imports of the library because it doesn't identify anything not defined outside that function where the dataset elements are being mapped: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B `- lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=batch_size, num_proc=4, ) - tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) def tokenize_function(examples): model_checkpoint = 'gpt2' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) return tokenizer(examples["text"])` ### Steps to reproduce the bug Currently handle all the imports inside the function ### Expected behavior The code must work as expected in the notebook, but currently this is not happening. https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B ### Environment info print(transformers.__version__) 4.46.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7275/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7274/comments
https://api.github.com/repos/huggingface/datasets/issues/7274/events
https://github.com/huggingface/datasets/pull/7274
2,629,882,821
PR_kwDODunzps6ArEt-
7,274
[MINOR:TYPO] Fix typo in exception text
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-11-01T21:15:29
2024-11-01T21:15:54
null
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7274.diff", "html_url": "https://github.com/huggingface/datasets/pull/7274", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7274" }
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7274/timeline
null
null
true
null
https://api.github.com/repos/huggingface/datasets/issues/7273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7273/comments
https://api.github.com/repos/huggingface/datasets/issues/7273/events
https://github.com/huggingface/datasets/pull/7273
2,628,896,492
PR_kwDODunzps6An6n8
7,273
Raise error for incorrect JSON serialization
{ "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varadhbhatnagar", "id": 20443618, "login": "varadhbhatnagar", "node_id": "MDQ6VXNlcjIwNDQzNjE4", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "type": "User", "url": "https://api.github.com/users/varadhbhatnagar", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2024-11-01T11:54:35
2024-11-18T11:25:01
2024-11-18T11:25:01
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7273.diff", "html_url": "https://github.com/huggingface/datasets/pull/7273", "merged_at": "2024-11-18T11:25:01", "patch_url": "https://github.com/huggingface/datasets/pull/7273.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7273" }
Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`. Issue: #7037 Related PRs: #7039 #7181
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7273/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7273/timeline
null
null
true
407.507222
https://api.github.com/repos/huggingface/datasets/issues/7272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7272/comments
https://api.github.com/repos/huggingface/datasets/issues/7272/events
https://github.com/huggingface/datasets/pull/7272
2,627,223,390
PR_kwDODunzps6AirL2
7,272
fix conda release workflow
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-31T15:56:19
2024-10-31T15:58:35
2024-10-31T15:57:29
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7272.diff", "html_url": "https://github.com/huggingface/datasets/pull/7272", "merged_at": "2024-10-31T15:57:29", "patch_url": "https://github.com/huggingface/datasets/pull/7272.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7272" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7272/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7272/timeline
null
null
true
0.019444
https://api.github.com/repos/huggingface/datasets/issues/7271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7271/comments
https://api.github.com/repos/huggingface/datasets/issues/7271/events
https://github.com/huggingface/datasets/pull/7271
2,627,135,540
PR_kwDODunzps6AiZaj
7,271
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-31T15:22:51
2024-10-31T15:25:27
2024-10-31T15:22:59
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7271.diff", "html_url": "https://github.com/huggingface/datasets/pull/7271", "merged_at": "2024-10-31T15:22:59", "patch_url": "https://github.com/huggingface/datasets/pull/7271.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7271" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7271/timeline
null
null
true
0.002222
https://api.github.com/repos/huggingface/datasets/issues/7270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7270/comments
https://api.github.com/repos/huggingface/datasets/issues/7270/events
https://github.com/huggingface/datasets/pull/7270
2,627,107,016
PR_kwDODunzps6AiTJm
7,270
Release: 3.1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-31T15:10:01
2024-10-31T15:14:23
2024-10-31T15:14:20
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7270.diff", "html_url": "https://github.com/huggingface/datasets/pull/7270", "merged_at": "2024-10-31T15:14:20", "patch_url": "https://github.com/huggingface/datasets/pull/7270.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7270" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7270/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7270/timeline
null
null
true
0.071944
https://api.github.com/repos/huggingface/datasets/issues/7269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7269/comments
https://api.github.com/repos/huggingface/datasets/issues/7269/events
https://github.com/huggingface/datasets/issues/7269
2,626,873,843
I_kwDODunzps6ckunz
7,269
Memory leak when streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4", "events_url": "https://api.github.com/users/Jourdelune/events{/privacy}", "followers_url": "https://api.github.com/users/Jourdelune/followers", "following_url": "https://api.github.com/users/Jourdelune/following{/other_user}", "gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jourdelune", "id": 64205064, "login": "Jourdelune", "node_id": "MDQ6VXNlcjY0MjA1MDY0", "organizations_url": "https://api.github.com/users/Jourdelune/orgs", "received_events_url": "https://api.github.com/users/Jourdelune/received_events", "repos_url": "https://api.github.com/users/Jourdelune/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions", "type": "User", "url": "https://api.github.com/users/Jourdelune", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2024-10-31T13:33:52
2024-11-18T11:46:07
null
NONE
null
null
null
### Describe the bug
I am using a dataset with streaming=True, and the RAM usage grows higher and higher until it is no longer sustainable. I understand that Hugging Face stores data in RAM during streaming, and that with more dataloader workers more shards are kept in RAM, but the issue is that the RAM usage is not constant: after each new shard is loaded, usage increases further.

### Steps to reproduce the bug
Run this code and watch your RAM usage: after each shard of 255 examples, it increases.

```py
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("WaveGenAI/dataset", streaming=True)

dataloader = DataLoader(dataset["train"], num_workers=3)

for i, data in enumerate(dataloader):
    print(i, end="\r")
```

### Expected behavior
The RAM usage should stay constant (just 3 shards loaded in RAM).

### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7269/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7269/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7268/comments
https://api.github.com/repos/huggingface/datasets/issues/7268/events
https://github.com/huggingface/datasets/issues/7268
2,626,664,687
I_kwDODunzps6cj7jv
7,268
load_from_disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/71670961?v=4", "events_url": "https://api.github.com/users/ghaith-mq/events{/privacy}", "followers_url": "https://api.github.com/users/ghaith-mq/followers", "following_url": "https://api.github.com/users/ghaith-mq/following{/other_user}", "gists_url": "https://api.github.com/users/ghaith-mq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghaith-mq", "id": 71670961, "login": "ghaith-mq", "node_id": "MDQ6VXNlcjcxNjcwOTYx", "organizations_url": "https://api.github.com/users/ghaith-mq/orgs", "received_events_url": "https://api.github.com/users/ghaith-mq/received_events", "repos_url": "https://api.github.com/users/ghaith-mq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghaith-mq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghaith-mq/subscriptions", "type": "User", "url": "https://api.github.com/users/ghaith-mq", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2024-10-31T11:51:56
2024-10-31T14:43:47
null
NONE
null
null
null
### Describe the bug
I have data saved with `save_to_disk`. The data is big (700 GB). When I try loading it, the only option is `load_from_disk`, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that?

### Steps to reproduce the bug
Try to load data using `load_from_disk` after it was saved using `save_to_disk`.

### Expected behavior
The data should load without copying to a tmp directory; instead I run out of disk space.

### Environment info
latest version
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7268/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7267/comments
https://api.github.com/repos/huggingface/datasets/issues/7267/events
https://github.com/huggingface/datasets/issues/7267
2,626,490,029
I_kwDODunzps6cjQ6t
7,267
Source installation fails on Macintosh with python 3.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4", "events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}", "followers_url": "https://api.github.com/users/mayankagarwals/followers", "following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}", "gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mayankagarwals", "id": 39498938, "login": "mayankagarwals", "node_id": "MDQ6VXNlcjM5NDk4OTM4", "organizations_url": "https://api.github.com/users/mayankagarwals/orgs", "received_events_url": "https://api.github.com/users/mayankagarwals/received_events", "repos_url": "https://api.github.com/users/mayankagarwals/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions", "type": "User", "url": "https://api.github.com/users/mayankagarwals", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2024-10-31T10:18:45
2024-11-04T22:18:06
null
NONE
null
null
null
### Describe the bug
Hi, decord is a dev dependency that has not been maintained for a couple of years. It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs. The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem. Happy to raise a PR.

### Steps to reproduce the bug
Source installation as mentioned in CONTRIBUTING.md

### Expected behavior
Installation succeeds without decord failing to install.

### Environment info
python=3.10, M3 Mac
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7267/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7267/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7266/comments
https://api.github.com/repos/huggingface/datasets/issues/7266/events
https://github.com/huggingface/datasets/issues/7266
2,624,666,087
I_kwDODunzps6ccTnn
7,266
The dataset viewer should be available soon. Please retry later.
{ "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/viiika", "id": 39821659, "login": "viiika", "node_id": "MDQ6VXNlcjM5ODIxNjU5", "organizations_url": "https://api.github.com/users/viiika/orgs", "received_events_url": "https://api.github.com/users/viiika/received_events", "repos_url": "https://api.github.com/users/viiika/repos", "site_admin": false, "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "type": "User", "url": "https://api.github.com/users/viiika", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-30T16:32:00
2024-10-31T03:48:11
2024-10-31T03:48:10
NONE
null
null
null
### Describe the bug
After waiting for 2 hours, it still shows "The dataset viewer should be available soon. Please retry later."

### Steps to reproduce the bug
Dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT

### Expected behavior
Present the dataset viewer.

### Environment info
NA
{ "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/viiika", "id": 39821659, "login": "viiika", "node_id": "MDQ6VXNlcjM5ODIxNjU5", "organizations_url": "https://api.github.com/users/viiika/orgs", "received_events_url": "https://api.github.com/users/viiika/received_events", "repos_url": "https://api.github.com/users/viiika/repos", "site_admin": false, "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "type": "User", "url": "https://api.github.com/users/viiika", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7266/timeline
null
completed
false
11.269444
https://api.github.com/repos/huggingface/datasets/issues/7265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7265/comments
https://api.github.com/repos/huggingface/datasets/issues/7265/events
https://github.com/huggingface/datasets/pull/7265
2,624,090,418
PR_kwDODunzps6AYofJ
7,265
Disallow video push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-30T13:21:55
2024-10-30T13:36:05
2024-10-30T13:36:02
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7265.diff", "html_url": "https://github.com/huggingface/datasets/pull/7265", "merged_at": "2024-10-30T13:36:02", "patch_url": "https://github.com/huggingface/datasets/pull/7265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7265" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7265/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7265/timeline
null
null
true
0.235278
https://api.github.com/repos/huggingface/datasets/issues/7264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7264/comments
https://api.github.com/repos/huggingface/datasets/issues/7264/events
https://github.com/huggingface/datasets/pull/7264
2,624,047,640
PR_kwDODunzps6AYfwL
7,264
fix docs relative links
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-30T13:07:34
2024-10-30T13:10:13
2024-10-30T13:09:02
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7264.diff", "html_url": "https://github.com/huggingface/datasets/pull/7264", "merged_at": "2024-10-30T13:09:02", "patch_url": "https://github.com/huggingface/datasets/pull/7264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7264" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7264/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7264/timeline
null
null
true
0.024444
https://api.github.com/repos/huggingface/datasets/issues/7263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7263/comments
https://api.github.com/repos/huggingface/datasets/issues/7263/events
https://github.com/huggingface/datasets/pull/7263
2,621,844,054
PR_kwDODunzps6ARg7m
7,263
Small addition to video docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-29T16:58:37
2024-10-29T17:01:05
2024-10-29T16:59:10
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7263.diff", "html_url": "https://github.com/huggingface/datasets/pull/7263", "merged_at": "2024-10-29T16:59:10", "patch_url": "https://github.com/huggingface/datasets/pull/7263.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7263" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7263/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7263/timeline
null
null
true
0.009167
https://api.github.com/repos/huggingface/datasets/issues/7262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7262/comments
https://api.github.com/repos/huggingface/datasets/issues/7262/events
https://github.com/huggingface/datasets/pull/7262
2,620,879,059
PR_kwDODunzps6AOWI8
7,262
Allow video with disabled decoding without decord
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-29T10:54:04
2024-10-29T10:56:19
2024-10-29T10:55:37
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7262.diff", "html_url": "https://github.com/huggingface/datasets/pull/7262", "merged_at": "2024-10-29T10:55:37", "patch_url": "https://github.com/huggingface/datasets/pull/7262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7262" }
For the viewer: this way it can use `Video(decode=False)` and doesn't need decord (which causes segfaults).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7262/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7262/timeline
null
null
true
0.025833
https://api.github.com/repos/huggingface/datasets/issues/7261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7261/comments
https://api.github.com/repos/huggingface/datasets/issues/7261/events
https://github.com/huggingface/datasets/issues/7261
2,620,510,840
I_kwDODunzps6cMdJ4
7,261
Cannot load the cache when mapping the dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/43033959?v=4", "events_url": "https://api.github.com/users/zhangn77/events{/privacy}", "followers_url": "https://api.github.com/users/zhangn77/followers", "following_url": "https://api.github.com/users/zhangn77/following{/other_user}", "gists_url": "https://api.github.com/users/zhangn77/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhangn77", "id": 43033959, "login": "zhangn77", "node_id": "MDQ6VXNlcjQzMDMzOTU5", "organizations_url": "https://api.github.com/users/zhangn77/orgs", "received_events_url": "https://api.github.com/users/zhangn77/received_events", "repos_url": "https://api.github.com/users/zhangn77/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhangn77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangn77/subscriptions", "type": "User", "url": "https://api.github.com/users/zhangn77", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-10-29T08:29:40
2024-10-29T08:29:40
null
NONE
null
null
null
### Describe the bug

I'm training the flux controlnet. `train_dataset.map()` takes a long time to finish. However, when I killed one training process and wanted to restart a new training with the same dataset, I couldn't reuse the mapped result even though I defined the cache dir for the dataset.

```python
with accelerator.main_process_first():
    from datasets.fingerprint import Hasher

    # fingerprint used by the cache for the other processes to load the result
    # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401
    new_fingerprint = Hasher.hash(args)
    train_dataset = train_dataset.map(
        compute_embeddings_fn,
        batched=True,
        new_fingerprint=new_fingerprint,
        batch_size=10,
    )
```

### Steps to reproduce the bug

Train flux controlnet and start again.

### Expected behavior

Will not map again.

### Environment info

latest diffusers
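Cache reuse across runs hinges on the fingerprint being byte-identical between launches. A minimal sketch of that idea, not using the `datasets` library (the function and argument names here are illustrative): if `args` contains anything that changes between launches (an output dir, a timestamp), the hash changes and the cache misses.

```python
import hashlib
import json

def stable_fingerprint(args: dict) -> str:
    # Deterministically hash run arguments, mimicking what Hasher.hash(args)
    # is used for above: identical args -> identical fingerprint -> cache hit.
    payload = json.dumps(args, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

run1 = stable_fingerprint({"batch_size": 10, "model": "flux-controlnet"})
run2 = stable_fingerprint({"batch_size": 10, "model": "flux-controlnet"})
run3 = stable_fingerprint({"batch_size": 10, "model": "flux-controlnet",
                           "output_dir": "/tmp/run-2024"})

assert run1 == run2  # same args across restarts -> same fingerprint
assert run1 != run3  # any changed or extra arg -> different fingerprint, cache miss
```

A common cause of the reported behavior is exactly this: one varying field inside `args` silently invalidating the fingerprint on every restart.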
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7261/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7261/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7260/comments
https://api.github.com/repos/huggingface/datasets/issues/7260/events
https://github.com/huggingface/datasets/issues/7260
2,620,014,285
I_kwDODunzps6cKj7N
7,260
cache can't be cleaned or disabled
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007828?v=4", "events_url": "https://api.github.com/users/charliedream1/events{/privacy}", "followers_url": "https://api.github.com/users/charliedream1/followers", "following_url": "https://api.github.com/users/charliedream1/following{/other_user}", "gists_url": "https://api.github.com/users/charliedream1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/charliedream1", "id": 15007828, "login": "charliedream1", "node_id": "MDQ6VXNlcjE1MDA3ODI4", "organizations_url": "https://api.github.com/users/charliedream1/orgs", "received_events_url": "https://api.github.com/users/charliedream1/received_events", "repos_url": "https://api.github.com/users/charliedream1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/charliedream1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/charliedream1/subscriptions", "type": "User", "url": "https://api.github.com/users/charliedream1", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-10-29T03:15:28
2024-10-29T03:18:22
null
NONE
null
null
null
### Describe the bug

I tried the following ways, but the cache can't be disabled. I have 2 TB of data, but I also get more than 2 TB of cache files, which puts pressure on storage. I need to disable the cache or clean it immediately after processing. None of the following ways work, please give some help!

```python
from datasets import load_dataset, disable_caching
from transformers import AutoTokenizer

disable_caching()

tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path)

def tokenization_fn(examples):
    column_name = 'text' if 'text' in examples else 'data'
    tokenized_inputs = tokenizer(
        examples[column_name],
        return_special_tokens_mask=True,
        truncation=False,
        max_length=tokenizer.model_max_length
    )
    return tokenized_inputs

data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=None)
data.cleanup_cache_files()
updated_dataset = data.map(tokenization_fn, load_from_cache_file=False)
updated_dataset.cleanup_cache_files()
```

### Expected behavior

No cache file generated.

### Environment info

- Ubuntu 20.04.6 LTS
- datasets 3.0.2
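The disk-pressure pattern described above, sketched without the `datasets` API: route intermediate artifacts to a dedicated throwaway directory and guarantee it is removed as soon as processing finishes. All names below are illustrative, not `datasets` internals.

```python
import os
import shutil
import tempfile

def process_with_ephemeral_cache(records, tokenize):
    # Write intermediate results under a throwaway directory and guarantee
    # it is removed afterwards, so cache files never accumulate on disk.
    cache_dir = tempfile.mkdtemp(prefix="ephemeral_cache_")
    try:
        out = []
        for i, text in enumerate(records):
            tokens = tokenize(text)
            # illustrative intermediate artifact, analogous to Arrow cache files
            with open(os.path.join(cache_dir, f"{i}.txt"), "w") as f:
                f.write(" ".join(tokens))
            out.append(tokens)
        return out
    finally:
        # cleaned immediately after processing, even on error
        shutil.rmtree(cache_dir, ignore_errors=True)

result = process_with_ephemeral_cache(["hello world"], str.split)
```

The `finally` block is the key design choice: cleanup runs whether the map succeeds or the process is interrupted mid-way by an exception.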
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7260/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7260/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7259/comments
https://api.github.com/repos/huggingface/datasets/issues/7259/events
https://github.com/huggingface/datasets/pull/7259
2,618,909,241
PR_kwDODunzps6AIEY-
7,259
Don't embed videos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-28T16:25:10
2024-10-28T16:27:34
2024-10-28T16:26:01
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7259.diff", "html_url": "https://github.com/huggingface/datasets/pull/7259", "merged_at": "2024-10-28T16:26:01", "patch_url": "https://github.com/huggingface/datasets/pull/7259.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7259" }
Don't include video bytes when running `download_and_prepare(format="parquet")`. This also affects `push_to_hub`, which will just upload the local paths of the videos though.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7259/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7259/timeline
null
null
true
0.014167
https://api.github.com/repos/huggingface/datasets/issues/7258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7258/comments
https://api.github.com/repos/huggingface/datasets/issues/7258/events
https://github.com/huggingface/datasets/pull/7258
2,618,758,399
PR_kwDODunzps6AHlK1
7,258
Always set non-null writer batch size
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-28T15:26:14
2024-10-28T15:28:41
2024-10-28T15:26:29
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7258.diff", "html_url": "https://github.com/huggingface/datasets/pull/7258", "merged_at": "2024-10-28T15:26:29", "patch_url": "https://github.com/huggingface/datasets/pull/7258.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7258" }
Fixes a bug introduced in #7230 that was preventing the Viewer's limited writes from working.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7258/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7258/timeline
null
null
true
0.004167
https://api.github.com/repos/huggingface/datasets/issues/7257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7257/comments
https://api.github.com/repos/huggingface/datasets/issues/7257/events
https://github.com/huggingface/datasets/pull/7257
2,618,602,173
PR_kwDODunzps6AHEfy
7,257
fix ci for pyarrow 18
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-28T14:31:34
2024-10-28T14:34:05
2024-10-28T14:31:44
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7257.diff", "html_url": "https://github.com/huggingface/datasets/pull/7257", "merged_at": "2024-10-28T14:31:44", "patch_url": "https://github.com/huggingface/datasets/pull/7257.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7257" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7257/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7257/timeline
null
null
true
0.002778
https://api.github.com/repos/huggingface/datasets/issues/7256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7256/comments
https://api.github.com/repos/huggingface/datasets/issues/7256/events
https://github.com/huggingface/datasets/pull/7256
2,618,580,188
PR_kwDODunzps6AG_qk
7,256
Retry all requests timeouts
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-28T14:23:16
2024-10-28T14:56:28
2024-10-28T14:56:26
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7256.diff", "html_url": "https://github.com/huggingface/datasets/pull/7256", "merged_at": "2024-10-28T14:56:26", "patch_url": "https://github.com/huggingface/datasets/pull/7256.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7256" }
as reported in https://github.com/huggingface/datasets/issues/6843
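The retry-on-timeout behavior this PR extends can be sketched as a generic backoff loop; the function and parameter names below are illustrative, not the `datasets` internals.

```python
import time

def with_retries(fn, *, max_retries=3, base_delay=0.01, retry_on=(TimeoutError,)):
    # Call fn, retrying on the configured timeout exceptions
    # with exponential backoff; re-raise once retries are exhausted.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # fails twice with a timeout, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("request timed out")
    return "ok"

assert with_retries(flaky) == "ok"
assert calls["n"] == 3
```

Retrying only on an explicit tuple of exception types matters: retrying on every exception would mask real errors such as authentication failures.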
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7256/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7256/timeline
null
null
true
0.552778
https://api.github.com/repos/huggingface/datasets/issues/7255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7255/comments
https://api.github.com/repos/huggingface/datasets/issues/7255/events
https://github.com/huggingface/datasets/pull/7255
2,618,540,355
PR_kwDODunzps6AG25R
7,255
fix decord import
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-28T14:08:19
2024-10-28T14:10:43
2024-10-28T14:09:14
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7255.diff", "html_url": "https://github.com/huggingface/datasets/pull/7255", "merged_at": "2024-10-28T14:09:14", "patch_url": "https://github.com/huggingface/datasets/pull/7255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7255" }
Delay the import until `Video()` is instantiated, and also import duckdb first (otherwise importing duckdb later causes a segfault).
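The delayed-import pattern described here can be sketched generically. In this sketch, the stdlib module `colorsys` stands in for decord (which may not be installed), and the proxy class is illustrative, not the actual `Video` feature code.

```python
import importlib

class LazyModule:
    # Proxy that defers the real import until the first attribute access,
    # so merely constructing the owning feature never triggers the import.
    def __init__(self, name):
        self._name = name
        self._mod = None

    def __getattr__(self, attr):
        # __getattr__ only fires for attributes not found on the instance,
        # so _name/_mod lookups above never recurse into this method
        if self._mod is None:
            self._mod = importlib.import_module(self._name)
        return getattr(self._mod, attr)

decord = LazyModule("colorsys")  # stand-in for the real decord module
assert decord._mod is None       # nothing imported yet
_ = decord.hsv_to_rgb            # first attribute access triggers the import
assert decord._mod is not None
```

This keeps a segfault-prone (or simply heavy) dependency out of the import path until the feature is actually used.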
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7255/timeline
null
null
true
0.015278
https://api.github.com/repos/huggingface/datasets/issues/7254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7254/comments
https://api.github.com/repos/huggingface/datasets/issues/7254/events
https://github.com/huggingface/datasets/issues/7254
2,616,174,996
I_kwDODunzps6b76mU
7,254
mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy")
{ "avatar_url": "https://avatars.githubusercontent.com/u/97193607?v=4", "events_url": "https://api.github.com/users/Akhil-CM/events{/privacy}", "followers_url": "https://api.github.com/users/Akhil-CM/followers", "following_url": "https://api.github.com/users/Akhil-CM/following{/other_user}", "gists_url": "https://api.github.com/users/Akhil-CM/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Akhil-CM", "id": 97193607, "login": "Akhil-CM", "node_id": "U_kgDOBcsOhw", "organizations_url": "https://api.github.com/users/Akhil-CM/orgs", "received_events_url": "https://api.github.com/users/Akhil-CM/received_events", "repos_url": "https://api.github.com/users/Akhil-CM/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Akhil-CM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Akhil-CM/subscriptions", "type": "User", "url": "https://api.github.com/users/Akhil-CM", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2024-10-26T22:06:27
2024-10-26T22:07:37
null
NONE
null
null
null
### Describe the bug

If the user provides a `Features` value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected by `with_format("numpy")`, which should return an `np.array` with the `dtype` the user provided for `Array2D`. It seems floats are set to `float32` and ints to `int64`.

### Steps to reproduce the bug

```python
import numpy as np

import datasets
from datasets import Dataset, Features, Array2D

print(f"datasets version: {datasets.__version__}")

data_info = {
    "arr_float": "float64",
    "arr_int": "int32",
}

sample = {key: [np.zeros([4, 5], dtype=dtype)] for key, dtype in data_info.items()}
features = {key: Array2D(shape=(None, 5), dtype=dtype) for key, dtype in data_info.items()}
features = Features(features)

dataset = Dataset.from_dict(sample, features=features)
ds = dataset.with_format("numpy")

for key in features:
    print(f"{key} feature dtype: ", ds.features[key].dtype)
    print(f"{key} dtype:", ds[key].dtype)
```

Output:

```bash
datasets version: 3.0.2
arr_float feature dtype:  float64
arr_float dtype: float32
arr_int feature dtype:  int32
arr_int dtype: int64
```

### Expected behavior

It should return an `np.array` with the `dtype` the user provided for the corresponding member in the `Features` value.

### Environment info

- `datasets` version: 3.0.2
- Platform: Linux-6.11.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
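Until the formatter respects the feature dtype, one workaround is to cast the returned arrays back to the dtypes declared in the features. This sketch uses plain NumPy arrays standing in for the `with_format("numpy")` output, with the same dtypes the report observed.

```python
import numpy as np

declared = {"arr_float": "float64", "arr_int": "int32"}

# dtypes as actually returned in the report (float32 / int64 instead of the declared ones)
returned = {
    "arr_float": np.zeros((4, 5), dtype="float32"),
    "arr_int": np.zeros((4, 5), dtype="int64"),
}

# cast each column back to its declared feature dtype
fixed = {key: arr.astype(declared[key]) for key, arr in returned.items()}

assert fixed["arr_float"].dtype == np.dtype("float64")
assert fixed["arr_int"].dtype == np.dtype("int32")
```

Note that `astype` copies the array, so for large datasets this doubles peak memory for the column being cast.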
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7254/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7254/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7253/comments
https://api.github.com/repos/huggingface/datasets/issues/7253/events
https://github.com/huggingface/datasets/issues/7253
2,615,862,202
I_kwDODunzps6b6uO6
7,253
Unable to upload a large dataset zip either from command line or UI
{ "avatar_url": "https://avatars.githubusercontent.com/u/159609047?v=4", "events_url": "https://api.github.com/users/vakyansh/events{/privacy}", "followers_url": "https://api.github.com/users/vakyansh/followers", "following_url": "https://api.github.com/users/vakyansh/following{/other_user}", "gists_url": "https://api.github.com/users/vakyansh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vakyansh", "id": 159609047, "login": "vakyansh", "node_id": "U_kgDOCYNw1w", "organizations_url": "https://api.github.com/users/vakyansh/orgs", "received_events_url": "https://api.github.com/users/vakyansh/received_events", "repos_url": "https://api.github.com/users/vakyansh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vakyansh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vakyansh/subscriptions", "type": "User", "url": "https://api.github.com/users/vakyansh", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2024-10-26T13:17:06
2024-10-26T13:17:06
null
NONE
null
null
null
### Describe the bug

Unable to upload a large dataset zip from either the command line or the UI. The UI simply says "error". I am trying to upload a tar.gz file of 17 GB.

<img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34">
<img width="755" alt="image" src="https://github.com/user-attachments/assets/a8d4acda-7f02-4279-9c2d-b2e0282b4faa">

### Steps to reproduce the bug

Upload a large file.

### Expected behavior

The file should upload without any issue.

### Environment info

None
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7253/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7253/timeline
null
null
false
null
https://api.github.com/repos/huggingface/datasets/issues/7252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7252/comments
https://api.github.com/repos/huggingface/datasets/issues/7252/events
https://github.com/huggingface/datasets/pull/7252
2,613,795,544
PR_kwDODunzps5_41s7
7,252
Add IterableDataset.shard()
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2024-10-25T11:07:12
2024-10-25T15:45:24
2024-10-25T15:45:22
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7252.diff", "html_url": "https://github.com/huggingface/datasets/pull/7252", "merged_at": "2024-10-25T15:45:21", "patch_url": "https://github.com/huggingface/datasets/pull/7252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7252" }
Will be useful to distribute a dataset across workers (other than pytorch), e.g. spark. I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility. And a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index). Breaking change: the new default for `contiguous` in `Dataset.shard()` is `True`, but imo not a big deal since I couldn't find any usage of `contiguous=False` internally (we always do contiguous=True for map-style datasets since it's more optimized) or in the wild
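The contiguous-vs-round-robin sharding semantics discussed above can be sketched in plain Python. This is a minimal illustration of the index-selection logic (not the actual `datasets` implementation); the helper name `shard_indices` is hypothetical:

```python
def shard_indices(n, num_shards, index, contiguous=True):
    """Return the example indices that shard `index` out of `num_shards`
    would receive from a dataset of `n` examples.

    contiguous=True: each shard gets one contiguous slice, with the first
    `n % num_shards` shards getting one extra example.
    contiguous=False: examples are dealt out round-robin (index, index+num_shards, ...).
    """
    if contiguous:
        div, mod = divmod(n, num_shards)
        start = div * index + min(index, mod)
        end = start + div + (1 if index < mod else 0)
        return list(range(start, end))
    # round-robin assignment
    return list(range(index, n, num_shards))
```

For example, with 10 examples and 3 shards, contiguous sharding yields `[0,1,2,3]`, `[4,5,6]`, `[7,8,9]`, whereas round-robin yields `[0,3,6,9]`, `[1,4,7]`, `[2,5,8]` — in both cases the shards partition the dataset.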
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7252/timeline
null
null
true
4.636111