| Column | Type | Values |
|---|---|---|
| id | int64 | 599M to 2.47B |
| url | string | lengths 58 to 61 |
| repository_url | string | 1 value |
| events_url | string | lengths 65 to 68 |
| labels | list | lengths 0 to 4 |
| active_lock_reason | null | |
| updated_at | string | lengths 20 to 20 |
| assignees | list | lengths 0 to 4 |
| html_url | string | lengths 46 to 51 |
| author_association | string | 4 values |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | sequence | lengths 0 to 30 |
| title | string | lengths 1 to 290 |
| reactions | dict | |
| node_id | string | lengths 18 to 32 |
| pull_request | dict | |
| created_at | string | lengths 20 to 20 |
| comments_url | string | lengths 67 to 70 |
| body | string | lengths 0 to 228k |
| user | dict | |
| labels_url | string | lengths 72 to 75 |
| timeline_url | string | lengths 67 to 70 |
| state | string | 2 values |
| locked | bool | 1 class |
| number | int64 | 1 to 7.11k |
| performed_via_github_app | null | |
| closed_at | string | lengths 20 to 20 |
| assignee | dict | |
| is_pull_request | bool | 2 classes |
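The records below are flattened one field per line, in the same column order as the table above, so each issue carries 31 fields from `id` through `is_pull_request`. As a minimal sketch of how a dump with this schema could be loaded and inspected with the `datasets` library, assuming it is hosted on the Hub; the repository id here is a placeholder, not the actual location of this data:

```python
from datasets import load_dataset

# Placeholder repo id: replace with the Hub repository that actually hosts this issues dump.
ds = load_dataset("your-username/github-issues", split="train")

# The features should mirror the column table above (id, url, labels, comments, body, ...).
print(ds.features)

# Peek at one flattened record: its title, state, and how many comments it carries.
example = ds[0]
print(example["title"], example["state"], len(example["comments"]))
```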
2,473,367,848
https://api.github.com/repos/huggingface/datasets/issues/7109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7109/events
[]
null
2024-08-19T13:29:12Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/7109
MEMBER
null
null
null
[]
ConnectionError for gated datasets and unauthenticated users
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions" }
I_kwDODunzps6TbJko
null
2024-08-19T13:27:45Z
https://api.github.com/repos/huggingface/datasets/issues/7109/comments
Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852 We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before). See: - https://github.com/huggingface/dataset-viewer/issues/3025 - https://github.com/huggingface/huggingface_hub/issues/2457
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7109/timeline
open
false
7,109
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,470,665,327
https://api.github.com/repos/huggingface/datasets/issues/7108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7108/events
[]
null
2024-08-19T13:21:12Z
[]
https://github.com/huggingface/datasets/issues/7108
NONE
completed
null
null
[ "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?", "I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.", "I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.", "maybe an issue with the cookie. cc @Wauplin @coyotte508 " ]
website broken: Create a new dataset repository, doesn't create a new repo in Firefox
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions" }
I_kwDODunzps6TQ1xv
null
2024-08-16T17:23:00Z
https://api.github.com/repos/huggingface/datasets/issues/7108/comments
### Describe the bug This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644 This page is broken. https://huggingface.co/new-dataset I fill in the form with my text, and click `Create Dataset`. ![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6) Then the form gets wiped. And no repo got created. No error message visible in the developer console. ![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3) # Idea for improvement For better UX, if the repo cannot be created, then show an error message, that something went wrong. # Work around, that works for me ```python from huggingface_hub import HfApi, HfFolder repo_id = 'simon-arc-solve-fractal-v3' api = HfApi() username = api.whoami()['name'] repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset") ``` ### Steps to reproduce the bug Go https://huggingface.co/new-dataset Fill in the form. Click `Create dataset`. Now the form is cleared. And the page doesn't jump anywhere. ### Expected behavior The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo. ### Environment info Firefox 128.0.3 (64-bit) macOS Sonoma 14.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye" }
https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7108/timeline
closed
false
7,108
null
2024-08-19T06:52:48Z
null
false
2,470,444,732
https://api.github.com/repos/huggingface/datasets/issues/7107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7107/events
[]
null
2024-08-18T09:28:43Z
[]
https://github.com/huggingface/datasets/issues/7107
NONE
completed
null
null
[ "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now", "+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.", "I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ", "There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset." ]
load_dataset broken in 2.21.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions" }
I_kwDODunzps6TP_68
null
2024-08-16T14:59:51Z
https://api.github.com/repos/huggingface/datasets/issues/7107/comments
### Describe the bug `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work till 2.20.0 but doesn't work in 2.21.0 In 2.20.0: ![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9) in 2.21.0: ![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f) ### Steps to reproduce the bug 1. Spin up a new google collab 2. `pip install datasets==2.21.0` 3. `import datasets` 4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` 5. Will throw an error. ### Expected behavior Try steps 1-5 again but replace datasets version with 2.20.0, it will work ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.5 - PyArrow version: 17.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4", "events_url": "https://api.github.com/users/anjor/events{/privacy}", "followers_url": "https://api.github.com/users/anjor/followers", "following_url": "https://api.github.com/users/anjor/following{/other_user}", "gists_url": "https://api.github.com/users/anjor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anjor", "id": 1911631, "login": "anjor", "node_id": "MDQ6VXNlcjE5MTE2MzE=", "organizations_url": "https://api.github.com/users/anjor/orgs", "received_events_url": "https://api.github.com/users/anjor/received_events", "repos_url": "https://api.github.com/users/anjor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anjor/subscriptions", "type": "User", "url": "https://api.github.com/users/anjor" }
https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7107/timeline
closed
false
7,107
null
2024-08-18T09:27:12Z
null
false
2,469,854,262
https://api.github.com/repos/huggingface/datasets/issues/7106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7106/events
[]
null
2024-08-16T09:31:37Z
[]
https://github.com/huggingface/datasets/pull/7106
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7106). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Rename LargeList.dtype to LargeList.feature
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7106/reactions" }
PR_kwDODunzps54jntM
{ "diff_url": "https://github.com/huggingface/datasets/pull/7106.diff", "html_url": "https://github.com/huggingface/datasets/pull/7106", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7106.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7106" }
2024-08-16T09:12:04Z
https://api.github.com/repos/huggingface/datasets/issues/7106/comments
Rename `LargeList.dtype` to `LargeList.feature`. Note that `dtype` is usually used for NumPy data types ("int64", "float32",...): see `Value.dtype`. However, `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead. With this renaming: - we avoid confusion about the expected type and - we also align `LargeList` with `Sequence`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7106/timeline
open
false
7,106
null
null
null
true
2,468,207,039
https://api.github.com/repos/huggingface/datasets/issues/7105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7105/events
[]
null
2024-08-19T15:08:49Z
[]
https://github.com/huggingface/datasets/pull/7105
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7105). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Nice\r\n\r\n<img width=\"141\" alt=\"Capture d’écran 2024-08-19 à 15 25 00\" src=\"https://github.com/user-attachments/assets/18c7b3ec-a57e-45d7-9b19-0b12df9feccd\">\r\n", "fyi the CI failure on test_py310_numpy2 is unrelated to this PR (it's a dependency install failure)" ]
Use `huggingface_hub` cache
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7105/reactions" }
PR_kwDODunzps54eZ0D
{ "diff_url": "https://github.com/huggingface/datasets/pull/7105.diff", "html_url": "https://github.com/huggingface/datasets/pull/7105", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7105.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7105" }
2024-08-15T14:45:22Z
https://api.github.com/repos/huggingface/datasets/issues/7105/comments
wip - use `hf_hub_download()` from `huggingface_hub` for HF files - `datasets` cache_dir is still used for: - caching datasets as Arrow files (that back `Dataset` objects) - extracted archives, uncompressed files - files downloaded via http (datasets with scripts) - I removed code that were made for http files (and also the dummy_data / mock_download_manager stuff that happened to rely on them and have been legacy for a while now)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7105/timeline
open
false
7,105
null
null
null
true
2,467,788,212
https://api.github.com/repos/huggingface/datasets/issues/7104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7104/events
[]
null
2024-08-15T10:24:13Z
[]
https://github.com/huggingface/datasets/pull/7104
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7104). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005343 / 0.011353 (-0.006010) | 0.003562 / 0.011008 (-0.007447) | 0.062785 / 0.038508 (0.024277) | 0.031459 / 0.023109 (0.008349) | 0.246497 / 0.275898 (-0.029401) | 0.268258 / 0.323480 (-0.055222) | 0.003201 / 0.007986 (-0.004785) | 0.004153 / 0.004328 (-0.000175) | 0.049003 / 0.004250 (0.044753) | 0.042780 / 0.037052 (0.005728) | 0.263857 / 0.258489 (0.005368) | 0.278578 / 0.293841 (-0.015263) | 0.030357 / 0.128546 (-0.098190) | 0.012341 / 0.075646 (-0.063305) | 0.206010 / 0.419271 (-0.213262) | 0.036244 / 0.043533 (-0.007289) | 0.245799 / 0.255139 (-0.009340) | 0.265467 / 0.283200 (-0.017733) | 0.019473 / 0.141683 (-0.122210) | 1.147913 / 1.452155 (-0.304242) | 1.209968 / 1.492716 (-0.282749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099393 / 0.018006 (0.081387) | 0.300898 / 0.000490 (0.300408) | 0.000258 / 0.000200 (0.000058) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018888 / 0.037411 (-0.018523) | 0.062452 / 0.014526 (0.047926) | 0.073799 / 0.176557 (-0.102757) | 0.121297 / 0.737135 (-0.615839) | 0.074855 / 0.296338 (-0.221484) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283969 / 0.215209 (0.068760) | 2.808820 / 2.077655 (0.731165) | 1.446106 / 1.504120 (-0.058014) | 1.321622 / 1.541195 (-0.219573) | 1.348317 / 1.468490 (-0.120173) | 0.738369 / 4.584777 (-3.846408) | 2.349825 / 3.745712 (-1.395887) | 2.913964 / 5.269862 (-2.355897) | 1.870585 / 4.565676 (-2.695092) | 0.080141 / 0.424275 (-0.344134) | 0.005174 / 0.007607 (-0.002433) | 0.335977 / 0.226044 (0.109933) | 3.356267 / 2.268929 (1.087338) | 1.811149 / 55.444624 (-53.633475) | 1.510685 / 6.876477 (-5.365792) | 1.524960 / 2.142072 (-0.617112) | 0.803900 / 4.805227 (-4.001328) | 0.138294 / 6.500664 (-6.362370) | 0.042241 / 0.075469 (-0.033229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975597 / 1.841788 (-0.866191) | 11.395109 / 8.074308 (3.320801) | 9.837724 / 10.191392 (-0.353668) | 0.141474 / 0.680424 (-0.538950) | 0.015075 / 0.534201 (-0.519126) | 0.304285 / 0.579283 (-0.274998) | 0.267845 / 0.434364 (-0.166519) | 0.342808 / 0.540337 (-0.197529) | 0.434299 / 1.386936 (-0.952637) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005612 / 0.011353 (-0.005741) | 0.003808 / 0.011008 (-0.007201) | 0.050533 / 0.038508 (0.012024) | 0.032635 / 0.023109 (0.009526) | 0.265522 / 0.275898 (-0.010376) | 0.289763 / 0.323480 (-0.033716) | 0.004395 / 0.007986 (-0.003590) | 0.002868 / 0.004328 (-0.001460) | 0.048443 / 0.004250 (0.044193) | 0.040047 / 0.037052 (0.002995) | 0.279013 / 0.258489 (0.020524) | 0.314499 / 0.293841 (0.020658) | 0.032321 / 0.128546 (-0.096225) | 0.011902 / 0.075646 (-0.063744) | 0.059827 / 0.419271 (-0.359445) | 0.034388 / 0.043533 (-0.009145) | 0.270660 / 0.255139 (0.015521) | 0.290776 / 0.283200 (0.007576) | 0.017875 / 0.141683 (-0.123808) | 1.188085 / 1.452155 (-0.264070) | 1.221384 / 1.492716 (-0.271332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095619 / 0.018006 (0.077613) | 0.305331 / 0.000490 (0.304841) | 0.000217 / 0.000200 (0.000018) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022481 / 0.037411 (-0.014930) | 0.076957 / 0.014526 (0.062431) | 0.087830 / 0.176557 (-0.088726) | 0.128290 / 0.737135 (-0.608845) | 0.090565 / 0.296338 (-0.205774) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291861 / 0.215209 (0.076652) | 2.869776 / 2.077655 (0.792121) | 1.575114 / 1.504120 (0.070994) | 1.449873 / 1.541195 (-0.091322) | 1.450333 / 1.468490 (-0.018158) | 0.723319 / 4.584777 (-3.861458) | 0.972603 / 3.745712 (-2.773109) | 2.940909 / 5.269862 (-2.328953) | 1.889664 / 4.565676 (-2.676012) | 0.078654 / 0.424275 (-0.345621) | 0.005197 / 0.007607 (-0.002410) | 0.344380 / 0.226044 (0.118336) | 3.387509 / 2.268929 (1.118580) | 1.981590 / 55.444624 (-53.463034) | 1.643214 / 6.876477 (-5.233263) | 1.640435 / 2.142072 (-0.501638) | 0.802037 / 4.805227 (-4.003191) | 0.133016 / 6.500664 (-6.367648) | 0.040861 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026372 / 1.841788 (-0.815416) | 11.959931 / 8.074308 (3.885623) | 10.122523 / 10.191392 (-0.068869) | 0.144443 / 0.680424 (-0.535981) | 0.015629 / 0.534201 (-0.518572) | 0.304802 / 0.579283 (-0.274481) | 0.120538 / 0.434364 (-0.313826) | 0.343394 / 0.540337 (-0.196943) | 0.437544 / 1.386936 (-0.949392) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84832c07f614e5f51a762166b2fa9ac27e988173 \"CML watermark\")\n" ]
remove more script docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7104/reactions" }
PR_kwDODunzps54dAhE
{ "diff_url": "https://github.com/huggingface/datasets/pull/7104.diff", "html_url": "https://github.com/huggingface/datasets/pull/7104", "merged_at": "2024-08-15T10:18:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7104.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7104" }
2024-08-15T10:13:26Z
https://api.github.com/repos/huggingface/datasets/issues/7104/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7104/timeline
closed
false
7,104
null
2024-08-15T10:18:25Z
null
true
2,467,664,581
https://api.github.com/repos/huggingface/datasets/issues/7103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7103/events
[]
null
2024-08-16T09:18:29Z
[]
https://github.com/huggingface/datasets/pull/7103
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7103). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005255 / 0.011353 (-0.006098) | 0.003344 / 0.011008 (-0.007664) | 0.062062 / 0.038508 (0.023554) | 0.030154 / 0.023109 (0.007045) | 0.233728 / 0.275898 (-0.042170) | 0.258799 / 0.323480 (-0.064681) | 0.004105 / 0.007986 (-0.003880) | 0.002708 / 0.004328 (-0.001621) | 0.048689 / 0.004250 (0.044439) | 0.041864 / 0.037052 (0.004812) | 0.247221 / 0.258489 (-0.011268) | 0.274067 / 0.293841 (-0.019774) | 0.029108 / 0.128546 (-0.099439) | 0.011867 / 0.075646 (-0.063779) | 0.203181 / 0.419271 (-0.216090) | 0.035162 / 0.043533 (-0.008371) | 0.239723 / 0.255139 (-0.015416) | 0.256679 / 0.283200 (-0.026521) | 0.018362 / 0.141683 (-0.123321) | 1.139974 / 1.452155 (-0.312181) | 1.193946 / 1.492716 (-0.298770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.135477 / 0.018006 (0.117471) | 0.298500 / 0.000490 (0.298011) | 0.000225 / 0.000200 (0.000025) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018743 / 0.037411 (-0.018668) | 0.062999 / 0.014526 (0.048474) | 0.073466 / 0.176557 (-0.103090) | 0.119227 / 0.737135 (-0.617908) | 0.074338 / 0.296338 (-0.222000) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280747 / 0.215209 (0.065538) | 2.750660 / 2.077655 (0.673006) | 1.461004 / 1.504120 (-0.043116) | 1.348439 / 1.541195 (-0.192756) | 1.365209 / 1.468490 (-0.103281) | 0.718416 / 4.584777 (-3.866361) | 2.333568 / 3.745712 (-1.412144) | 2.854639 / 5.269862 (-2.415223) | 1.821144 / 4.565676 (-2.744532) | 0.077234 / 0.424275 (-0.347041) | 0.005111 / 0.007607 (-0.002497) | 0.330749 / 0.226044 (0.104705) | 3.277189 / 2.268929 (1.008260) | 1.825886 / 55.444624 (-53.618739) | 1.515078 / 6.876477 (-5.361399) | 1.527288 / 2.142072 (-0.614785) | 0.786922 / 4.805227 (-4.018305) | 0.131539 / 6.500664 (-6.369125) | 0.042365 / 0.075469 (-0.033104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961809 / 1.841788 (-0.879979) | 11.184540 / 8.074308 (3.110232) | 9.473338 / 10.191392 (-0.718054) | 0.138460 / 0.680424 (-0.541964) | 0.014588 / 0.534201 (-0.519613) | 0.301503 / 0.579283 (-0.277780) | 0.261092 / 0.434364 (-0.173271) | 0.336480 / 0.540337 (-0.203857) | 0.427665 / 1.386936 (-0.959271) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005517 / 0.011353 (-0.005836) | 0.003417 / 0.011008 (-0.007591) | 0.049338 / 0.038508 (0.010830) | 0.033411 / 0.023109 (0.010302) | 0.264328 / 0.275898 (-0.011570) | 0.286750 / 0.323480 (-0.036730) | 0.004299 / 0.007986 (-0.003686) | 0.002506 / 0.004328 (-0.001823) | 0.049511 / 0.004250 (0.045260) | 0.041471 / 0.037052 (0.004418) | 0.276732 / 0.258489 (0.018243) | 0.311908 / 0.293841 (0.018067) | 0.031683 / 0.128546 (-0.096863) | 0.011700 / 0.075646 (-0.063946) | 0.060084 / 0.419271 (-0.359188) | 0.037757 / 0.043533 (-0.005776) | 0.265342 / 0.255139 (0.010203) | 0.287782 / 0.283200 (0.004583) | 0.018692 / 0.141683 (-0.122990) | 1.163462 / 1.452155 (-0.288692) | 1.219236 / 1.492716 (-0.273481) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094102 / 0.018006 (0.076096) | 0.303976 / 0.000490 (0.303487) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023252 / 0.037411 (-0.014160) | 0.076986 / 0.014526 (0.062461) | 0.088831 / 0.176557 (-0.087726) | 0.128661 / 0.737135 (-0.608475) | 0.089082 / 0.296338 (-0.207256) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297428 / 0.215209 (0.082218) | 2.951568 / 2.077655 (0.873913) | 1.597627 / 1.504120 (0.093508) | 1.466556 / 1.541195 (-0.074639) | 1.455522 / 1.468490 (-0.012968) | 0.723576 / 4.584777 (-3.861201) | 0.951113 / 3.745712 (-2.794599) | 2.889671 / 5.269862 (-2.380190) | 1.877330 / 4.565676 (-2.688347) | 0.079124 / 0.424275 (-0.345151) | 0.005146 / 0.007607 (-0.002461) | 0.344063 / 0.226044 (0.118018) | 3.432190 / 2.268929 (1.163261) | 1.927049 / 55.444624 (-53.517576) | 1.638552 / 6.876477 (-5.237924) | 1.647791 / 2.142072 (-0.494282) | 0.800526 / 4.805227 (-4.004701) | 0.131858 / 6.500664 (-6.368806) | 0.040852 / 0.075469 (-0.034618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025536 / 1.841788 (-0.816252) | 11.798302 / 8.074308 (3.723994) | 10.012051 / 10.191392 (-0.179341) | 0.137701 / 0.680424 (-0.542723) | 0.015151 / 0.534201 (-0.519050) | 0.298972 / 0.579283 (-0.280311) | 0.123816 / 0.434364 (-0.310548) | 0.337292 / 0.540337 (-0.203046) | 0.432729 / 1.386936 (-0.954207) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bececdac927160b5c7e883736d7cc79d5699ad0a \"CML watermark\")\n" ]
Fix args of feature docstrings
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7103/reactions" }
PR_kwDODunzps54clrp
{ "diff_url": "https://github.com/huggingface/datasets/pull/7103.diff", "html_url": "https://github.com/huggingface/datasets/pull/7103", "merged_at": "2024-08-15T10:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/7103.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7103" }
2024-08-15T08:46:08Z
https://api.github.com/repos/huggingface/datasets/issues/7103/comments
Fix Args section of feature docstrings. Currently, some args do not appear in the docs because they are not properly parsed due to the lack of their type (between parentheses).
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7103/timeline
closed
false
7,103
null
2024-08-15T10:33:30Z
null
true
2,466,893,106
https://api.github.com/repos/huggingface/datasets/issues/7102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7102/events
[]
null
2024-08-15T16:17:31Z
[]
https://github.com/huggingface/datasets/issues/7102
NONE
null
null
null
[ "Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow and parquet about the same. However, I was unable to reproduce a drastically slower iteration speed after shuffling in any case when using the revised script -- pasting below:\r\n\r\n```python\r\nimport time\r\nfrom datasets import load_dataset, Dataset, IterableDataset\r\nfrom pathlib import Path\r\nimport torch\r\nimport pandas as pd\r\nimport pickle\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\n\r\ndef generate_random_example():\r\n return {\r\n 'inputs': torch.randn(128).tolist(),\r\n 'indices': torch.randint(0, 10000, (2, 20000)).tolist(),\r\n 'values': torch.randn(20000).tolist(),\r\n }\r\n\r\n\r\ndef generate_shard_data(examples_per_shard: int = 512):\r\n return [generate_random_example() for _ in range(examples_per_shard)]\r\n\r\n\r\ndef save_shard_as_arrow(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a Hugging Face Dataset\r\n dataset = Dataset.from_dict({\r\n 'inputs': [example['inputs'] for example in shard_data],\r\n 'indices': [example['indices'] for example in shard_data],\r\n 'values': [example['values'] for example in shard_data],\r\n })\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}\"\r\n\r\n # Save the dataset to disk using the Arrow format\r\n dataset.save_to_disk(str(shard_write_path))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_parquet(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a pandas DataFrame for easy conversion to Parquet\r\n df = pd.DataFrame(shard_data)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.parquet\"\r\n\r\n # Convert DataFrame to PyArrow Table for Parquet saving\r\n table = pa.Table.from_pandas(df)\r\n\r\n # Save the table as a Parquet file\r\n pq.write_table(table, shard_write_path)\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_binary(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.bin\"\r\n\r\n # Save each example as a serialized binary object using pickle\r\n with open(shard_write_path, 'wb') as f:\r\n for example in shard_data:\r\n f.write(pickle.dumps(example))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef generate_split_shards(save_dir, filetype=\"parquet\", num_shards: int = 16, examples_per_shard: int = 512):\r\n shard_filepaths = []\r\n for shard_idx in range(num_shards):\r\n if filetype == \"parquet\":\r\n shard_filepaths.append(save_shard_as_parquet(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"binary\":\r\n shard_filepaths.append(save_shard_as_binary(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"arrow\":\r\n shard_filepaths.append(save_shard_as_arrow(shard_idx, save_dir, examples_per_shard))\r\n else:\r\n raise ValueError(f\"Unsupported filetype: {filetype}. 
Choose either 'parquet' or 'binary'.\")\r\n return shard_filepaths\r\n\r\n\r\ndef _binary_dataset_generator(files):\r\n for filepath in files:\r\n with open(filepath, 'rb') as f:\r\n while True:\r\n try:\r\n example = pickle.load(f)\r\n yield example\r\n except EOFError:\r\n break\r\n\r\n\r\ndef load_binary_dataset(shard_filepaths):\r\n return IterableDataset.from_generator(\r\n _binary_dataset_generator, gen_kwargs={\"files\": shard_filepaths},\r\n )\r\n\r\n\r\ndef load_parquet_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n return load_dataset(\r\n \"parquet\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_arrow_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n shard_filepaths = [f + \"/data-00000-of-00001.arrow\" for f in shard_filepaths]\r\n return load_dataset(\r\n \"arrow\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_dataset_wrapper(filetype: str, shard_filepaths: list[str]):\r\n if filetype == \"parquet\":\r\n return load_parquet_dataset(shard_filepaths)\r\n if filetype == \"binary\":\r\n return load_binary_dataset(shard_filepaths)\r\n if filetype == \"arrow\":\r\n return load_arrow_dataset(shard_filepaths)\r\n else:\r\n raise ValueError(\"Unsupported filetype\")\r\n\r\n\r\n# Example usage:\r\nsplit = \"train\"\r\nsplit_save_dir = \"/tmp/random_split\"\r\n\r\nfiletype = \"binary\" # or \"parquet\", or \"arrow\"\r\nnum_shards = 16\r\n\r\nshard_filepaths = generate_split_shards(split_save_dir, filetype=filetype, num_shards=num_shards)\r\ndataset = load_dataset_wrapper(filetype=filetype, shard_filepaths=shard_filepaths)\r\n\r\ndataset = dataset.shuffle(buffer_size=100, seed=42)\r\n\r\nstart_time = time.time()\r\nfor count, item in enumerate(dataset):\r\n if count > 0 and count % 100 == 0:\r\n elapsed_time = time.time() - start_time\r\n iterations_per_second = count / elapsed_time\r\n print(f\"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second\")\r\n```", "update: I was able to reproduce the issue you described -- but ONLY if I do \r\n\r\n```\r\nrandom_dataset = random_dataset.with_format(\"numpy\")\r\n```\r\n\r\nIf I do this, I see similar numbers as what you reported. If I do not use numpy format, parquet and arrow are about 17 iterations per second regardless of whether or not we shuffle. Using binary, (again no numpy format tried with this yet), still shows the fastest speeds on average (shuffle and no shuffle) of about 850 it/sec.\r\n\r\nI suspect some issues with arrow and numpy being optimized for sequential reads, and shuffling cuases issuses... hmm" ]
Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7102/reactions" }
I_kwDODunzps6TCc0y
null
2024-08-14T21:44:44Z
https://api.github.com/repos/huggingface/datasets/issues/7102/comments
### Describe the bug When I load a dataset from a number of arrow files, as in: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) ``` I'm able to get fast iteration speeds when iterating over the dataset without shuffling. When I shuffle the dataset, the iteration speed is reduced by ~1000x. It's very possible the way I'm loading dataset shards is not appropriate; if so please advise! Thanks for the help ### Steps to reproduce the bug Here's full code to reproduce the issue: - Generate a random dataset - Create shards of data independently using Dataset.save_to_disk() - The below will generate 16 shards (arrow files), of 512 examples each ``` import time from pathlib import Path from multiprocessing import Pool, cpu_count import torch from datasets import Dataset, load_dataset split = "train" split_save_dir = "/tmp/random_split" def generate_random_example(): return { 'inputs': torch.randn(128).tolist(), 'indices': torch.randint(0, 10000, (2, 20000)).tolist(), 'values': torch.randn(20000).tolist(), } def generate_shard_dataset(examples_per_shard: int = 512): dataset_dict = { 'inputs': [], 'indices': [], 'values': [] } for _ in range(examples_per_shard): example = generate_random_example() dataset_dict['inputs'].append(example['inputs']) dataset_dict['indices'].append(example['indices']) dataset_dict['values'].append(example['values']) return Dataset.from_dict(dataset_dict) def save_shard(shard_idx, save_dir, examples_per_shard): shard_dataset = generate_shard_dataset(examples_per_shard) shard_write_path = Path(save_dir) / f"shard_{shard_idx}" shard_dataset.save_to_disk(shard_write_path) return str(Path(shard_write_path) / "data-00000-of-00001.arrow") def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512): with Pool(cpu_count()) as pool: args = [(m, save_dir, examples_per_shard) for m in range(num_shards)] shard_filepaths = pool.starmap(save_shard, args) return shard_filepaths shard_filepaths = generate_split_shards(split_save_dir) ``` Load the dataset as IterableDataset: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) random_dataset = random_dataset.with_format("numpy") ``` Observe the iterations/second when iterating over the dataset directly, and applying shuffling before iterating: Without shuffling, this gives ~1500 iterations/second ``` start_time = time.time() for count, item in enumerate(random_dataset): if count > 0 and count % 100 == 0: elapsed_time = time.time() - start_time iterations_per_second = count / elapsed_time print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second") ``` ``` Processed 100 items at an average of 705.74 iterations/second Processed 200 items at an average of 1169.68 iterations/second Processed 300 items at an average of 1497.97 iterations/second Processed 400 items at an average of 1739.62 iterations/second Processed 500 items at an average of 1931.11 iterations/second` ``` When shuffling, this gives ~3 iterations/second: ``` random_dataset = random_dataset.shuffle(buffer_size=100,seed=42) start_time = time.time() for count, item in enumerate(random_dataset): if count > 0 and count % 100 == 0: elapsed_time = time.time() - start_time iterations_per_second = count / elapsed_time print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second") ``` ``` Processed 100 items at an average of 3.75 iterations/second Processed 200 items at 
an average of 3.93 iterations/second ``` ### Expected behavior Iterations per second should be barely affected by shuffling, especially with a small buffer size ### Environment info Datasets version: 2.21.0 Python 3.10 Ubuntu 22.04
{ "avatar_url": "https://avatars.githubusercontent.com/u/13192126?v=4", "events_url": "https://api.github.com/users/lajd/events{/privacy}", "followers_url": "https://api.github.com/users/lajd/followers", "following_url": "https://api.github.com/users/lajd/following{/other_user}", "gists_url": "https://api.github.com/users/lajd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lajd", "id": 13192126, "login": "lajd", "node_id": "MDQ6VXNlcjEzMTkyMTI2", "organizations_url": "https://api.github.com/users/lajd/orgs", "received_events_url": "https://api.github.com/users/lajd/received_events", "repos_url": "https://api.github.com/users/lajd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lajd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lajd/subscriptions", "type": "User", "url": "https://api.github.com/users/lajd" }
https://api.github.com/repos/huggingface/datasets/issues/7102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7102/timeline
open
false
7,102
null
null
null
false
2,466,510,783
https://api.github.com/repos/huggingface/datasets/issues/7101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7101/events
[]
null
2024-08-18T10:33:38Z
[]
https://github.com/huggingface/datasets/issues/7101
NONE
null
null
null
[ "Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would be to just turn the `parquet` into a `WebDataset`, although I'd still need the Dataset Viewer config limit increasing. In other cases using the same format may not be possible.\r\n\r\nRelevant code: \r\n- [HubDatasetModuleFactoryWithoutScript](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/load.py#L964)\r\n- [get_data_patterns](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/data_files.py#L415)" ]
`load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7101/reactions" }
I_kwDODunzps6TA_e_
null
2024-08-14T18:12:25Z
https://api.github.com/repos/huggingface/datasets/issues/7101/comments
Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets: ```yaml configs: - config_name: dataception data_files: - path: dataception.parquet split: train default: true - config_name: dataset_5423 data_files: - path: datasets/5423.tar split: train ... - config_name: dataset_721736 data_files: - path: datasets/721736.tar split: train ``` The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`. While testing `load_dataset` I encountered the following error: ```python >>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691") Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "datasets\load.py", line 2145, in load_dataset builder_instance.download_and_prepare( File "datasets\builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "datasets\builder.py", line 1100, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f)) ^^^^^^^^^^^^^^^^^ File "pyarrow\parquet\core.py", line 2325, in read_schema file = ParquetFile( ^^^^^^^^^^^^ File "pyarrow\parquet\core.py", line 318, in __init__ self.reader.open( File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` The correct file is downloaded, however the incorrect builder type is detected; `parquet` due to other content of the repository. It would appear that the config needs to be taken into account. Note that I have removed the additional configs from the repository because of this issue and there is a limit of 3000 configs anyway so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky" }
https://api.github.com/repos/huggingface/datasets/issues/7101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7101/timeline
open
false
7,101
null
null
null
false
2,465,529,414
https://api.github.com/repos/huggingface/datasets/issues/7100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7100/events
[]
null
2024-08-14T11:01:51Z
[]
https://github.com/huggingface/datasets/issues/7100
NONE
null
null
null
[]
IterableDataset: cannot resolve features from list of numpy arrays
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7100/reactions" }
I_kwDODunzps6S9P5G
null
2024-08-14T11:01:51Z
https://api.github.com/repos/huggingface/datasets/issues/7100/comments
### Describe the bug When resolving features of an `IterableDataset`, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised. ``` Traceback (most recent call last): File "test.py", line 6 iter_ds = iter_ds._resolve_features() File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features features = _infer_features_from_batch(self.with_format(None)._head()) File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch pa_table = pa.Table.from_pydict(batch) File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 344, in pyarrow.lib.array File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values ``` ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np # create a list of numpy arrays iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]}) iter_ds = iter_ds._resolve_features() # errors here ``` ### Expected behavior features can be successfully resolved ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
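For readers hitting the same error, a minimal workaround sketch (not a fix for the underlying inference bug), assuming the goal is simply to obtain resolvable features: keep the mapped column as nested Python lists so the first samples can be converted by `pa.Table.from_pydict`.

```python
from datasets import Dataset
import numpy as np

# Workaround sketch: emit nested lists instead of a list of 2-D ndarrays,
# so feature inference on the first samples does not hit the
# "Can only convert 1-dimensional array values" error.
iter_ds = (
    Dataset.from_dict({"a": [[[1, 2, 3], [1, 2, 3]]]})
    .to_iterable_dataset()
    .map(lambda x: {"a": np.asarray(x["a"]).tolist()})
)
iter_ds = iter_ds._resolve_features()
print(iter_ds.features)
```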
{ "avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4", "events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}", "followers_url": "https://api.github.com/users/VeryLazyBoy/followers", "following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}", "gists_url": "https://api.github.com/users/VeryLazyBoy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VeryLazyBoy", "id": 18899212, "login": "VeryLazyBoy", "node_id": "MDQ6VXNlcjE4ODk5MjEy", "organizations_url": "https://api.github.com/users/VeryLazyBoy/orgs", "received_events_url": "https://api.github.com/users/VeryLazyBoy/received_events", "repos_url": "https://api.github.com/users/VeryLazyBoy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VeryLazyBoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VeryLazyBoy/subscriptions", "type": "User", "url": "https://api.github.com/users/VeryLazyBoy" }
https://api.github.com/repos/huggingface/datasets/issues/7100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7100/timeline
open
false
7,100
null
null
null
false
2,465,221,827
https://api.github.com/repos/huggingface/datasets/issues/7099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7099/events
[]
null
2024-08-14T08:45:17Z
[]
https://github.com/huggingface/datasets/pull/7099
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7099). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005649 / 0.011353 (-0.005704) | 0.003918 / 0.011008 (-0.007091) | 0.064333 / 0.038508 (0.025825) | 0.031909 / 0.023109 (0.008800) | 0.249020 / 0.275898 (-0.026878) | 0.273563 / 0.323480 (-0.049917) | 0.004184 / 0.007986 (-0.003802) | 0.002809 / 0.004328 (-0.001519) | 0.049066 / 0.004250 (0.044816) | 0.043324 / 0.037052 (0.006272) | 0.257889 / 0.258489 (-0.000600) | 0.285410 / 0.293841 (-0.008431) | 0.030681 / 0.128546 (-0.097865) | 0.012389 / 0.075646 (-0.063258) | 0.206172 / 0.419271 (-0.213100) | 0.036500 / 0.043533 (-0.007032) | 0.253674 / 0.255139 (-0.001465) | 0.272086 / 0.283200 (-0.011114) | 0.019558 / 0.141683 (-0.122125) | 1.149501 / 1.452155 (-0.302653) | 1.198036 / 1.492716 (-0.294680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.139977 / 0.018006 (0.121971) | 0.301149 / 0.000490 (0.300659) | 0.000253 / 0.000200 (0.000053) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019137 / 0.037411 (-0.018274) | 0.062616 / 0.014526 (0.048090) | 0.075965 / 0.176557 (-0.100591) | 0.120976 / 0.737135 (-0.616159) | 0.076384 / 0.296338 (-0.219954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283801 / 0.215209 (0.068592) | 2.794074 / 2.077655 (0.716419) | 1.475633 / 1.504120 (-0.028487) | 1.336270 / 1.541195 (-0.204925) | 1.376159 / 1.468490 (-0.092331) | 0.718768 / 4.584777 (-3.866009) | 2.375970 / 3.745712 (-1.369742) | 2.969121 / 5.269862 (-2.300741) | 1.900236 / 4.565676 (-2.665440) | 0.082463 / 0.424275 (-0.341812) | 0.005159 / 0.007607 (-0.002448) | 0.329057 / 0.226044 (0.103012) | 3.250535 / 2.268929 (0.981607) | 1.846415 / 55.444624 (-53.598210) | 1.496622 / 6.876477 (-5.379855) | 1.538125 / 2.142072 (-0.603947) | 0.806127 / 4.805227 (-3.999101) | 0.135272 / 6.500664 (-6.365392) | 0.042668 / 0.075469 (-0.032801) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983035 / 1.841788 (-0.858753) | 11.725835 / 8.074308 (3.651527) | 9.962818 / 10.191392 (-0.228574) | 0.131928 / 0.680424 (-0.548496) | 0.015784 / 0.534201 (-0.518417) | 0.301640 / 0.579283 (-0.277643) | 0.266251 / 0.434364 (-0.168113) | 0.339723 / 0.540337 (-0.200614) | 0.443384 / 1.386936 (-0.943552) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006301 / 0.011353 (-0.005052) | 0.004346 / 0.011008 (-0.006662) | 0.051406 / 0.038508 (0.012898) | 0.032263 / 0.023109 (0.009154) | 0.273715 / 0.275898 (-0.002183) | 0.300982 / 0.323480 (-0.022498) | 0.004533 / 0.007986 (-0.003452) | 0.002911 / 0.004328 (-0.001418) | 0.050464 / 0.004250 (0.046214) | 0.041131 / 0.037052 (0.004078) | 0.289958 / 0.258489 (0.031469) | 0.328632 / 0.293841 (0.034791) | 0.033545 / 0.128546 (-0.095001) | 0.013145 / 0.075646 (-0.062501) | 0.062241 / 0.419271 (-0.357031) | 0.035095 / 0.043533 (-0.008438) | 0.273303 / 0.255139 (0.018164) | 0.293652 / 0.283200 (0.010452) | 0.019980 / 0.141683 (-0.121703) | 1.155432 / 1.452155 (-0.296722) | 1.211409 / 1.492716 (-0.281307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094885 / 0.018006 (0.076879) | 0.307423 / 0.000490 (0.306933) | 0.000254 / 0.000200 (0.000054) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023462 / 0.037411 (-0.013949) | 0.081980 / 0.014526 (0.067454) | 0.089890 / 0.176557 (-0.086666) | 0.131058 / 0.737135 (-0.606078) | 0.091873 / 0.296338 (-0.204465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298522 / 0.215209 (0.083313) | 2.981771 / 2.077655 (0.904116) | 1.632515 / 1.504120 (0.128395) | 1.502885 / 1.541195 (-0.038310) | 1.496868 / 1.468490 (0.028377) | 0.750145 / 4.584777 (-3.834632) | 0.988853 / 3.745712 (-2.756859) | 3.029162 / 5.269862 (-2.240700) | 1.952304 / 4.565676 (-2.613373) | 0.082418 / 0.424275 (-0.341857) | 0.005724 / 0.007607 (-0.001883) | 0.356914 / 0.226044 (0.130870) | 3.523804 / 2.268929 (1.254875) | 1.983254 / 55.444624 (-53.461370) | 1.673135 / 6.876477 (-5.203342) | 1.716639 / 2.142072 (-0.425433) | 0.821568 / 4.805227 (-3.983659) | 0.136113 / 6.500664 (-6.364551) | 0.041593 / 0.075469 (-0.033876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.044670 / 1.841788 (-0.797118) | 12.739375 / 8.074308 (4.665066) | 10.263619 / 10.191392 (0.072227) | 0.132811 / 0.680424 (-0.547613) | 0.015491 / 0.534201 (-0.518710) | 0.305545 / 0.579283 (-0.273738) | 0.129226 / 0.434364 (-0.305138) | 0.345532 / 0.540337 (-0.194805) | 0.460406 / 1.386936 (-0.926530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ebec2691fb1e40145429f63375cef3f46d3011ab \"CML watermark\")\n" ]
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7099/reactions" }
PR_kwDODunzps54U7s4
{ "diff_url": "https://github.com/huggingface/datasets/pull/7099.diff", "html_url": "https://github.com/huggingface/datasets/pull/7099", "merged_at": "2024-08-14T08:39:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7099.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7099" }
2024-08-14T08:31:17Z
https://api.github.com/repos/huggingface/datasets/issues/7099/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7099/timeline
closed
false
7,099
null
2024-08-14T08:39:25Z
null
true
2,465,016,562
https://api.github.com/repos/huggingface/datasets/issues/7098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7098/events
[]
null
2024-08-14T06:41:07Z
[]
https://github.com/huggingface/datasets/pull/7098
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7098). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Release: 2.21.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7098/reactions" }
PR_kwDODunzps54UPMS
{ "diff_url": "https://github.com/huggingface/datasets/pull/7098.diff", "html_url": "https://github.com/huggingface/datasets/pull/7098", "merged_at": "2024-08-14T06:41:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/7098.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7098" }
2024-08-14T06:35:13Z
https://api.github.com/repos/huggingface/datasets/issues/7098/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7098/timeline
closed
false
7,098
null
2024-08-14T06:41:06Z
null
true
2,458,455,489
https://api.github.com/repos/huggingface/datasets/issues/7097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7097/events
[]
null
2024-08-09T18:26:37Z
[]
https://github.com/huggingface/datasets/issues/7097
NONE
null
null
null
[]
Some of DownloadConfig's properties are always being overridden in load.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions" }
I_kwDODunzps6SiQ3B
null
2024-08-09T18:26:37Z
https://api.github.com/repos/huggingface/datasets/issues/7097/comments
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because previously extracted data will just be ignored the next time the dataset is loaded. See this image below: ![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c) ### Steps to reproduce the bug 1. Have a local dataset that contains archived files (zip, tar.gz, etc.) 2. Build a dataset loading script to download and extract these files 3. Run the load_dataset function with a DownloadConfig that specifically sets `force_extract` to False 4. The extraction process will start regardless of whether the archives were extracted previously ### Expected behavior The extraction process should not run when the archives were previously extracted and `force_extract` is set to False. ### Environment info datasets==2.20.0 python3.9
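For context, a hedged sketch of the call pattern this report is about, i.e. how a caller expresses the `force_extract=False` intent. The dataset path is a placeholder; per this issue, `dataset_module_factory` currently overrides these two flags, so the settings are not honored until the behavior is changed.

```python
from datasets import load_dataset, DownloadConfig

# Illustrative call only (placeholder script path): the caller asks for
# extraction without re-extracting archives that were unpacked before.
download_config = DownloadConfig(
    extract_compressed_file=True,  # extract archives on download
    force_extract=False,           # reuse a previous extraction when present
)
ds = load_dataset("path/to/local_loading_script.py", download_config=download_config)
```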
{ "avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4", "events_url": "https://api.github.com/users/ductai199x/events{/privacy}", "followers_url": "https://api.github.com/users/ductai199x/followers", "following_url": "https://api.github.com/users/ductai199x/following{/other_user}", "gists_url": "https://api.github.com/users/ductai199x/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ductai199x", "id": 29772899, "login": "ductai199x", "node_id": "MDQ6VXNlcjI5NzcyODk5", "organizations_url": "https://api.github.com/users/ductai199x/orgs", "received_events_url": "https://api.github.com/users/ductai199x/received_events", "repos_url": "https://api.github.com/users/ductai199x/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ductai199x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ductai199x/subscriptions", "type": "User", "url": "https://api.github.com/users/ductai199x" }
https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7097/timeline
open
false
7,097
null
null
null
false
2,456,929,173
https://api.github.com/repos/huggingface/datasets/issues/7096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7096/events
[]
null
2024-08-15T17:25:26Z
[]
https://github.com/huggingface/datasets/pull/7096
CONTRIBUTOR
null
false
null
[ "Hi @albertvillanova, is this PR looking okay to you? Anything else you'd like to see?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7096). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005278 / 0.011353 (-0.006075) | 0.003536 / 0.011008 (-0.007472) | 0.062604 / 0.038508 (0.024096) | 0.030704 / 0.023109 (0.007595) | 0.242178 / 0.275898 (-0.033720) | 0.264335 / 0.323480 (-0.059145) | 0.004118 / 0.007986 (-0.003868) | 0.002789 / 0.004328 (-0.001539) | 0.048813 / 0.004250 (0.044563) | 0.041787 / 0.037052 (0.004735) | 0.252369 / 0.258489 (-0.006120) | 0.280981 / 0.293841 (-0.012859) | 0.029646 / 0.128546 (-0.098900) | 0.012093 / 0.075646 (-0.063553) | 0.203036 / 0.419271 (-0.216235) | 0.035814 / 0.043533 (-0.007719) | 0.248929 / 0.255139 (-0.006210) | 0.266568 / 0.283200 (-0.016632) | 0.018761 / 0.141683 (-0.122922) | 1.188443 / 1.452155 (-0.263712) | 1.219324 / 1.492716 (-0.273392) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095256 / 0.018006 (0.077250) | 0.301069 / 0.000490 (0.300579) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018541 / 0.037411 (-0.018870) | 0.067333 / 0.014526 (0.052807) | 0.075483 / 0.176557 (-0.101073) | 0.121301 / 0.737135 (-0.615834) | 0.076924 / 0.296338 (-0.219414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch 
numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284722 / 0.215209 (0.069513) | 2.817656 / 2.077655 (0.740001) | 1.483827 / 1.504120 (-0.020293) | 1.363072 / 1.541195 (-0.178123) | 1.380472 / 1.468490 (-0.088018) | 0.739543 / 4.584777 (-3.845234) | 2.390699 / 3.745712 (-1.355013) | 2.980347 / 5.269862 (-2.289515) | 1.897881 / 4.565676 (-2.667795) | 0.078827 / 0.424275 (-0.345448) | 0.005193 / 0.007607 (-0.002414) | 0.342739 / 0.226044 (0.116695) | 3.370871 / 2.268929 (1.101942) | 1.846475 / 55.444624 (-53.598150) | 1.577860 / 6.876477 (-5.298617) | 1.628606 / 2.142072 (-0.513466) | 0.815686 / 4.805227 (-3.989541) | 0.134985 / 6.500664 (-6.365679) | 0.042330 / 0.075469 (-0.033139) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962530 / 1.841788 (-0.879258) | 11.271449 / 8.074308 (3.197141) | 9.615452 / 10.191392 (-0.575940) | 0.140322 / 0.680424 (-0.540101) | 0.014057 / 0.534201 (-0.520144) | 0.306212 / 0.579283 (-0.273071) | 0.266758 / 0.434364 (-0.167606) | 0.341229 / 0.540337 (-0.199108) | 0.428974 / 1.386936 (-0.957962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005980 / 0.011353 (-0.005373) | 0.003831 / 0.011008 (-0.007177) | 0.049837 / 0.038508 (0.011329) | 0.030602 / 0.023109 (0.007493) | 0.274107 / 0.275898 (-0.001791) | 0.298175 / 0.323480 (-0.025305) | 0.004492 / 0.007986 (-0.003494) | 0.002840 / 0.004328 (-0.001489) | 0.048984 / 0.004250 (0.044733) | 0.040001 / 0.037052 (0.002949) | 0.286130 / 0.258489 (0.027641) | 0.321546 / 0.293841 (0.027705) | 0.032675 / 0.128546 (-0.095871) | 0.012222 / 0.075646 (-0.063424) | 0.060321 / 0.419271 (-0.358950) | 0.034456 / 0.043533 (-0.009077) | 0.272408 / 0.255139 (0.017269) | 0.294714 / 0.283200 (0.011515) | 0.018568 / 0.141683 (-0.123115) | 1.169826 / 1.452155 (-0.282329) | 1.223906 / 1.492716 (-0.268810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093734 / 0.018006 (0.075727) | 0.305915 / 0.000490 (0.305425) | 0.000210 / 0.000200 (0.000010) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022389 / 0.037411 (-0.015022) | 0.076640 / 0.014526 (0.062114) | 0.088660 / 0.176557 (-0.087897) | 0.128998 / 0.737135 (-0.608137) | 0.090346 / 0.296338 (-0.205992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291642 / 0.215209 (0.076433) | 2.897270 / 2.077655 (0.819615) | 1.571564 / 1.504120 (0.067444) | 1.449533 / 1.541195 (-0.091662) | 1.458744 / 1.468490 (-0.009746) | 0.725465 / 4.584777 (-3.859312) | 0.962597 / 3.745712 (-2.783115) | 3.035056 / 5.269862 (-2.234806) | 1.902542 / 4.565676 (-2.663135) | 0.079869 / 0.424275 (-0.344407) | 0.005172 / 0.007607 (-0.002435) | 0.352099 / 0.226044 (0.126055) | 3.469058 / 2.268929 (1.200129) | 1.953402 / 55.444624 (-53.491222) | 1.647182 / 6.876477 (-5.229294) | 1.686473 / 2.142072 (-0.455599) | 0.797218 / 4.805227 (-4.008009) | 0.134161 / 6.500664 (-6.366503) | 0.041563 / 0.075469 (-0.033906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.045855 / 1.841788 (-0.795933) | 12.271390 / 8.074308 (4.197082) | 10.186889 / 10.191392 (-0.004503) | 0.141141 / 0.680424 (-0.539283) | 0.015482 / 0.534201 (-0.518719) | 0.305699 / 0.579283 (-0.273584) | 0.128539 / 0.434364 (-0.305825) | 0.348492 / 0.540337 (-0.191845) | 0.444867 / 1.386936 (-0.942069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#93dc73501298ccb1d31d854ba20fcf2c3b2fea8b \"CML watermark\")\n" ]
Automatically create `cache_dir` from `cache_file_name`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7096/reactions" }
PR_kwDODunzps535Xkr
{ "diff_url": "https://github.com/huggingface/datasets/pull/7096.diff", "html_url": "https://github.com/huggingface/datasets/pull/7096", "merged_at": "2024-08-15T10:13:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/7096.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7096" }
2024-08-09T01:34:06Z
https://api.github.com/repos/huggingface/datasets/issues/7096/comments
You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/train.map"` ```python import datasets cache_file_name="./cache/train.map" dataset = datasets.load_dataset("ylecun/mnist") dataset["train"].map(lambda x: x, cache_file_name=cache_file_name) ``` ``` FileNotFoundError: [Errno 2] No such file or directory: '/.../cache/tmp48r61siw' ``` The directory is simple enough to create automatically, and I was expecting that this would be the case. cc: @albertvillanova @lhoestq
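On releases without this change, a minimal workaround sketch is to create the parent directory yourself before calling `.map()`; the paths mirror the example above.

```python
import os
import datasets

# Workaround sketch for versions without automatic cache_dir creation:
# make sure the parent directory of cache_file_name exists before .map().
cache_file_name = "./cache/train.map"
os.makedirs(os.path.dirname(cache_file_name), exist_ok=True)

dataset = datasets.load_dataset("ylecun/mnist")
dataset["train"].map(lambda x: x, cache_file_name=cache_file_name)
```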
{ "avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4", "events_url": "https://api.github.com/users/ringohoffman/events{/privacy}", "followers_url": "https://api.github.com/users/ringohoffman/followers", "following_url": "https://api.github.com/users/ringohoffman/following{/other_user}", "gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ringohoffman", "id": 27844407, "login": "ringohoffman", "node_id": "MDQ6VXNlcjI3ODQ0NDA3", "organizations_url": "https://api.github.com/users/ringohoffman/orgs", "received_events_url": "https://api.github.com/users/ringohoffman/received_events", "repos_url": "https://api.github.com/users/ringohoffman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions", "type": "User", "url": "https://api.github.com/users/ringohoffman" }
https://api.github.com/repos/huggingface/datasets/issues/7096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7096/timeline
closed
false
7,096
null
2024-08-15T10:13:22Z
null
true
2,454,418,130
https://api.github.com/repos/huggingface/datasets/issues/7094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7094/events
[]
null
2024-08-07T21:53:06Z
[]
https://github.com/huggingface/datasets/pull/7094
NONE
null
false
null
[]
Add Arabic Docs to Datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7094/reactions" }
PR_kwDODunzps53w2b7
{ "diff_url": "https://github.com/huggingface/datasets/pull/7094.diff", "html_url": "https://github.com/huggingface/datasets/pull/7094", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7094.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7094" }
2024-08-07T21:53:06Z
https://api.github.com/repos/huggingface/datasets/issues/7094/comments
Translate the docs into Arabic. Issue number: #7093 [Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) [English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx) @stevhliu
{ "avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4", "events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}", "followers_url": "https://api.github.com/users/AhmedAlmaghz/followers", "following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}", "gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AhmedAlmaghz", "id": 53489256, "login": "AhmedAlmaghz", "node_id": "MDQ6VXNlcjUzNDg5MjU2", "organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs", "received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events", "repos_url": "https://api.github.com/users/AhmedAlmaghz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions", "type": "User", "url": "https://api.github.com/users/AhmedAlmaghz" }
https://api.github.com/repos/huggingface/datasets/issues/7094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7094/timeline
open
false
7,094
null
null
null
true
2,454,413,074
https://api.github.com/repos/huggingface/datasets/issues/7093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7093/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-07T21:48:05Z
[]
https://github.com/huggingface/datasets/issues/7093
NONE
null
null
null
[]
Add Arabic Docs to datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7093/reactions" }
I_kwDODunzps6SS18S
null
2024-08-07T21:48:05Z
https://api.github.com/repos/huggingface/datasets/issues/7093/comments
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
{ "avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4", "events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}", "followers_url": "https://api.github.com/users/AhmedAlmaghz/followers", "following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}", "gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AhmedAlmaghz", "id": 53489256, "login": "AhmedAlmaghz", "node_id": "MDQ6VXNlcjUzNDg5MjU2", "organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs", "received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events", "repos_url": "https://api.github.com/users/AhmedAlmaghz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions", "type": "User", "url": "https://api.github.com/users/AhmedAlmaghz" }
https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7093/timeline
open
false
7,093
null
null
null
false
2,451,393,658
https://api.github.com/repos/huggingface/datasets/issues/7092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7092/events
[]
null
2024-08-08T16:35:01Z
[]
https://github.com/huggingface/datasets/issues/7092
NONE
null
null
null
[ "I’ll take a look", "Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined something akin to option 2 in `Expected behavior` I'm assuming that's what you'd like to see done. Is that right?\r\n\r\nIn the meantime, here's a solution for option 1:\r\n\r\n```python\r\nimport datasets\r\n\r\ndata_dir = './data/annotated/api'\r\n\r\nfeatures = datasets.Features({'id': datasets.Value(dtype='string'),\r\n 'name': datasets.Value(dtype='string'),\r\n 'author': datasets.Value(dtype='string'),\r\n 'description': datasets.Value(dtype='string'),\r\n 'tags': datasets.Sequence(feature=datasets.Value(dtype='string'), length=-1),\r\n 'likes': datasets.Value(dtype='int64'),\r\n 'viewed': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'date': datasets.Value(dtype='string'),\r\n 'time_retrieved': datasets.Value(dtype='string'),\r\n 'image_code': datasets.Value(dtype='string'),\r\n 'image_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'common_code': datasets.Value(dtype='string'),\r\n 'sound_code': datasets.Value(dtype='string'),\r\n 'sound_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_a_code': datasets.Value(dtype='string'),\r\n 'buffer_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_b_code': datasets.Value(dtype='string'),\r\n 'buffer_b_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_c_code': datasets.Value(dtype='string'),\r\n 'buffer_c_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 
'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_d_code': datasets.Value(dtype='string'),\r\n 'buffer_d_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'cube_a_code': datasets.Value(dtype='string'),\r\n 'cube_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'thumbnail': datasets.Value(dtype='string'),\r\n 'access': datasets.Value(dtype='string'),\r\n 'license': datasets.Value(dtype='string'),\r\n 'functions': datasets.Sequence(feature=datasets.Sequence(feature=datasets.Value(dtype='int64'), length=-1), length=-1),\r\n 'test': datasets.Value(dtype='string')})\r\n\r\ndatasets.load_dataset('json', data_dir=data_dir, features=features)\r\n```", "As pointed out by @hvaara, you can define explicit features so that you avoid the `datasets` library having to infer them (from the first few samples).\r\n\r\nNote that the feature inference is done from the first few samples of JSON-Lines on purpose, so that the entire data does not need to be parsed twice (it would be inefficient for very large datasets).", "I understand this. But can there be a solution that doesn't require the end user to write this shema by hand(in my case there is some fields that contain a nested structure)? \r\n\r\nMaybe offer an option to infer the shema automatically before loading the dataset. Or perhaps - trigger such a method when this error arises? \r\n\r\nIs this \"first few files\" heuristics accessible via kwargs perhaps. Maybe an error that says \r\n`Cloud not cast some structure into feature shema, consider increasing shema_files to a large number or all\".\r\n\r\nThere might be efficient implementations to solve this problem for larger datasets. ", "@Vipitis raised a good point on the HF Discord regarding the use of a [dataset script](https://huggingface.co/docs/datasets/en/dataset_script) to provide the schema during initialization. 
Using this approach requires setting `trust_remote_code=True`, which is not allowed in certain evaluation frameworks.\r\n\r\nFor cases where using a dataset script is acceptable, would it be helpful to add functionality to the library (not necessarily in `load_dataset`) that can automatically discover the feature definitions and output them, so you don't have to manually define them?\r\n\r\nAlternatively, for situations where features need to be known at load-time without using a dataset script, another option could be loading the dataset schema from a file format that doesn't require `trust_remote_code=True`." ]
load_dataset with multiple jsonlines files interprets datastructure too early
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7092/reactions" }
I_kwDODunzps6SHUx6
null
2024-08-06T17:42:55Z
https://api.github.com/repos/huggingface/datasets/issues/7092/comments
### Describe the bug likely related to #6460 using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure. ```python from datasets import load_dataset ds = load_dataset("json", data_dir="./data/annotated/api") ``` you get a long error trace, where in the middle it says something like ```cs TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null ``` toy example: (on request) ### Expected behavior Some suggestions 1. give a better error message to the user 2. consider all files before deciding on a data structure for a given column. 3. if you encounter a new structure, and can't cast that to null, replace the null-hypothesis. (maybe something for pyarrow) as a workaround I have lazily implemented the following (essentially step 2) ```python import os import jsonlines import datasets api_files = os.listdir("./data/annotated/api") api_files = [f"./data/annotated/api/{f}" for f in api_files] api_file_contents = [] for f in api_files: with jsonlines.open(f) as reader: for obj in reader: api_file_contents.append(obj) ds = datasets.Dataset.from_list(api_file_contents) ``` this works fine for my usecase, but is potentially slower and less memory efficient for really large datasets (where this is unlikely to happen in the first place). ### Environment info - `datasets` version: 2.20.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/23384483?v=4", "events_url": "https://api.github.com/users/Vipitis/events{/privacy}", "followers_url": "https://api.github.com/users/Vipitis/followers", "following_url": "https://api.github.com/users/Vipitis/following{/other_user}", "gists_url": "https://api.github.com/users/Vipitis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Vipitis", "id": 23384483, "login": "Vipitis", "node_id": "MDQ6VXNlcjIzMzg0NDgz", "organizations_url": "https://api.github.com/users/Vipitis/orgs", "received_events_url": "https://api.github.com/users/Vipitis/received_events", "repos_url": "https://api.github.com/users/Vipitis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Vipitis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vipitis/subscriptions", "type": "User", "url": "https://api.github.com/users/Vipitis" }
https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7092/timeline
open
false
7,092
null
null
null
false
2,449,699,490
https://api.github.com/repos/huggingface/datasets/issues/7090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7090/events
[]
null
2024-08-06T00:35:05Z
[]
https://github.com/huggingface/datasets/issues/7090
NONE
null
null
null
[]
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7090/reactions" }
I_kwDODunzps6SA3Ki
null
2024-08-06T00:35:05Z
https://api.github.com/repos/huggingface/datasets/issues/7090/comments
### Describe the bug Tests should use the same Python path they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11. Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: 'python' ``` ### Steps to reproduce the bug regular test run using PyTest ### Expected behavior n/a ### Environment info FreeBSD 14.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yurivict", "id": 271906, "login": "yurivict", "node_id": "MDQ6VXNlcjI3MTkwNg==", "organizations_url": "https://api.github.com/users/yurivict/orgs", "received_events_url": "https://api.github.com/users/yurivict/received_events", "repos_url": "https://api.github.com/users/yurivict/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "type": "User", "url": "https://api.github.com/users/yurivict" }
https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7090/timeline
open
false
7,090
null
null
null
false
2,449,479,500
https://api.github.com/repos/huggingface/datasets/issues/7089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7089/events
[]
null
2024-08-05T21:05:11Z
[]
https://github.com/huggingface/datasets/issues/7089
NONE
null
null
null
[]
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7089/reactions" }
I_kwDODunzps6SABdM
null
2024-08-05T21:05:11Z
https://api.github.com/repos/huggingface/datasets/issues/7089/comments
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yurivict", "id": 271906, "login": "yurivict", "node_id": "MDQ6VXNlcjI3MTkwNg==", "organizations_url": "https://api.github.com/users/yurivict/orgs", "received_events_url": "https://api.github.com/users/yurivict/received_events", "repos_url": "https://api.github.com/users/yurivict/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "type": "User", "url": "https://api.github.com/users/yurivict" }
https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7089/timeline
open
false
7,089
null
null
null
false
2,447,383,940
https://api.github.com/repos/huggingface/datasets/issues/7088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7088/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-05T00:45:50Z
[]
https://github.com/huggingface/datasets/issues/7088
NONE
null
null
null
[]
Disable warning when using with_format format on tensors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7088/reactions" }
I_kwDODunzps6R4B2E
null
2024-08-05T00:45:50Z
https://api.github.com/repos/huggingface/datasets/issues/7088/comments
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloader""" TRAIN = "train" TEST = "test" VAL = "validation" class ImageNetDataLoader(DataLoader): """Create an ImageNetDataloader""" _preprocess_transform = transforms.Compose( [ transforms.Resize(256), transforms.CenterCrop(224), ] ) def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN): dataset = ( load_dataset( "imagenet-1k", split=split, trust_remote_code=True, streaming=True, ) .with_format("torch") .map(self._preprocess) ) super().__init__(dataset=dataset, batch_size=batch_size) def _preprocess(self, data): if data["image"].shape[0] < 3: data["image"] = data["image"].repeat(3, 1, 1) data["image"] = self._preprocess_transform(data["image"].float()) return data if __name__ == "__main__": dataloader = ImageNetDataLoader(batch_size=2) for batch in dataloader: print(batch["image"]) break ``` This will trigger an user warning : ```bash datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` ### Motivation This happens because the the way the formatted tensor is returned in `TorchFormatter._tensorize`. This function handle values of different types, according to some tests it seems that possible value types are `int`, `numpy.ndarray` and `torch.Tensor`. In particular this warning is triggered when the value type is `torch.Tensor`, because is not the suggested Pytorch way of doing it: - https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor - https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary. ### Your contribution A solution that I found to be working is to change the current way of doing it: ```python return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` To: ```python if (isinstance(value, torch.Tensor)): tensor = value.clone().detach() if self.torch_tensor_kwargs.get('requires_grad', False): tensor.requires_grad_() return tensor else: return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42048782?v=4", "events_url": "https://api.github.com/users/Haislich/events{/privacy}", "followers_url": "https://api.github.com/users/Haislich/followers", "following_url": "https://api.github.com/users/Haislich/following{/other_user}", "gists_url": "https://api.github.com/users/Haislich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Haislich", "id": 42048782, "login": "Haislich", "node_id": "MDQ6VXNlcjQyMDQ4Nzgy", "organizations_url": "https://api.github.com/users/Haislich/orgs", "received_events_url": "https://api.github.com/users/Haislich/received_events", "repos_url": "https://api.github.com/users/Haislich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Haislich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Haislich/subscriptions", "type": "User", "url": "https://api.github.com/users/Haislich" }
https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7088/timeline
open
false
7,088
null
null
null
false
2,447,158,643
https://api.github.com/repos/huggingface/datasets/issues/7087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7087/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-06T06:59:23Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/7087
NONE
completed
null
null
[ "Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/huggingface.js/issues/834\r\n\r\n", "As explained in the reported issue above, the problem only appears in the autocomplete field: you can still enter the `lut` language directly in the markdown editor window." ]
Unable to create dataset card for Lushootseed language
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7087/reactions" }
I_kwDODunzps6R3K1z
null
2024-08-04T14:27:04Z
https://api.github.com/repos/huggingface/datasets/issues/7087/comments
### Feature request While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options? ### Motivation I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents. ### Your contribution I can submit a pull request
{ "avatar_url": "https://avatars.githubusercontent.com/u/134876525?v=4", "events_url": "https://api.github.com/users/vaishnavsudarshan/events{/privacy}", "followers_url": "https://api.github.com/users/vaishnavsudarshan/followers", "following_url": "https://api.github.com/users/vaishnavsudarshan/following{/other_user}", "gists_url": "https://api.github.com/users/vaishnavsudarshan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vaishnavsudarshan", "id": 134876525, "login": "vaishnavsudarshan", "node_id": "U_kgDOCAoNbQ", "organizations_url": "https://api.github.com/users/vaishnavsudarshan/orgs", "received_events_url": "https://api.github.com/users/vaishnavsudarshan/received_events", "repos_url": "https://api.github.com/users/vaishnavsudarshan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vaishnavsudarshan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vaishnavsudarshan/subscriptions", "type": "User", "url": "https://api.github.com/users/vaishnavsudarshan" }
https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7087/timeline
closed
false
7,087
null
2024-08-06T06:59:22Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,445,516,829
https://api.github.com/repos/huggingface/datasets/issues/7086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7086/events
[]
null
2024-08-02T18:12:23Z
[]
https://github.com/huggingface/datasets/issues/7086
NONE
null
null
null
[]
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7086/reactions" }
I_kwDODunzps6Rw6Ad
null
2024-08-02T18:12:23Z
https://api.github.com/repos/huggingface/datasets/issues/7086/comments
### Describe the bug I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this. ### Steps to reproduce the bug 1. Be Me 2. Run `load_dataset("TAUR-Lab/MuSR")` 3. Hit rate limit error 4. Dataset is in .cache/huggingface/datasets 5. ??? ### Expected behavior We should not run into API rate limits if we have cached the dataset ### Environment info datasets 2.16.0 python 3.10.4
{ "avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4", "events_url": "https://api.github.com/users/tginart/events{/privacy}", "followers_url": "https://api.github.com/users/tginart/followers", "following_url": "https://api.github.com/users/tginart/following{/other_user}", "gists_url": "https://api.github.com/users/tginart/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tginart", "id": 11379648, "login": "tginart", "node_id": "MDQ6VXNlcjExMzc5NjQ4", "organizations_url": "https://api.github.com/users/tginart/orgs", "received_events_url": "https://api.github.com/users/tginart/received_events", "repos_url": "https://api.github.com/users/tginart/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tginart/subscriptions", "type": "User", "url": "https://api.github.com/users/tginart" }
https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7086/timeline
open
false
7,086
null
null
null
false
2,440,008,618
https://api.github.com/repos/huggingface/datasets/issues/7085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7085/events
[]
null
2024-08-14T16:04:24Z
[]
https://github.com/huggingface/datasets/issues/7085
NONE
null
null
null
[ "@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our tests failing in case it helps you figure out where this is coming from. I found it hard to reason through the resumable IterableDataset code though, so hopefully you have more intuition to implement a proper fix.", "I believe these lines in `TypedExamplesIterable` are responsible for stopping the re-iteration of `IterableDataset`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ebec2691fb1e40145429f63375cef3f46d3011ab/src/datasets/iterable_dataset.py#L1616-L1619\r\n\r\nIn contrast to other `Iterable`s, there is no check on whether `self._state_dict` is None or not. This particular case stands out and seems less straightforward to comprehend why. @lhoestq could you please assist us with this? Your help is much appreciated." ]
[Regression] IterableDataset is broken on 2.20.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions" }
I_kwDODunzps6Rb5Oq
null
2024-07-31T13:01:59Z
https://api.github.com/repos/huggingface/datasets/issues/7085/comments
### Describe the bug In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't. ### Steps to reproduce the bug Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`) ``` #!/bin/bash # List of dataset versions to test versions=("2.17.0" "2.20.0") # Loop through each version for version in "${versions[@]}"; do # Install the specific version of the datasets library pip3 install -q datasets=="$version" 2>/dev/null # Run the Python script python3 - <<EOF from datasets import IterableDataset from datasets.features.features import Features, Value def test_gen(): yield from [{"foo": i} for i in range(10)] features = Features([("foo", Value("int64"))]) d = IterableDataset.from_generator(test_gen, features=features) mapped = d.map(lambda row: {"foo": row["foo"] * 2}) column = mapped.select_columns(["foo"]) print("Version $version - Iterate Once:", list(column)) print("Version $version - Iterate Twice:", list(column)) EOF done ``` The output looks like this: ``` Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Twice: [] ``` ### Expected behavior The expected behavior is it version 2.20.0 should behave the same as 2.17.0. ### Environment info `datasets==2.20.0` on any platform.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AjayP13", "id": 5404177, "login": "AjayP13", "node_id": "MDQ6VXNlcjU0MDQxNzc=", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "repos_url": "https://api.github.com/users/AjayP13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "type": "User", "url": "https://api.github.com/users/AjayP13" }
https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7085/timeline
open
false
7,085
null
null
null
false
2,439,519,534
https://api.github.com/repos/huggingface/datasets/issues/7084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7084/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-31T09:05:58Z
[]
https://github.com/huggingface/datasets/issues/7084
NONE
null
null
null
[]
More easily support streaming local files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions" }
I_kwDODunzps6RaB0u
null
2024-07-31T09:03:15Z
https://api.github.com/repos/huggingface/datasets/issues/7084/comments
### Feature request Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files. ### Motivation I have downloaded FineWeb-edu locally and currently trying to stream the dataset from the local files. I have both the raw parquet files using `hugginface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`. Streaming the files locally does not work well for both file types for two different reasons. **Arrow files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue. **Parquet files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other". ### Your contribution I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added. IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083
{ "avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4", "events_url": "https://api.github.com/users/fschlatt/events{/privacy}", "followers_url": "https://api.github.com/users/fschlatt/followers", "following_url": "https://api.github.com/users/fschlatt/following{/other_user}", "gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fschlatt", "id": 23191892, "login": "fschlatt", "node_id": "MDQ6VXNlcjIzMTkxODky", "organizations_url": "https://api.github.com/users/fschlatt/orgs", "received_events_url": "https://api.github.com/users/fschlatt/received_events", "repos_url": "https://api.github.com/users/fschlatt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions", "type": "User", "url": "https://api.github.com/users/fschlatt" }
https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7084/timeline
open
false
7,084
null
null
null
false
2,439,518,466
https://api.github.com/repos/huggingface/datasets/issues/7083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7083/events
[]
null
2024-08-15T14:08:04Z
[]
https://github.com/huggingface/datasets/pull/7083
NONE
null
false
null
[]
fix streaming from arrow files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7083/reactions" }
PR_kwDODunzps5292hC
{ "diff_url": "https://github.com/huggingface/datasets/pull/7083.diff", "html_url": "https://github.com/huggingface/datasets/pull/7083", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7083.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7083" }
2024-07-31T09:02:42Z
https://api.github.com/repos/huggingface/datasets/issues/7083/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4", "events_url": "https://api.github.com/users/fschlatt/events{/privacy}", "followers_url": "https://api.github.com/users/fschlatt/followers", "following_url": "https://api.github.com/users/fschlatt/following{/other_user}", "gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fschlatt", "id": 23191892, "login": "fschlatt", "node_id": "MDQ6VXNlcjIzMTkxODky", "organizations_url": "https://api.github.com/users/fschlatt/orgs", "received_events_url": "https://api.github.com/users/fschlatt/received_events", "repos_url": "https://api.github.com/users/fschlatt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions", "type": "User", "url": "https://api.github.com/users/fschlatt" }
https://api.github.com/repos/huggingface/datasets/issues/7083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7083/timeline
open
false
7,083
null
null
null
true
2,437,354,975
https://api.github.com/repos/huggingface/datasets/issues/7082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7082/events
[]
null
2024-08-08T08:29:55Z
[]
https://github.com/huggingface/datasets/pull/7082
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005280 / 0.011353 (-0.006073) | 0.003726 / 0.011008 (-0.007282) | 0.067028 / 0.038508 (0.028520) | 0.030833 / 0.023109 (0.007724) | 0.256888 / 0.275898 (-0.019010) | 0.271252 / 0.323480 (-0.052228) | 0.003149 / 0.007986 (-0.004836) | 0.004031 / 0.004328 (-0.000298) | 0.051178 / 0.004250 (0.046927) | 0.042751 / 0.037052 (0.005699) | 0.268385 / 0.258489 (0.009896) | 0.295547 / 0.293841 (0.001706) | 0.030218 / 0.128546 (-0.098328) | 0.012033 / 0.075646 (-0.063613) | 0.206389 / 0.419271 (-0.212882) | 0.036227 / 0.043533 (-0.007306) | 0.258778 / 0.255139 (0.003639) | 0.276027 / 0.283200 (-0.007172) | 0.020309 / 0.141683 (-0.121374) | 1.109689 / 1.452155 (-0.342466) | 1.139979 / 1.492716 (-0.352738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093615 / 0.018006 (0.075609) | 0.301279 / 0.000490 (0.300789) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018697 / 0.037411 (-0.018715) | 0.062627 / 0.014526 (0.048101) | 0.075119 / 0.176557 (-0.101438) | 0.119960 / 0.737135 (-0.617175) | 0.074606 / 0.296338 (-0.221732) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281042 / 0.215209 (0.065833) | 2.746232 / 2.077655 (0.668578) | 1.422351 / 1.504120 (-0.081769) | 1.290087 / 1.541195 (-0.251108) | 1.321067 / 1.468490 (-0.147423) | 0.727514 / 4.584777 (-3.857263) | 2.407086 / 3.745712 (-1.338626) | 2.914191 / 5.269862 (-2.355670) | 1.872206 / 4.565676 (-2.693471) | 0.079538 / 0.424275 (-0.344738) | 0.005250 / 0.007607 (-0.002357) | 0.335536 / 0.226044 (0.109491) | 3.324922 / 2.268929 (1.055994) | 1.790688 / 55.444624 (-53.653936) | 1.475738 / 6.876477 (-5.400739) | 1.492465 / 2.142072 (-0.649607) | 0.812342 / 4.805227 (-3.992885) | 0.135036 / 6.500664 (-6.365628) | 0.041484 / 0.075469 (-0.033985) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948425 / 1.841788 (-0.893363) | 11.321564 / 8.074308 (3.247256) | 9.635661 / 10.191392 (-0.555731) | 0.142793 / 0.680424 (-0.537631) | 0.014988 / 0.534201 (-0.519213) | 0.300209 / 0.579283 (-0.279074) | 0.262303 / 0.434364 (-0.172061) | 0.337927 / 0.540337 (-0.202411) | 0.427962 / 1.386936 (-0.958975) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005664 / 0.011353 (-0.005689) | 0.003946 / 0.011008 (-0.007062) | 0.050034 / 0.038508 (0.011526) | 0.031652 / 0.023109 (0.008543) | 0.281139 / 0.275898 (0.005241) | 0.299203 / 0.323480 (-0.024277) | 0.004332 / 0.007986 (-0.003653) | 0.002769 / 0.004328 (-0.001560) | 0.048336 / 0.004250 (0.044086) | 0.039744 / 0.037052 (0.002692) | 0.289344 / 0.258489 (0.030855) | 0.320470 / 0.293841 (0.026629) | 0.032372 / 0.128546 (-0.096174) | 0.012090 / 0.075646 (-0.063557) | 0.060838 / 0.419271 (-0.358433) | 0.034227 / 0.043533 (-0.009306) | 0.275007 / 0.255139 (0.019868) | 0.293455 / 0.283200 (0.010256) | 0.017203 / 0.141683 (-0.124480) | 1.141577 / 1.452155 (-0.310578) | 1.176761 / 1.492716 (-0.315955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093562 / 0.018006 (0.075556) | 0.302695 / 0.000490 (0.302205) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022638 / 0.037411 (-0.014774) | 0.078788 / 0.014526 (0.064262) | 0.088474 / 0.176557 (-0.088082) | 0.128421 / 0.737135 (-0.608714) | 0.089297 / 0.296338 (-0.207041) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302669 / 0.215209 (0.087459) | 2.963855 / 2.077655 (0.886200) | 1.600053 / 1.504120 (0.095933) | 1.461456 / 1.541195 (-0.079739) | 1.469877 / 1.468490 (0.001387) | 0.725752 / 4.584777 (-3.859025) | 0.968970 / 3.745712 (-2.776742) | 2.910502 / 5.269862 (-2.359359) | 1.902762 / 4.565676 (-2.662914) | 0.079977 / 0.424275 (-0.344298) | 0.005582 / 0.007607 (-0.002025) | 0.351626 / 0.226044 (0.125581) | 3.520593 / 2.268929 (1.251664) | 1.968950 / 55.444624 (-53.475675) | 1.662190 / 6.876477 (-5.214286) | 1.677909 / 2.142072 (-0.464163) | 0.791541 / 4.805227 (-4.013687) | 0.134647 / 6.500664 (-6.366017) | 0.040687 / 0.075469 (-0.034782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.028885 / 1.841788 (-0.812903) | 11.928358 / 8.074308 (3.854050) | 10.199165 / 10.191392 (0.007773) | 0.142930 / 0.680424 (-0.537493) | 0.016479 / 0.534201 (-0.517722) | 0.302993 / 0.579283 (-0.276290) | 0.128878 / 0.434364 (-0.305486) | 0.342591 / 0.540337 (-0.197747) | 0.456735 / 1.386936 (-0.930201) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d298f5549893228c03e9e3a42727327cb83f3dff \"CML watermark\")\n" ]
Support HTTP authentication in non-streaming mode
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7082/reactions" }
PR_kwDODunzps522dTJ
{ "diff_url": "https://github.com/huggingface/datasets/pull/7082.diff", "html_url": "https://github.com/huggingface/datasets/pull/7082", "merged_at": "2024-08-08T08:24:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/7082.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7082" }
2024-07-30T09:25:49Z
https://api.github.com/repos/huggingface/datasets/issues/7082/comments
Support HTTP authentication in non-streaming mode, by supporting passing HTTP storage_options in non-streaming mode. - Note that currently, HTTP authentication is supported only in streaming mode. For example, this is necessary if a remote HTTP host requires authentication to download the data.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7082/timeline
closed
false
7,082
null
2024-08-08T08:24:06Z
null
true
2,437,059,657
https://api.github.com/repos/huggingface/datasets/issues/7081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7081/events
[]
null
2024-07-30T08:30:37Z
[]
https://github.com/huggingface/datasets/pull/7081
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005688) | 0.004130 / 0.011008 (-0.006878) | 0.064231 / 0.038508 (0.025723) | 0.030738 / 0.023109 (0.007628) | 0.251896 / 0.275898 (-0.024002) | 0.275182 / 0.323480 (-0.048298) | 0.003364 / 0.007986 (-0.004621) | 0.003569 / 0.004328 (-0.000759) | 0.049407 / 0.004250 (0.045157) | 0.048177 / 0.037052 (0.011124) | 0.253739 / 0.258489 (-0.004751) | 0.304087 / 0.293841 (0.010246) | 0.030457 / 0.128546 (-0.098089) | 0.012762 / 0.075646 (-0.062885) | 0.214312 / 0.419271 (-0.204959) | 0.036673 / 0.043533 (-0.006860) | 0.251838 / 0.255139 (-0.003301) | 0.274049 / 0.283200 (-0.009151) | 0.021133 / 0.141683 (-0.120550) | 1.143743 / 1.452155 (-0.308412) | 1.203681 / 1.492716 (-0.289036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094668 / 0.018006 (0.076662) | 0.300323 / 0.000490 (0.299833) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018565 / 0.037411 (-0.018846) | 0.066096 / 0.014526 (0.051570) | 0.075700 / 0.176557 (-0.100857) | 0.122185 / 0.737135 (-0.614950) | 0.077688 / 0.296338 (-0.218651) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288804 / 0.215209 (0.073595) | 2.838336 / 2.077655 (0.760681) | 1.530575 / 1.504120 (0.026455) | 1.406716 / 1.541195 (-0.134478) | 1.438885 / 1.468490 (-0.029605) | 0.744809 / 4.584777 (-3.839968) | 2.447992 / 3.745712 (-1.297721) | 3.126261 / 5.269862 (-2.143601) | 1.999687 / 4.565676 (-2.565990) | 0.081536 / 0.424275 (-0.342739) | 0.005827 / 0.007607 (-0.001780) | 0.346367 / 0.226044 (0.120323) | 3.373268 / 2.268929 (1.104339) | 1.890293 / 55.444624 (-53.554332) | 1.590384 / 6.876477 (-5.286093) | 1.652101 / 2.142072 (-0.489971) | 0.805888 / 4.805227 (-3.999339) | 0.137687 / 6.500664 (-6.362977) | 0.044536 / 0.075469 (-0.030933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.998393 / 1.841788 (-0.843395) | 12.392241 / 8.074308 (4.317933) | 10.055638 / 10.191392 (-0.135754) | 0.132347 / 0.680424 (-0.548077) | 0.014635 / 0.534201 (-0.519566) | 0.301939 / 0.579283 (-0.277344) | 0.266756 / 0.434364 (-0.167608) | 0.342730 / 0.540337 (-0.197608) | 0.435463 / 1.386936 (-0.951473) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006421 / 0.011353 (-0.004932) | 0.004494 / 0.011008 (-0.006514) | 0.051315 / 0.038508 (0.012806) | 0.035570 / 0.023109 (0.012460) | 0.271635 / 0.275898 (-0.004263) | 0.297082 / 0.323480 (-0.026398) | 0.004572 / 0.007986 (-0.003414) | 0.002886 / 0.004328 (-0.001443) | 0.049152 / 0.004250 (0.044902) | 0.043000 / 0.037052 (0.005948) | 0.281921 / 0.258489 (0.023432) | 0.321097 / 0.293841 (0.027256) | 0.033488 / 0.128546 (-0.095058) | 0.012835 / 0.075646 (-0.062811) | 0.061831 / 0.419271 (-0.357441) | 0.034674 / 0.043533 (-0.008858) | 0.272885 / 0.255139 (0.017746) | 0.292726 / 0.283200 (0.009527) | 0.019906 / 0.141683 (-0.121777) | 1.132234 / 1.452155 (-0.319920) | 1.155359 / 1.492716 (-0.337357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096943 / 0.018006 (0.078937) | 0.308980 / 0.000490 (0.308490) | 0.000225 / 0.000200 (0.000025) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023551 / 0.037411 (-0.013861) | 0.081682 / 0.014526 (0.067156) | 0.090987 / 0.176557 (-0.085569) | 0.132542 / 0.737135 (-0.604593) | 0.092844 / 0.296338 (-0.203494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304190 / 0.215209 (0.088981) | 2.958591 / 2.077655 (0.880936) | 1.610211 / 1.504120 (0.106091) | 1.488216 / 1.541195 (-0.052978) | 1.525429 / 1.468490 (0.056939) | 0.752811 / 4.584777 (-3.831966) | 0.967887 / 3.745712 (-2.777825) | 2.982760 / 5.269862 (-2.287102) | 1.996623 / 4.565676 (-2.569053) | 0.080783 / 0.424275 (-0.343492) | 0.005337 / 0.007607 (-0.002270) | 0.354996 / 0.226044 (0.128951) | 3.540788 / 2.268929 (1.271860) | 1.997445 / 55.444624 (-53.447179) | 1.682232 / 6.876477 (-5.194245) | 1.883198 / 2.142072 (-0.258875) | 0.814444 / 4.805227 (-3.990783) | 0.135798 / 6.500664 (-6.364867) | 0.041750 / 0.075469 (-0.033719) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048688 / 1.841788 (-0.793099) | 13.122809 / 8.074308 (5.048501) | 10.893354 / 10.191392 (0.701962) | 0.133710 / 0.680424 (-0.546713) | 0.016357 / 0.534201 (-0.517844) | 0.304364 / 0.579283 (-0.274919) | 0.126457 / 0.434364 (-0.307907) | 0.345747 / 0.540337 (-0.194591) | 0.441620 / 1.386936 (-0.945316) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#27ea8e8ead3e76bb07aa645f882945495d238ef3 \"CML watermark\")\n" ]
Set load_from_disk path type as PathLike
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7081/reactions" }
PR_kwDODunzps521cGm
{ "diff_url": "https://github.com/huggingface/datasets/pull/7081.diff", "html_url": "https://github.com/huggingface/datasets/pull/7081", "merged_at": "2024-07-30T08:21:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/7081.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7081" }
2024-07-30T07:00:38Z
https://api.github.com/repos/huggingface/datasets/issues/7081/comments
Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7081/timeline
closed
false
7,081
null
2024-07-30T08:21:50Z
null
true
2,434,275,664
https://api.github.com/repos/huggingface/datasets/issues/7080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7080/events
[]
null
2024-07-29T01:42:43Z
[]
https://github.com/huggingface/datasets/issues/7080
NONE
null
null
null
[]
Generating train split takes a long time
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions" }
I_kwDODunzps6RGBlQ
null
2024-07-29T01:42:43Z
https://api.github.com/repos/huggingface/datasets/issues/7080/comments
### Describe the bug Loading a simple webdataset takes ~45 minutes. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M") ``` ### Expected behavior The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4", "events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}", "followers_url": "https://api.github.com/users/alexanderswerdlow/followers", "following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}", "gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexanderswerdlow", "id": 35648800, "login": "alexanderswerdlow", "node_id": "MDQ6VXNlcjM1NjQ4ODAw", "organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs", "received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events", "repos_url": "https://api.github.com/users/alexanderswerdlow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions", "type": "User", "url": "https://api.github.com/users/alexanderswerdlow" }
https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7080/timeline
open
false
7,080
null
null
null
false
2,433,363,298
https://api.github.com/repos/huggingface/datasets/issues/7079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7079/events
[]
null
2024-07-27T20:06:44Z
[]
https://github.com/huggingface/datasets/issues/7079
NONE
completed
null
null
[ "same issue here. @albertvillanova @lhoestq ", "Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_reuter_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_wp_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_essay_reduced\r\n\r\nOddly enough, the system status looks good: https://status.huggingface.co/", "Hey how to download these datasets using git cloning?", "Also reported here\r\nhttps://github.com/huggingface/huggingface_hub/issues/2425", "I have been getting the same error for the past 8 hours as well", "Same error since yesterday, fails on any new dataset created", "Same here. I cannot download the HelpSteer2 dataset: https://huggingface.co/datasets/nvidia/HelpSteer2 which has been uploaded about a month ago", "> Hey how to download these datasets using git cloning?\n\nYou'll find a guide [here](https://huggingface.co/docs/hub/en/datasets-downloading) 👍🏻", "Same here for imdb dataset", "It also happens with this dataset: https://huggingface.co/datasets/ylacombe/jenny-tts-6h-tagged", "same here for all datsets in the sentence-tramsformers repo and related collections.\r\n\r\nsame issue with dataset that i recently uploaded on my repo.\r\nseems that the upload date of the datset is not relevat (getting this issue with both old datasets and newer ones)\r\n\r\nfor some reason, i was able to get the dataset by turning it private and accessing it with the id token (accessing it as public while providing the token doesn not work)..... but i can say if that is just a random coincidence.\r\n\r\nseems not much deterministic, for a specific dataset (sentence-transformer nq ) , that was \"down\" since some hours , worked for like 5-10 minutes, then stopped again\r\n\r\nnow even this dataset (that worked since some min ago, and that i'm in the middle of processing steps) stopped working: _https://huggingface.co/datasets/bobox/msmarco-bm25-EduScore/_\r\n\r\nas already pointed out, there are no updates on **_https://status.huggingface.co/_**\r\n\r\n\\n\r\n\\n\r\n\r\nan example of the whole error message:\r\n``` \r\nHfHubHTTPError \r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\r\n 2592 \r\n 2593 # Create a dataset builder\r\n-> 2594 builder_instance = load_dataset_builder(\r\n 2595 path=path,\r\n 2596 name=name,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\r\n 2264 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 2265 download_config.storage_options.update(storage_options)\r\n-> 2266 dataset_module = dataset_module_factory(\r\n 2267 path,\r\n 2268 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in 
dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1912 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1913 ) from None\r\n-> 1914 raise e1 from None\r\n 1915 else:\r\n 1916 raise FileNotFoundError(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1832 hf_api = HfApi(config.HF_ENDPOINT)\r\n 1833 try:\r\n-> 1834 dataset_info = hf_api.dataset_info(\r\n 1835 repo_id=path,\r\n 1836 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)\r\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n 113 \r\n--> 114 return fn(*args, **kwargs)\r\n 115 \r\n 116 return _inner_fn # type: ignore\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in dataset_info(self, repo_id, revision, timeout, files_metadata, token)\r\n 2362 \r\n 2363 r = get_session().get(path, headers=headers, timeout=timeout, params=params)\r\n-> 2364 hf_raise_for_status(r)\r\n 2365 data = r.json()\r\n 2366 return DatasetInfo(**data)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)\r\n 369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 370 # as well (request id and/or server error message)\r\n--> 371 raise HfHubHTTPError(str(e), response=response) from e\r\n 372 \r\n 373 \r\n\r\nHfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/bobox/xSum-processed (Request ID: Root=1-66a527f0-756cfbc35cc466f075382289;7d5dc06a-37e9-4c22-874d-92b0b1023276)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!\r\n``` ", "we're working on a fix !", "We fixed the issue, you can load datasets again, sorry for the inconvenience !", "I can confirm, it's working now. I can load the dataset, yay. Thank you @lhoestq ", "@lhoestq thank you so much! " ]
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions" }
I_kwDODunzps6RCi1i
null
2024-07-27T08:21:03Z
https://api.github.com/repos/huggingface/datasets/issues/7079/comments
### Describe the bug newly uploaded datasets, since yesterday, yields an error. old datasets, works fine. Seems like the datasets api server returns a 500 I'm getting the same error, when I invoke `load_dataset` with my dataset. Long discussion about it here, but I'm not sure anyone from huggingface have seen it. https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1 ### Steps to reproduce the bug this api url: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 respond with: ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Expected behavior return no error with newer datasets. With older datasets I can load the datasets fine. ### Environment info # Browser When I access the api in the browser: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Request headers ``` Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8 Accept-Encoding gzip, deflate, br, zstd Accept-Language en-US,en;q=0.5 Connection keep-alive Host huggingface.co Priority u=1 Sec-Fetch-Dest document Sec-Fetch-Mode navigate Sec-Fetch-Site cross-site Upgrade-Insecure-Requests 1 User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0 ``` ### Response headers ``` X-Firefox-Spdy h2 access-control-allow-origin https://huggingface.co access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range content-length 80 content-type application/json; charset=utf-8 cross-origin-opener-policy same-origin date Fri, 26 Jul 2024 19:09:45 GMT etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c" referrer-policy strict-origin-when-cross-origin vary Origin via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront) x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ== x-amz-cf-pop CPH50-C1 x-cache Error from cloudfront x-error-message Internal Error - We're working hard to fix this as soon as possible! x-powered-by huggingface-moon x-request-id Root=1-66a3f479-026417465ef42f49349fdca1 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye" }
https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7079/timeline
closed
false
7,079
null
2024-07-27T19:52:30Z
null
false