Schema of the dump (one record per GitHub issue or pull request):

| Field | Type | Observed values |
|---|---|---|
| url | string | length 61 |
| repository_url | string | 1 class |
| labels_url | string | length 75 |
| comments_url | string | length 70 |
| events_url | string | length 68 |
| html_url | string | length 49 to 51 |
| id | int64 | 996M to 2.3B |
| node_id | string | length 18 to 19 |
| number | int64 | 2.91k to 6.91k |
| title | string | length 1 to 290 |
| user | dict | |
| labels | list | length 0 to 4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0 to 4 |
| milestone | dict | |
| comments | sequence | length 0 to 30 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| active_lock_reason | null | |
| body | string | length 1 to 36.2k |
| reactions | dict | |
| timeline_url | string | length 70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
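
As a sanity check of the schema, here is a minimal sketch of loading and inspecting such a dump with the `datasets` library. The repo id below is a placeholder, since the dump itself does not name the dataset:

```python
from datasets import load_dataset

# "user/github-issues" is a placeholder repo id, not given in this dump.
ds = load_dataset("user/github-issues", split="train")

print(ds.features["title"])            # Value(dtype='string')
print(ds.features["is_pull_request"])  # Value(dtype='bool')
print(ds[0]["html_url"], ds[0]["state"])
```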

Issue #6906: irc_disentangle - Issue with splitting data
https://github.com/huggingface/datasets/issues/6906
State: open · Author: eor51355 (association: NONE) · Labels: none
Created: 2024-05-17T23:19:37 · Updated: 2024-05-17T23:19:37
Comments: none

### Describe the bug

I am trying to access your database through python using `datasets.load_dataset("irc_disentangle")` and I am getting this error message:

    ValueError: Instruction "train" corresponds to no data!

### Steps to reproduce the bug

    import datasets

    ds = datasets.load_dataset('irc_disentangle')
    ds

### Expected behavior

The data is supposed to load into `ds` and be accessible as such: `ds['train'][1050]`, `ds['train'][1055]`

### Environment info

I tried Python 3.12 and 3.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6906/timeline
null
null
null
null
false
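
A diagnostic sketch for this report, not a confirmed fix: ask the builder what it knows about its splits before indexing `train`. Note that `trust_remote_code=True` only exists (and is only needed for script-based datasets like this one) on `datasets` >= 2.16:

```python
import datasets

# Diagnostic only: inspect the builder's split metadata before loading.
builder = datasets.load_dataset_builder("irc_disentangle", trust_remote_code=True)
print(builder.info.splits)  # None or empty here would explain "corresponds to no data"
```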

Issue #6905: Extraction protocol for arrow files is not defined
https://github.com/huggingface/datasets/issues/6905
State: open · Author: radulescupetru (association: NONE) · Labels: none
Created: 2024-05-17T16:01:41 · Updated: 2024-05-17T16:01:41
Comments: none

### Describe the bug

Passing files with the `.arrow` extension into the `data_files` argument is very slow, at least when `streaming=True`.

### Steps to reproduce the bug

Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820). The method first looks at some base known extensions, where `arrow` is not defined, so it proceeds to determine the compression with the magic-number method, which is slow when dealing with a lot of files stored in S3. Looking at this predefined list, I don't see `arrow` in there either, so in the end it returns None:

```
MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = {
    bytes.fromhex("504B0304"): "zip",
    bytes.fromhex("504B0506"): "zip",  # empty archive
    bytes.fromhex("504B0708"): "zip",  # spanned archive
    bytes.fromhex("425A68"): "bz2",
    bytes.fromhex("1F8B"): "gzip",
    bytes.fromhex("FD377A585A00"): "xz",
    bytes.fromhex("04224D18"): "lz4",
    bytes.fromhex("28B52FFD"): "zstd",
}
```

### Expected behavior

My expectation is that `arrow` would be in the known lists, so it would return None without going through the magic-number method.

### Environment info

datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6905/timeline
null
null
null
null
false
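
A sketch of the short-circuit the reporter is asking for. The constant and helper names below are illustrative, not the actual `datasets` internals (the real list and fallback live in `datasets/utils/file_utils.py`):

```python
from typing import Optional

# Illustrative set; the point is that "arrow" is listed as uncompressed.
KNOWN_UNCOMPRESSED_EXTENSIONS = {"txt", "csv", "json", "jsonl", "parquet", "arrow"}

def get_extraction_protocol(urlpath: str) -> Optional[str]:
    """Return a compression protocol, or None for plain (uncompressed) files."""
    extension = urlpath.split(".")[-1].lower()
    if extension in KNOWN_UNCOMPRESSED_EXTENSIONS:
        return None  # short-circuit: no remote read of magic bytes needed
    return probe_magic_number(urlpath)  # hypothetical slow fallback

def probe_magic_number(urlpath: str) -> Optional[str]:
    # Stand-in for the real fallback that downloads the first bytes of the
    # file and matches them against MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL.
    return None

print(get_extraction_protocol("s3://bucket/data/shard-00000.arrow"))  # None, fast path
```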

Pull request #6904: Fix decoding multi part extension
https://github.com/huggingface/datasets/pull/6904
State: closed · Author: lhoestq (association: MEMBER) · Labels: none
Comments (3):
1. "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6904). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
2. "Taking the liberty to merge this for the viewer and a new dataset being released."
3. CI benchmark bot report comparing new vs. old timings (benchmark_array_xd, benchmark_getitem_100B, benchmark_indices_mapping, benchmark_iterating, benchmark_map_filter) under PyArrow 8.0.0 and PyArrow latest; full tables omitted.

Created: 2024-05-17T14:32:57 · Updated: 2024-05-17T14:52:56 · Closed: 2024-05-17T14:46:54

E.g. a field named `url.txt` should be treated as text. I also included a small fix to support `.npz` correctly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6904/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6904", "html_url": "https://github.com/huggingface/datasets/pull/6904", "diff_url": "https://github.com/huggingface/datasets/pull/6904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6904.patch", "merged_at": "2024-05-17T14:46:54" }
true
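
A toy sketch of the rule the PR title suggests, with illustrative names and mapping (this is not the actual patch): decoding is routed by the final suffix only, so a multi-part field name like `url.txt` lands on text decoding.

```python
from pathlib import PurePosixPath

# Illustrative only: route a column to a decoder based on its final suffix,
# whatever comes before the last dot.
def infer_decoding(field_name: str) -> str:
    suffix = PurePosixPath(field_name).suffix.lstrip(".").lower()
    return {"txt": "text", "json": "json", "npz": "numpy"}.get(suffix, "binary")

assert infer_decoding("url.txt") == "text"
assert infer_decoding("embeddings.npz") == "numpy"
```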

Issue #6903: Add the option of saving in parquet instead of arrow
https://github.com/huggingface/datasets/issues/6903
State: open · Author: arita37 (association: NONE) · Labels: enhancement
Comments (2):
1. "I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for. Let me know if I'm wrong."
2. "No, it does not save the metadata JSON. We would have to recode all the metadata JSON load/save with other custom functions. `save_to_disk` and load should have an option for Parquet instead of Arrow, since Arrow is never used for production (only Parquet). Thanks!"

Created: 2024-05-16T13:35:51 · Updated: 2024-05-17T03:40:04

### Feature request

In `dataset.save_to_disk('/path/to/save/dataset')`, add the option to save in Parquet format: `dataset.save_to_disk('/path/to/save/dataset', format="parquet")`.

### Motivation

Arrow is not used for production big data (only Parquet).

### Your contribution

I can do the testing!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6903/timeline
null
null
null
null
false
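
For context, a minimal sketch of the existing export path versus the requested one. The `format=` keyword is the feature being asked for here and does not exist today:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# Existing API: writes a single Parquet file, but not the dataset metadata
# (dataset_info.json / state.json) that save_to_disk produces.
ds.to_parquet("my_dataset.parquet")

# Requested (hypothetical) API from this issue; not implemented:
# ds.save_to_disk("/path/to/save/dataset", format="parquet")
```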

Pull request #6902: Make CLI convert_to_parquet not raise error if no rights to create script branch
https://github.com/huggingface/datasets/pull/6902
State: closed · Author: albertvillanova (association: MEMBER) · Labels: none
Comments (2):
1. "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6902). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
2. CI benchmark bot report comparing new vs. old timings under PyArrow 8.0.0 and PyArrow latest; full tables omitted.

Created: 2024-05-16T12:21:27 · Updated: 2024-05-16T12:57:02 · Closed: 2024-05-16T12:51:05

Make CLI convert_to_parquet not raise an error if there are no rights to create the "script" branch. Note that before this PR, the error was not critical, because it was raised at the end of the script, once all the rest of the steps had already been performed. Fix #6901. Related to: #6809.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6902/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6902", "html_url": "https://github.com/huggingface/datasets/pull/6902", "diff_url": "https://github.com/huggingface/datasets/pull/6902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6902.patch", "merged_at": "2024-05-16T12:51:04" }
true
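
A minimal sketch of the tolerant behaviour this PR describes, not the actual patch; `dataset_id` and `token` below are illustrative placeholders:

```python
from huggingface_hub import create_branch
from huggingface_hub.utils import HfHubHTTPError

dataset_id = "ORG/DATASET"  # placeholder repo id, as in the traceback of #6901
token = None                # or a token with write access to the target repo

# Best-effort branch creation: a 403 on a third-party repo warns, not aborts.
try:
    create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True)
except HfHubHTTPError as err:
    if err.response is not None and err.response.status_code == 403:
        print("No write access to create the 'script' branch; skipping this step.")
    else:
        raise
```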

Issue #6901: HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos
https://github.com/huggingface/datasets/issues/6901
State: closed · Author: albertvillanova (association: MEMBER) · Labels: bug · Assignee: albertvillanova
Comments: none

Created: 2024-05-16T11:40:22 · Updated: 2024-05-16T12:51:06 · Closed: 2024-05-16T12:51:06

CLI convert_to_parquet cannot create a "script" branch on 3rd party repos. It can only create it on repos where the user executing the script has write access. Otherwise, a 403 Forbidden HTTPError is raised:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/datasets-cli", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main
    service.run()
  File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run
    create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status
    raise HfHubHTTPError(message, response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696)

403 Forbidden: Forbidden: cannot write to script.
Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script.
If you are trying to create or update content, make sure you have a token with the `write` role.
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6901/timeline
null
completed
null
null
false

Issue #6900: [WebDataset] KeyError with user-defined `Features` when a field is missing in an example
https://github.com/huggingface/datasets/issues/6900
State: open · Author: lhoestq (association: MEMBER) · Labels: none
Comments: none

Created: 2024-05-15T17:48:34 · Updated: 2024-05-15T17:48:49

Reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1

```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
    example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6900/timeline
null
null
null
null
false
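
A sketch of the guard that would avoid this KeyError. The toy `example` and `binary_field_names` below are assumptions for illustration, not the library's actual fix:

```python
# Toy WebDataset-style sample: the "png" field is missing from this example.
example = {"__key__": "sample-0001", "jpg": b"\xff\xd8\xff"}
binary_field_names = ["jpg", "png"]  # hypothetical user-defined Features fields

for field_name in binary_field_names:
    if field_name in example:
        example[field_name] = {
            "path": example["__key__"] + "." + field_name,
            "bytes": example[field_name],
        }
    else:
        example[field_name] = None  # missing field becomes null instead of raising

print(example["png"])  # None
```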

Issue #6899: List of dictionary features get standardized
https://github.com/huggingface/datasets/issues/6899
State: open · Author: sohamparikh94 (association: NONE) · Labels: none
Comments: none

Created: 2024-05-15T14:11:35 · Updated: 2024-05-15T14:11:35

### Describe the bug

Hi, I'm trying to create a HF dataset from a list using `Dataset.from_list`. Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature. How can I keep the same set of keys as in the original list for each dictionary under a feature?

### Steps to reproduce the bug

```
from datasets import Dataset

# Define a function to generate a sample with "tools" feature
def generate_sample():
    # Generate random sample data
    sample_data = {"text": "Sample text", "feature_1": []}

    # Add feature_1 with random keys for this sample
    feature_1 = [{"key1": "value1"}, {"key2": "value2"}]  # Example feature_1 with random keys
    sample_data["feature_1"].extend(feature_1)
    return sample_data

# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]

# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```

```
{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}
```

### Expected behavior

```
{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}
```

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6899/timeline
null
null
null
null
false
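
One possible workaround, sketched under the assumption that downstream code can decode JSON (this is not a library feature): serialize the heterogeneous dicts to JSON strings so Arrow never unifies their keys.

```python
import json
from datasets import Dataset

samples = [
    {"text": "Sample text", "feature_1": [{"key1": "value1"}, {"key2": "value2"}]}
]

# Store each dict as a JSON string; Arrow sees a plain list of strings,
# so no keys are added or filled with None.
encoded = [
    {**s, "feature_1": [json.dumps(d) for d in s["feature_1"]]} for s in samples
]
dataset = Dataset.from_list(encoded)

decoded = [json.loads(d) for d in dataset[0]["feature_1"]]
assert decoded == samples[0]["feature_1"]  # original key sets preserved
```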

Pull request #6898: Fix YAML error in README files appearing on GitHub
https://github.com/huggingface/datasets/pull/6898
State: closed · Author: albertvillanova (association: MEMBER) · Labels: none
Comments (3):
1. "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6898). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
2. "After this PR, the README file looks like:" followed by a screenshot (https://github.com/huggingface/datasets/assets/8515462/1f665a06-98be-4dd7-ba7e-7cc025489503).
3. CI benchmark bot report comparing new vs. old timings under PyArrow 8.0.0 and PyArrow latest; full tables omitted.

Created: 2024-05-14T05:21:57 · Updated: 2024-05-16T14:36:57 · Closed: 2024-05-16T14:28:16

Fix YAML error in README files appearing on GitHub. See error message:

![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/7984cc4e-96ee-4e83-99a4-4c0c5791fa05)

Fix #6897.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6898/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6898", "html_url": "https://github.com/huggingface/datasets/pull/6898", "diff_url": "https://github.com/huggingface/datasets/pull/6898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6898.patch", "merged_at": "2024-05-16T14:28:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/6897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
https://api.github.com/repos/huggingface/datasets/issues/6897/events
https://github.com/huggingface/datasets/issues/6897
2,293,428,243
I_kwDODunzps6IsvAT
6,897
datasets template guide :: issue in documentation YAML
{ "login": "bghira", "id": 59658056, "node_id": "MDQ6VXNlcjU5NjU4MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bghira", "html_url": "https://github.com/bghira", "followers_url": "https://api.github.com/users/bghira/followers", "following_url": "https://api.github.com/users/bghira/following{/other_user}", "gists_url": "https://api.github.com/users/bghira/gists{/gist_id}", "starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bghira/subscriptions", "organizations_url": "https://api.github.com/users/bghira/orgs", "repos_url": "https://api.github.com/users/bghira/repos", "events_url": "https://api.github.com/users/bghira/events{/privacy}", "received_events_url": "https://api.github.com/users/bghira/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML error message at the top of the page: \r\n![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/28409eb4-99e7-4b24-8eaa-21a65a8f23b2)\r\n\r\nI am proposing a change to make the YAML error disappear.", "thanks albert! i looked at it for a while to figure it out. i think the `raw` view option is the correct way to look at it?" ]
2024-05-13T17:33:59
2024-05-16T14:28:17
2024-05-16T14:28:17
NONE
null
### Describe the bug There is a YAML error at the top of the page, and I don't think it's supposed to be there ### Steps to reproduce the bug 1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md) 2. Observe a big red error at the top 3. The rest of the document remains functional ### Expected behavior I think the YAML block should be displayed or ignored. ### Environment info N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6896/comments
https://api.github.com/repos/huggingface/datasets/issues/6896/events
https://github.com/huggingface/datasets/issues/6896
2,293,176,061
I_kwDODunzps6Irxb9
6,896
Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset
{ "login": "finiteautomata", "id": 167943, "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finiteautomata", "html_url": "https://github.com/finiteautomata", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "repos_url": "https://api.github.com/users/finiteautomata/repos", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-13T15:41:57
2024-05-13T15:44:48
null
NONE
null
### Describe the bug While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error: ```python --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) [<ipython-input-1-d6a3c721d3b8>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small") 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2150 2151 # Download and prepare data -> 2152 builder_instance.download_and_prepare( 2153 download_config=download_config, 2154 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 946 if num_proc is not None: 947 prepare_split_kwargs["num_proc"] = num_proc --> 948 self._download_and_prepare( 949 dl_manager=dl_manager, 950 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1059 1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> 1061 verify_splits(self.info.splits, split_dict) 1062 1063 # Update the info object with the splits. [/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_splits(expected_splits, recorded_splits) 98 ] 99 if len(bad_splits) > 0: --> 100 raise NonMatchingSplitsSizesError(str(bad_splits)) 101 logger.info("All the splits matched successfully.") 102 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}] ``` I think I had updated this dataset at some point, so this might be related to #6271. It works fine as late as `2.10.0`, but not from `2.13.0` onwards. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("pysentimiento/spanish-tweets-small") ``` You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg) ### Expected behavior Load the dataset without any error ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - PyArrow version: 14.0.2 - Pandas version: 2.0.3
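A possible interim workaround, not part of the original report: recent `datasets` releases (including the reporter's 2.13.0) let you skip the split-size verification that raises this error, and a forced re-download can refresh stale cached split metadata. A minimal sketch, assuming these keyword arguments exist in the installed version:

```python
from datasets import load_dataset

# Skip the split-size checks that raise NonMatchingSplitsSizesError
# (assumes a datasets version that supports verification_mode).
ds = load_dataset(
    "pysentimiento/spanish-tweets-small",
    verification_mode="no_checks",
)

# Alternatively, refresh the locally cached data and split metadata.
ds = load_dataset(
    "pysentimiento/spanish-tweets-small",
    download_mode="force_redownload",
)
```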
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6896/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6895/comments
https://api.github.com/repos/huggingface/datasets/issues/6895/events
https://github.com/huggingface/datasets/pull/6895
2,292,993,156
PR_kwDODunzps5vRK8P
6,895
Document that to_json defaults to JSON Lines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004914 / 0.011353 (-0.006439) | 0.003621 / 0.011008 (-0.007387) | 0.062841 / 0.038508 (0.024333) | 0.031630 / 0.023109 (0.008520) | 0.247666 / 0.275898 (-0.028232) | 0.288192 / 0.323480 (-0.035288) | 0.003145 / 0.007986 (-0.004841) | 0.002655 / 0.004328 (-0.001674) | 0.049484 / 0.004250 (0.045233) | 0.046593 / 0.037052 (0.009540) | 0.271550 / 0.258489 (0.013061) | 0.293228 / 0.293841 (-0.000613) | 0.026941 / 0.128546 (-0.101606) | 0.009936 / 0.075646 (-0.065710) | 0.201741 / 0.419271 (-0.217530) | 0.035435 / 0.043533 (-0.008098) | 0.251868 / 0.255139 (-0.003271) | 0.272082 / 0.283200 (-0.011118) | 0.019731 / 0.141683 (-0.121952) | 1.125752 / 1.452155 (-0.326403) | 1.152058 / 1.492716 (-0.340659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099695 / 0.018006 (0.081689) | 0.308306 / 0.000490 (0.307816) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018616 / 0.037411 (-0.018795) | 0.061886 / 0.014526 (0.047360) | 0.074059 / 0.176557 (-0.102498) | 0.124902 / 0.737135 (-0.612234) | 0.075108 / 0.296338 (-0.221230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.336707 / 0.215209 (0.121498) | 2.805197 / 2.077655 (0.727542) | 1.565826 / 1.504120 (0.061706) | 1.443708 / 1.541195 (-0.097486) | 1.341167 / 1.468490 (-0.127323) | 0.566814 / 4.584777 (-4.017963) | 2.374536 / 3.745712 (-1.371176) | 2.804921 / 5.269862 (-2.464941) | 1.739848 / 4.565676 (-2.825829) | 0.062779 / 0.424275 (-0.361496) | 0.005341 / 0.007607 (-0.002266) | 0.326482 / 0.226044 (0.100438) | 3.273460 / 2.268929 (1.004531) | 1.803656 / 55.444624 (-53.640968) | 1.502518 / 6.876477 (-5.373958) | 1.523665 / 2.142072 (-0.618407) | 0.642443 / 4.805227 (-4.162784) | 0.117820 / 6.500664 (-6.382844) | 0.042540 / 0.075469 (-0.032929) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963399 / 1.841788 (-0.878388) | 11.503648 / 8.074308 (3.429340) | 9.483957 / 10.191392 (-0.707435) | 0.129118 / 0.680424 (-0.551306) | 0.014136 / 0.534201 (-0.520065) | 0.286766 / 0.579283 (-0.292517) | 0.273328 / 0.434364 (-0.161036) | 0.324075 / 0.540337 (-0.216262) | 0.420408 / 1.386936 (-0.966528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005099 / 0.011353 (-0.006254) | 0.003721 / 0.011008 (-0.007288) | 0.050614 / 0.038508 (0.012106) | 0.031882 / 0.023109 (0.008773) | 0.267619 / 0.275898 (-0.008279) | 0.291874 / 0.323480 (-0.031606) | 0.004254 / 0.007986 (-0.003731) | 0.002766 / 0.004328 (-0.001563) | 0.049291 / 0.004250 (0.045041) | 0.043302 / 0.037052 (0.006249) | 0.274891 / 0.258489 (0.016402) | 0.304977 / 0.293841 (0.011136) | 0.029088 / 0.128546 (-0.099459) | 0.010425 / 0.075646 (-0.065221) | 0.057781 / 0.419271 (-0.361491) | 0.033589 / 0.043533 (-0.009943) | 0.264293 / 0.255139 (0.009154) | 0.284861 / 0.283200 (0.001661) | 0.018025 / 0.141683 (-0.123658) | 1.124954 / 1.452155 (-0.327200) | 1.161957 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.103622 / 0.018006 (0.085615) | 0.310915 / 0.000490 (0.310425) | 0.000241 / 0.000200 (0.000041) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022550 / 0.037411 (-0.014862) | 0.076466 / 0.014526 (0.061940) | 0.088297 / 0.176557 (-0.088260) | 0.128659 / 0.737135 (-0.608477) | 0.091823 / 0.296338 (-0.204516) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293431 / 0.215209 (0.078222) | 2.888105 / 2.077655 (0.810450) | 1.559581 / 1.504120 (0.055461) | 1.421424 / 1.541195 (-0.119771) | 1.437941 / 1.468490 (-0.030549) | 0.577544 / 4.584777 (-4.007233) | 0.968840 / 3.745712 (-2.776872) | 2.799796 / 5.269862 (-2.470066) | 1.744791 / 4.565676 (-2.820885) | 0.064159 / 0.424275 (-0.360116) | 0.005043 / 0.007607 (-0.002564) | 0.341039 / 0.226044 (0.114995) | 3.354402 / 2.268929 (1.085474) | 1.904093 / 55.444624 (-53.540532) | 1.604046 / 6.876477 (-5.272431) | 1.610384 / 2.142072 (-0.531688) | 0.658129 / 4.805227 (-4.147098) | 0.119297 / 6.500664 (-6.381367) | 0.041396 / 0.075469 (-0.034073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001109 / 1.841788 (-0.840678) | 12.081856 / 8.074308 (4.007548) | 10.090943 / 10.191392 (-0.100449) | 0.150433 / 0.680424 (-0.529991) | 0.015850 / 0.534201 (-0.518351) | 0.286590 / 0.579283 (-0.292693) | 0.131137 / 0.434364 (-0.303227) | 0.389033 / 0.540337 (-0.151304) | 0.421382 / 1.386936 (-0.965554) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22b7baed53f9f295a5dda2fe3eb0b7434bf57e89 \"CML watermark\")\n" ]
2024-05-13T14:22:34
2024-05-16T14:37:25
2024-05-16T14:31:26
MEMBER
null
Document that `Dataset.to_json` defaults to JSON Lines, by adding explanation in the corresponding docstring. Fix #6894.
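For illustration of the behavior being documented, a short sketch (file names are hypothetical): `Dataset.to_json` writes JSON Lines by default, and extra keyword arguments are forwarded to `pandas.DataFrame.to_json`, so a single JSON array can be requested explicitly. The `lines=False` path is assumed to be supported here as in pandas.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})

# Default: one JSON object per line (JSON Lines).
ds.to_json("data.jsonl")

# Kwargs are forwarded to pandas.DataFrame.to_json, so a regular
# JSON array can be requested instead.
ds.to_json("data.json", lines=False, orient="records")
```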
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6895/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6895", "html_url": "https://github.com/huggingface/datasets/pull/6895", "diff_url": "https://github.com/huggingface/datasets/pull/6895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6895.patch", "merged_at": "2024-05-16T14:31:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/6894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6894/comments
https://api.github.com/repos/huggingface/datasets/issues/6894/events
https://github.com/huggingface/datasets/issues/6894
2,292,840,226
I_kwDODunzps6Iqfci
6,894
Better document defaults of to_json
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-05-13T13:30:54
2024-05-16T14:31:27
2024-05-16T14:31:27
MEMBER
null
Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/). Related to: - #6891
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6894/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6893/comments
https://api.github.com/repos/huggingface/datasets/issues/6893/events
https://github.com/huggingface/datasets/pull/6893
2,292,677,439
PR_kwDODunzps5vQFEv
6,893
Close gzipped files properly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6893). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.003822 / 0.011008 (-0.007187) | 0.063285 / 0.038508 (0.024777) | 0.033780 / 0.023109 (0.010671) | 0.239580 / 0.275898 (-0.036318) | 0.264203 / 0.323480 (-0.059277) | 0.004207 / 0.007986 (-0.003778) | 0.002716 / 0.004328 (-0.001612) | 0.049569 / 0.004250 (0.045319) | 0.048591 / 0.037052 (0.011538) | 0.252606 / 0.258489 (-0.005884) | 0.285998 / 0.293841 (-0.007843) | 0.028650 / 0.128546 (-0.099896) | 0.010652 / 0.075646 (-0.064994) | 0.203962 / 0.419271 (-0.215310) | 0.036207 / 0.043533 (-0.007326) | 0.240374 / 0.255139 (-0.014765) | 0.263564 / 0.283200 (-0.019636) | 0.017722 / 0.141683 (-0.123961) | 1.143741 / 1.452155 (-0.308414) | 1.192452 / 1.492716 (-0.300264) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.141329 / 0.018006 (0.123323) | 0.320169 / 0.000490 (0.319679) | 0.000240 / 0.000200 (0.000041) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019885 / 0.037411 (-0.017526) | 0.063322 / 0.014526 (0.048796) | 0.075446 / 0.176557 (-0.101110) | 0.122619 / 0.737135 (-0.614517) | 0.077175 / 0.296338 (-0.219163) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281292 / 0.215209 (0.066083) | 2.796220 / 2.077655 (0.718565) | 1.456035 / 1.504120 (-0.048085) | 1.334445 / 1.541195 (-0.206750) | 1.380223 / 1.468490 (-0.088267) | 0.575895 / 4.584777 (-4.008882) | 2.375791 / 3.745712 (-1.369921) | 2.926273 / 5.269862 (-2.343589) | 1.832586 / 4.565676 (-2.733090) | 0.064323 / 0.424275 (-0.359952) | 0.005403 / 0.007607 (-0.002204) | 0.334088 / 0.226044 (0.108043) | 3.321174 / 2.268929 (1.052246) | 1.821432 / 55.444624 (-53.623193) | 1.520181 / 6.876477 (-5.356296) | 1.582487 / 2.142072 (-0.559585) | 0.645641 / 4.805227 (-4.159586) | 0.119596 / 6.500664 (-6.381068) | 0.043144 / 0.075469 (-0.032325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985104 / 1.841788 (-0.856684) | 12.518240 / 8.074308 (4.443932) | 10.017118 / 10.191392 (-0.174274) | 0.133900 / 0.680424 (-0.546524) | 0.014591 / 0.534201 (-0.519610) | 0.288326 / 0.579283 (-0.290957) | 0.262292 / 0.434364 (-0.172072) | 0.327601 / 0.540337 (-0.212736) | 0.421525 / 1.386936 (-0.965411) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005546 / 0.011353 (-0.005807) | 0.003961 / 0.011008 (-0.007047) | 0.051745 / 0.038508 (0.013237) | 0.032587 / 0.023109 (0.009478) | 0.266886 / 0.275898 (-0.009012) | 0.301327 / 0.323480 (-0.022153) | 0.004273 / 0.007986 (-0.003713) | 0.002851 / 0.004328 (-0.001477) | 0.049333 / 0.004250 (0.045082) | 0.044530 / 0.037052 (0.007478) | 0.286829 / 0.258489 (0.028340) | 0.310732 / 0.293841 (0.016892) | 0.029925 / 0.128546 (-0.098621) | 0.011270 / 0.075646 (-0.064377) | 0.059071 / 0.419271 (-0.360200) | 0.033899 / 0.043533 (-0.009633) | 0.270448 / 0.255139 (0.015309) | 0.286935 / 0.283200 (0.003735) | 0.019516 / 0.141683 (-0.122167) | 1.125815 / 1.452155 (-0.326339) | 1.179893 / 1.492716 (-0.312823) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096476 / 0.018006 (0.078470) | 0.305149 / 0.000490 (0.304660) | 0.000207 / 0.000200 (0.000008) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023648 / 0.037411 (-0.013763) | 0.082847 / 0.014526 (0.068322) | 0.089210 / 0.176557 (-0.087347) | 0.130194 / 0.737135 (-0.606941) | 0.091700 / 0.296338 (-0.204639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290995 / 0.215209 (0.075786) | 2.870335 / 2.077655 (0.792680) | 1.595661 / 1.504120 (0.091541) | 1.452319 / 1.541195 (-0.088876) | 1.505647 / 1.468490 (0.037157) | 0.575856 / 4.584777 (-4.008921) | 1.005527 / 3.745712 (-2.740185) | 2.927824 / 5.269862 (-2.342038) | 1.791702 / 4.565676 (-2.773975) | 0.064804 / 0.424275 (-0.359471) | 0.005203 / 0.007607 (-0.002404) | 0.348615 / 0.226044 (0.122570) | 3.463989 / 2.268929 (1.195060) | 1.947758 / 55.444624 (-53.496866) | 1.669974 / 6.876477 (-5.206502) | 1.721663 / 2.142072 (-0.420410) | 0.650999 / 4.805227 (-4.154228) | 0.117769 / 6.500664 (-6.382895) | 0.041738 / 0.075469 (-0.033731) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004140 / 1.841788 (-0.837648) | 13.035487 / 8.074308 (4.961179) | 10.318152 / 10.191392 (0.126760) | 0.143776 / 0.680424 (-0.536648) | 0.016272 / 0.534201 (-0.517929) | 0.286564 / 0.579283 (-0.292719) | 0.126579 / 0.434364 (-0.307785) | 0.397253 / 0.540337 (-0.143085) | 0.424968 / 1.386936 (-0.961968) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ddb6a283d7dfccc81a9fb12e761b819fed86c7a0 \"CML watermark\")\n", "Supersede and close: #6889" ]
2024-05-13T12:24:39
2024-05-13T13:53:17
2024-05-13T13:01:54
MEMBER
null
close https://github.com/huggingface/datasets/issues/6877
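The PR title ("Close gzipped files properly") points at a file-descriptor leak in the gzip handling. As a generic illustration of the bug class, not the actual patch, the leak pattern and its fix look roughly like this:

```python
import gzip

# Hypothetical illustration: a reader that never closes the raw handle
# leaks one file descriptor per call.
def read_gzip_leaky(path):
    raw = open(path, "rb")  # raw handle stays open after this returns
    return gzip.GzipFile(fileobj=raw).read()

# Closing both the decompressor and the raw handle releases the descriptor.
def read_gzip_closed(path):
    with open(path, "rb") as raw, gzip.GzipFile(fileobj=raw) as gz:
        return gz.read()
```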
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6893/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6893/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6893", "html_url": "https://github.com/huggingface/datasets/pull/6893", "diff_url": "https://github.com/huggingface/datasets/pull/6893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6893.patch", "merged_at": "2024-05-13T13:01:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/6892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6892/comments
https://api.github.com/repos/huggingface/datasets/issues/6892/events
https://github.com/huggingface/datasets/pull/6892
2,291,201,347
PR_kwDODunzps5vLIlp
6,892
Add support for categorical/dictionary types
{ "login": "EthanSteinberg", "id": 342233, "node_id": "MDQ6VXNlcjM0MjIzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/342233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EthanSteinberg", "html_url": "https://github.com/EthanSteinberg", "followers_url": "https://api.github.com/users/EthanSteinberg/followers", "following_url": "https://api.github.com/users/EthanSteinberg/following{/other_user}", "gists_url": "https://api.github.com/users/EthanSteinberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/EthanSteinberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanSteinberg/subscriptions", "organizations_url": "https://api.github.com/users/EthanSteinberg/orgs", "repos_url": "https://api.github.com/users/EthanSteinberg/repos", "events_url": "https://api.github.com/users/EthanSteinberg/events{/privacy}", "received_events_url": "https://api.github.com/users/EthanSteinberg/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-12T07:15:08
2024-05-12T07:15:37
null
NONE
null
Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column. Unfortunately, huggingface datasets currently does not support this type. So huggingface datasets cannot natively read many parquet files that use this datatype. This PR adds support for Huggingface Datasets to read categorical/dictionary data. Note: This PR functions by simply converting those dictionary/categorical types to strings. This means that huggingface datasets cannot take advantage of the compute benefits of categoricals, but it significantly simplifies the logic. At this time, I do not think it makes sense to optimize categorical support within huggingface datasets; we should only try to optimize later, if necessary. Closes #5706
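As background for the approach described above, a small PyArrow sketch of dictionary encoding and the string conversion the PR relies on (a minimal example, not the PR's code):

```python
import pyarrow as pa

# A low-cardinality string column benefits from dictionary encoding:
# values are stored once, rows become small integer indices.
plain = pa.array(["cat", "dog", "cat", "cat", "dog"])
encoded = plain.dictionary_encode()
print(encoded.type)  # dictionary<values=string, indices=int32, ...>

# The simple strategy described in the PR: cast back to plain strings.
decoded = encoded.cast(pa.string())
assert decoded.equals(plain)
```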
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6892/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6892/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6892", "html_url": "https://github.com/huggingface/datasets/pull/6892", "diff_url": "https://github.com/huggingface/datasets/pull/6892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6892.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6891/comments
https://api.github.com/repos/huggingface/datasets/issues/6891/events
https://github.com/huggingface/datasets/issues/6891
2,291,118,869
I_kwDODunzps6Ij7MV
6,891
Unable to load JSON saved using `to_json`
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @DarshanDeshpande,\r\n\r\nPlease note that the default format of the method `Dataset.to_json` is [JSON-Lines](https://jsonlines.org/): it passes `orient=\"records\", lines=True` to `pandas.DataFrame.to_json`. This format is specially useful for large datasets, since unlike regular JSON files, it does not require loading all the data into memory at once, but can be done iteratively by batches.\r\n\r\nIn order to read this file using the `json` library, you should parse line by line:\r\n```python\r\nwith open(\"full_dataset.json\", \"r\") as f:\r\n data = [json.loads(line) for line in f]\r\nlen(data)\r\n```\r\nMaybe we should explain this better in our docs.", "Now we explain this better in out docs:\r\n- #6895" ]
2024-05-12T01:02:51
2024-05-16T14:32:55
2024-05-12T07:02:02
NONE
null
### Describe the bug Datasets stored in the JSON format cannot be loaded using `json.load()` ### Steps to reproduce the bug ``` import json from datasets import load_dataset dataset = load_dataset("squad") train_dataset, test_dataset = dataset["train"], dataset["validation"] test_dataset.to_json("full_dataset.json") # This works loaded_test = load_dataset("json", data_files="full_dataset.json") # This fails loaded_test = json.load(open("full_dataset.json", "r")) ``` ### Expected behavior The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`. ### Environment info Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing
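A brief note on the behavior reported above: the written file is JSON Lines, so each line is a separate JSON document and `json.load()` on the whole file fails by design. Besides the `json`-module approach shown in the maintainer's reply, pandas can read it directly (a minimal sketch; the file name follows the report):

```python
import pandas as pd

# JSON Lines: one JSON object per line, so pass lines=True.
df = pd.read_json("full_dataset.json", lines=True)
print(len(df))
```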
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6891/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6890/comments
https://api.github.com/repos/huggingface/datasets/issues/6890/events
https://github.com/huggingface/datasets/issues/6890
2,288,699,041
I_kwDODunzps6Iasah
6,890
add `with_transform` and/or `set_transform` to IterableDataset
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-05-10T01:00:12
2024-05-10T01:00:46
null
NONE
null
### Feature request When working with a really large dataset, it would save a lot of time (and compute resources) to have either `with_transform` or `set_transform` from the `Dataset` class, instead of waiting for the entire dataset to map. ### Motivation I don't want to wait for a really long dataset to map; this would give `IterableDataset` an extra advantage over the `Dataset` class, reducing time and resources. ### Your contribution I am a little busy with my job search lately, but I would post about this feature on my social media. Apologies again (dad going to kick me out soon); if I ever have some free time I will contribute to making this a reality, but that's going to be hard     / (┬┬﹏┬┬)\
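A note on an interim workaround, not part of the original request: `IterableDataset.map` is already applied lazily, example by example, as the stream is consumed, which gives much of the on-the-fly behavior that `set_transform` provides for `Dataset`. A minimal sketch, with the dataset name and transform as placeholders:

```python
from datasets import load_dataset

# streaming=True yields an IterableDataset; nothing is downloaded upfront.
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)

def transform(example):
    example["text"] = example["text"].lower()
    return example

# map on an IterableDataset is lazy: the function runs on the fly while
# iterating, with no upfront pass over the whole dataset.
ds = ds.map(transform)
print(next(iter(ds))["text"][:40])
```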
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6890/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6889/comments
https://api.github.com/repos/huggingface/datasets/issues/6889/events
https://github.com/huggingface/datasets/pull/6889
2,287,720,539
PR_kwDODunzps5u_hW-
6,889
fix bug #6877
{ "login": "arthasking123", "id": 16257131, "node_id": "MDQ6VXNlcjE2MjU3MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthasking123", "html_url": "https://github.com/arthasking123", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "repos_url": "https://api.github.com/users/arthasking123/repos", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@loicmagne, @KennethEnevoldsen", "Can you give more details on why this fix works ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Can you give more details on why this fix works ?\r\n\r\nIn order to locate this file handle problem, I defined a print_open_files_count() function using psutil library:\r\n```python\r\ndef print_open_files_count(markstr):\r\n pid = os.getpid()\r\n p = psutil.Process(pid)\r\n open_files = p.open_files()\r\n print(f\"{markstr}_Open files count: {len(open_files)}\")\r\n\r\n\r\n```\r\n\r\nand added this function as below:\r\n```python\r\n\r\nwith open(file, \"rb\") as f:\r\n print_open_files_count('Before')\r\n...\r\n...\r\n batch_idx += 1\r\nprint_open_files_count('After')\r\n```\r\nand the console output as below when loading the 'mteb/biblenlp-corpus-mmteb' dataset :\r\n```shell\r\nBefore_Open files count: 1\r\nAfter_Open files count: 1\r\nBefore_Open files count: 2\r\nAfter_Open files count: 2\r\nBefore_Open files count: 3\r\nAfter_Open files count: 3\r\n...\r\n```\r\nwhich indicated there was a file handle leakage in the dataset loading process. So I tried to close the file handle manually using os library and found it works although the core issue has not been found temporarily", "I think it would be better to find the cause and have a cleaner fix, because while your suggested fix works for a simple case, it will lead to files that will stay open if there is an error during the dataset generation for example.\r\n\r\n\r\nBtw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. Also `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/", "> Btw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. 
Also `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/\r\n\r\nhow about setting the limitation of open files to 1024?", "I was able to reproduce on colab with\r\n\r\n```\r\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\r\n```\r\n\r\n(also needed to `!pip install -qq git+https://github.com/huggingface/huggingface_hub.git@less-paths-info-calls` to fix a rate limit for some reason)\r\n\r\nwhich led to me find that the issue came from the `GzipFileSystem` that wasn't closing files.\r\n\r\nto reproduce:\r\n\r\n```python\r\nimport gzip\r\nimport os\r\n\r\nimport datasets\r\nimport fsspec\r\n\r\n# os.mkdir(\"tmp\")\r\n# for i in range(300):\r\n# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:\r\n# f.write(\"yo\")\r\n\r\nfor i in range(300):\r\n with fsspec.open(f\"gzip://{i}.txt::tmp/{i}.txt.gz\", \"rb\") as f:\r\n f.read()\r\n```\r\n\r\nI opened https://github.com/huggingface/datasets/pull/6893 to fix this, can you try if it works on your side ?", "ok\n\n\n\n---- Replied Message ----\n| From | Quentin ***@***.***> |\n| Date | 05/13/2024 20:28 |\n| To | ***@***.***> |\n| Cc | ***@***.***>***@***.***> |\n| Subject | Re: [huggingface/datasets] fix bug #6877 (PR #6889) |\n\nI was able to reproduce on colab with\n\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\n\n\n(also needed to !pip install -qq ***@***.*** to fix a rate limit for some reason)\n\nwhich lead to me find that the issue came from the GzipFileSystem that wasn't closing files.\n\nto reproduce:\n\nimportgzipimportosimportdatasetsimportfsspec# os.mkdir(\"tmp\")# for i in range(300):# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:# f.write(\"yo\")foriinrange(300):\n withfsspec.open(f\"gzip://::tmp/{i}.txt.gz\", \"rb\") asf:\n f.read()\n\nI opened #6893 to fix this, can you try if it works on your side ?\n\n—\nReply to this email directly, view it on GitHub, or unsubscribe.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>", "Superseded by:\r\n- #6893" ]
2024-05-09T13:38:40
2024-05-13T13:35:32
2024-05-13T13:35:32
NONE
null
Fix bug #6877: possibly `f` becomes invalid after the yield. The results are below: Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26148.48it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 409731.44it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 289720.84it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26663.42it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 434056.21it/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 13222.33files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:04<00:00, 180.67files/s] Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [01:35<00:00, 8.70files/s] Generating train split: 1571592 examples [00:08, 176736.09 examples/s] Generating test split: 85533 examples [00:01, 48224.56 examples/s] Generating validation split: 86246 examples [00:01, 50164.16 examples/s] Fix https://github.com/huggingface/datasets/issues/6877. CC: @natolambert
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6889/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6889", "html_url": "https://github.com/huggingface/datasets/pull/6889", "diff_url": "https://github.com/huggingface/datasets/pull/6889.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6889.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6888/comments
https://api.github.com/repos/huggingface/datasets/issues/6888/events
https://github.com/huggingface/datasets/pull/6888
2,287,169,676
PR_kwDODunzps5u9omr
6,888
Support WebDataset containing file basenames with dots
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6888). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I think webdataset splits the file name and extension using the first dot no ?\r\n\r\nhttps://github.com/webdataset/webdataset/blob/945b251a872ec0d337be8f9ea17a9c5b0d017ff3/webdataset/tariterators.py#L226\r\n\r\nlinks to this function that splits on first dot\r\n\r\n```python\r\n\r\ndef base_plus_ext(path):\r\n \"\"\"Split off all file extensions.\r\n\r\n Returns base, allext.\r\n\r\n Args:\r\n path: path with extensions\r\n\r\n Returns:\r\n path with all extensions removed\r\n \"\"\"\r\n match = re.match(r\"^((?:.*/|)[^.]+)[.]([^/]*)$\", path)\r\n if not match:\r\n return None, None\r\n return match.group(1), match.group(2)\r\n```", "So maybe the original issue is actually due to one of the files containing a dot in its file name that is not for the extension\r\n\r\n```python\r\n>>> base_plus_ext(\"15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png\")\r\n('15_Cohen_1-s2', '0-S0929664620300449-gr3_lrg-b.png')\r\n```", "Thanks for your review, @lhoestq.\r\n\r\nI was not aware that `webdataset` requires filenames without dots in their basenames.", "I they can have dots for the extension (that becomes the column name) but not in the key used to group files into samples" ]
2024-05-09T08:25:30
2024-05-10T13:54:06
2024-05-10T13:54:06
MEMBER
null
Support WebDataset containing file basenames with dots. Fix #6880.
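To illustrate the constraint discussed in the review comments, here is a small sketch (file names hypothetical) of WebDataset's first-dot split, using the `base_plus_ext` function quoted above: a dot inside the basename shifts part of the name into the "extension", so the expected column (e.g. `png`) is never produced.

```python
import re

def base_plus_ext(path):
    # webdataset's first-dot rule: everything after the first dot is the extension
    match = re.match(r"^((?:.*/|)[^.]+)[.]([^/]*)$", path)
    if not match:
        return None, None
    return match.group(1), match.group(2)

print(base_plus_ext("sample-001.png"))     # ('sample-001', 'png') -> column 'png'
print(base_plus_ext("sample.v2-001.png"))  # ('sample', 'v2-001.png') -> no 'png' column
```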
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6888/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6888", "html_url": "https://github.com/huggingface/datasets/pull/6888", "diff_url": "https://github.com/huggingface/datasets/pull/6888.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6888.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6887/comments
https://api.github.com/repos/huggingface/datasets/issues/6887/events
https://github.com/huggingface/datasets/issues/6887
2,286,786,396
I_kwDODunzps6ITZdc
6,887
FAISS load to None
{ "login": "brainer3220", "id": 40418544, "node_id": "MDQ6VXNlcjQwNDE4NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/40418544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brainer3220", "html_url": "https://github.com/brainer3220", "followers_url": "https://api.github.com/users/brainer3220/followers", "following_url": "https://api.github.com/users/brainer3220/following{/other_user}", "gists_url": "https://api.github.com/users/brainer3220/gists{/gist_id}", "starred_url": "https://api.github.com/users/brainer3220/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainer3220/subscriptions", "organizations_url": "https://api.github.com/users/brainer3220/orgs", "repos_url": "https://api.github.com/users/brainer3220/repos", "events_url": "https://api.github.com/users/brainer3220/events{/privacy}", "received_events_url": "https://api.github.com/users/brainer3220/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hello,\r\n\r\nI'm not sure I understand. \r\nThe return value of `ds.load_faiss_index` is None as expected.\r\n\r\nI see that loading an Index on a dataset that doesn't have an `embedding` column doesn't raise an Issue. Is that the issue?\r\n\r\nSo `ds` doesn't have an `embedding` column, but we load an index that looks for it. But this will raise an issue only when calling `ds.search`." ]
2024-05-09T02:43:50
2024-05-16T20:44:23
null
NONE
null
### Describe the bug I've used FAISS with Datasets and saved the index to disk. Loading the saved index back raises no error, but the call returns None: ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Steps to reproduce the bug # 1. ```python ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64) ds_with_embeddings.add_faiss_index(column='embeddings') ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss') ``` # 2. ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Expected behavior The index is added to the dataset as a searchable column. ### Environment info Google Colab, SageMaker Notebook
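A minimal sketch of the intended usage (variable names hypothetical, and note the report saves to `index.faiss` but loads `my_index.faiss`; the paths would need to match): `load_faiss_index` attaches the index in place and returns None by design, so the dataset should not be rebound to its return value.

```python
# Sketch of the intended flow: load_faiss_index mutates `ds` in place
# and returns None by design, then the index is queried by name.
ds.load_faiss_index('embeddings', 'index.faiss')  # same file name used when saving
scores, retrieved = ds.get_nearest_examples('embeddings', query_embedding, k=5)
```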
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6887/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6886/comments
https://api.github.com/repos/huggingface/datasets/issues/6886/events
https://github.com/huggingface/datasets/issues/6886
2,286,328,984
I_kwDODunzps6IRpyY
6,886
load_dataset with data_dir and cache_dir set fail with not supported
{ "login": "fah", "id": 322496, "node_id": "MDQ6VXNlcjMyMjQ5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/322496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fah", "html_url": "https://github.com/fah", "followers_url": "https://api.github.com/users/fah/followers", "following_url": "https://api.github.com/users/fah/following{/other_user}", "gists_url": "https://api.github.com/users/fah/gists{/gist_id}", "starred_url": "https://api.github.com/users/fah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fah/subscriptions", "organizations_url": "https://api.github.com/users/fah/orgs", "repos_url": "https://api.github.com/users/fah/repos", "events_url": "https://api.github.com/users/fah/events{/privacy}", "received_events_url": "https://api.github.com/users/fah/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-08T19:52:35
2024-05-08T19:58:11
null
NONE
null
### Describe the bug With Python 3.11 I execute: ```py from transformers import Wav2Vec2Processor, Data2VecAudioModel import torch from torch import nn from datasets import load_dataset, concatenate_datasets # load demo audio and set processor dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ``` This fails in the last line with ```log Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7) Traceback (most recent call last): File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module> dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` ### Steps to reproduce the bug I set up a venv with requirements.txt ```txt transformers==4.40.2 torch==2.2.2 datasets==2.16.0 fsspec==2023.9.2 ``` pip freeze is: ``` aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.16.0 dill==0.3.7 filelock==3.14.0 frozenlist==1.4.1 fsspec==2023.9.2 huggingface-hub==0.23.0 idna==3.7 Jinja2==3.1.4 MarkupSafe==2.1.5 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.15 networkx==3.3 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.0.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 regex==2024.4.28 requests==2.31.0 safetensors==0.4.3 six==1.16.0 sympy==1.12 tokenizers==0.19.1 torch==2.2.2 tqdm==4.66.4 transformers==4.40.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4 ``` I execute this on an M1 Mac. ### Expected behavior I don't understand the error message. Why is "local" caching not supported? Would it be possible to add a hint to the error message on how to solve this issue? ### Environment info source .... python -u example.py
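A hedged workaround sketch, not a confirmed fix: this `NotImplementedError` has typically been reported when the installed `datasets` and `fsspec` versions are out of sync, so aligning the two packages and clearing the stale cache entry is a reasonable first step before re-running the load.

```python
# Hedged workaround sketch, assuming a datasets/fsspec version mismatch
# (not a confirmed root cause for this report). First:
#   pip install -U datasets fsspec
# then clear the stale cache entry and retry:
import shutil

shutil.rmtree("cache/librispeech_asr", ignore_errors=True)  # path from the report

from datasets import load_dataset

dataset_clean = load_dataset(
    "librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache"
)
```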
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6886/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6885/comments
https://api.github.com/repos/huggingface/datasets/issues/6885/events
https://github.com/huggingface/datasets/pull/6885
2,285,115,400
PR_kwDODunzps5u2urB
6,885
Support jax 0.4.27 in CI tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6885). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003749 / 0.011008 (-0.007260) | 0.063451 / 0.038508 (0.024943) | 0.031164 / 0.023109 (0.008055) | 0.252024 / 0.275898 (-0.023874) | 0.274479 / 0.323480 (-0.049001) | 0.003238 / 0.007986 (-0.004748) | 0.002668 / 0.004328 (-0.001660) | 0.049570 / 0.004250 (0.045320) | 0.046159 / 0.037052 (0.009107) | 0.273416 / 0.258489 (0.014927) | 0.299064 / 0.293841 (0.005223) | 0.027758 / 0.128546 (-0.100788) | 0.010702 / 0.075646 (-0.064944) | 0.207244 / 0.419271 (-0.212028) | 0.036139 / 0.043533 (-0.007394) | 0.249966 / 0.255139 (-0.005173) | 0.270685 / 0.283200 (-0.012515) | 0.019938 / 0.141683 (-0.121745) | 1.133642 / 1.452155 (-0.318512) | 1.170712 / 1.492716 (-0.322004) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098352 / 0.018006 (0.080346) | 0.310738 / 0.000490 (0.310248) | 0.000225 / 0.000200 (0.000025) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018151 / 0.037411 (-0.019261) | 0.061169 / 0.014526 (0.046644) | 0.073275 / 0.176557 (-0.103281) | 0.120320 / 0.737135 (-0.616815) | 0.083945 / 0.296338 (-0.212394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283285 / 0.215209 (0.068075) | 2.766129 / 2.077655 (0.688475) | 1.477831 / 1.504120 (-0.026289) | 1.363365 / 1.541195 (-0.177830) | 1.402081 / 1.468490 (-0.066409) | 0.554100 / 4.584777 (-4.030677) | 2.374885 / 3.745712 (-1.370827) | 2.866260 / 5.269862 (-2.403601) | 1.775109 / 4.565676 (-2.790567) | 0.062416 / 0.424275 (-0.361859) | 0.005490 / 0.007607 (-0.002117) | 0.379293 / 0.226044 (0.153248) | 3.330534 / 2.268929 (1.061606) | 1.881648 / 55.444624 (-53.562977) | 1.549847 / 6.876477 (-5.326629) | 1.660350 / 2.142072 (-0.481722) | 0.631013 / 4.805227 (-4.174214) | 0.116646 / 6.500664 (-6.384018) | 0.042977 / 0.075469 (-0.032492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996102 / 1.841788 (-0.845685) | 12.079143 / 8.074308 (4.004835) | 9.903568 / 10.191392 (-0.287824) | 0.141447 / 0.680424 (-0.538976) | 0.014115 / 0.534201 (-0.520086) | 0.287576 / 0.579283 (-0.291707) | 0.262951 / 0.434364 (-0.171413) | 0.325167 / 0.540337 (-0.215170) | 0.425780 / 1.386936 (-0.961156) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005213 / 0.011353 (-0.006139) | 0.003686 / 0.011008 (-0.007322) | 0.049963 / 0.038508 (0.011455) | 0.030635 / 0.023109 (0.007525) | 0.263992 / 0.275898 (-0.011906) | 0.289960 / 0.323480 (-0.033520) | 0.004281 / 0.007986 (-0.003704) | 0.002709 / 0.004328 (-0.001619) | 0.049147 / 0.004250 (0.044897) | 0.041036 / 0.037052 (0.003984) | 0.277621 / 0.258489 (0.019132) | 0.305689 / 0.293841 (0.011848) | 0.029342 / 0.128546 (-0.099205) | 0.010350 / 0.075646 (-0.065296) | 0.058221 / 0.419271 (-0.361051) | 0.033774 / 0.043533 (-0.009759) | 0.266163 / 0.255139 (0.011024) | 0.286866 / 0.283200 (0.003666) | 0.018463 / 0.141683 (-0.123219) | 1.136930 / 1.452155 (-0.315225) | 1.193974 / 1.492716 (-0.298742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.106787 / 0.018006 (0.088781) | 0.304229 / 0.000490 (0.303740) | 0.000209 / 0.000200 (0.000009) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022066 / 0.037411 (-0.015346) | 0.075510 / 0.014526 (0.060984) | 0.087273 / 0.176557 (-0.089284) | 0.128050 / 0.737135 (-0.609085) | 0.090492 / 0.296338 (-0.205847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299034 / 0.215209 (0.083825) | 2.899115 / 2.077655 (0.821461) | 1.625169 / 1.504120 (0.121049) | 1.456491 / 1.541195 (-0.084703) | 1.433063 / 1.468490 (-0.035427) | 0.565416 / 4.584777 (-4.019361) | 0.979298 / 3.745712 (-2.766415) | 2.748965 / 5.269862 (-2.520897) | 1.738671 / 4.565676 (-2.827005) | 0.062869 / 0.424275 (-0.361407) | 0.005001 / 0.007607 (-0.002606) | 0.348534 / 0.226044 (0.122489) | 3.437791 / 2.268929 (1.168862) | 1.896804 / 55.444624 (-53.547821) | 1.658544 / 6.876477 (-5.217933) | 1.649106 / 2.142072 (-0.492966) | 0.653791 / 4.805227 (-4.151436) | 0.125522 / 6.500664 (-6.375142) | 0.051260 / 0.075469 (-0.024209) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025170 / 1.841788 (-0.816617) | 12.247968 / 8.074308 (4.173660) | 9.863777 / 10.191392 (-0.327615) | 0.140498 / 0.680424 (-0.539926) | 0.015158 / 0.534201 (-0.519043) | 0.288210 / 0.579283 (-0.291073) | 0.128207 / 0.434364 (-0.306157) | 0.398735 / 0.540337 (-0.141603) | 0.418217 / 1.386936 (-0.968719) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#871eabc7b23c27d677bc06ae2cc1ec3a2a04b10f \"CML watermark\")\n" ]
2024-05-08T09:19:37
2024-05-08T09:43:19
2024-05-08T09:35:16
MEMBER
null
Support jax 0.4.27 in CI tests by using jax Array `devices` method instead of `device` (which no longer exists). Fix #6884.
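A sketch of the API change being accommodated (the exact test diff is not shown here): in jax 0.4.27 a jax `Array` no longer exposes `.device()`; `.devices()` returns the set of devices the array lives on, so an assertion like the failing one would be updated along these lines.

```python
import jax
import jax.numpy as jnp

x = jnp.ones(3)
device = jax.devices()[0]

# old (removed in jax 0.4.27): x.device() == device
# new: devices() returns a set of devices, so compare against a one-element set
assert x.devices() == {device}
```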
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6885/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6885", "html_url": "https://github.com/huggingface/datasets/pull/6885", "diff_url": "https://github.com/huggingface/datasets/pull/6885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6885.patch", "merged_at": "2024-05-08T09:35:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/6884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6884/comments
https://api.github.com/repos/huggingface/datasets/issues/6884/events
https://github.com/huggingface/datasets/issues/6884
2,284,839,687
I_kwDODunzps6IL-MH
6,884
CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-05-08T07:01:47
2024-05-08T09:35:17
2024-05-08T09:35:17
MEMBER
null
After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error: ```Python traceback AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? ``` See: https://github.com/huggingface/datasets/actions/runs/8997488610/job/24715736153 ```Python traceback ___________________ FormatterTest.test_jax_formatter_device ____________________ [gw1] linux -- Python 3.10.14 /opt/hostedtoolcache/Python/3.10.14/x64/bin/python self = <tests.test_formatting.FormatterTest testMethod=test_jax_formatter_device> @require_jax def test_jax_formatter_device(self): import jax from datasets.formatting import JaxFormatter pa_table = self._create_dummy_table() device = jax.devices()[0] formatter = JaxFormatter(device=str(device)) row = formatter.format_row(pa_table) > assert row["a"].device() == device E AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? tests/test_formatting.py:630: AttributeError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6884/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6883/comments
https://api.github.com/repos/huggingface/datasets/issues/6883/events
https://github.com/huggingface/datasets/pull/6883
2,284,808,399
PR_kwDODunzps5u1sL1
6,883
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6883). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005764 / 0.011353 (-0.005589) | 0.004182 / 0.011008 (-0.006826) | 0.064520 / 0.038508 (0.026012) | 0.034260 / 0.023109 (0.011151) | 0.245677 / 0.275898 (-0.030221) | 0.277889 / 0.323480 (-0.045591) | 0.004569 / 0.007986 (-0.003417) | 0.002905 / 0.004328 (-0.001423) | 0.049346 / 0.004250 (0.045095) | 0.050529 / 0.037052 (0.013476) | 0.264718 / 0.258489 (0.006229) | 0.295705 / 0.293841 (0.001864) | 0.028144 / 0.128546 (-0.100402) | 0.011048 / 0.075646 (-0.064598) | 0.206290 / 0.419271 (-0.212982) | 0.035886 / 0.043533 (-0.007647) | 0.245038 / 0.255139 (-0.010101) | 0.269835 / 0.283200 (-0.013365) | 0.018927 / 0.141683 (-0.122756) | 1.136536 / 1.452155 (-0.315619) | 1.183256 / 1.492716 (-0.309460) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.115372 / 0.018006 (0.097366) | 0.315471 / 0.000490 (0.314982) | 0.000238 / 0.000200 (0.000038) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021201 / 0.037411 (-0.016210) | 0.070374 / 0.014526 (0.055848) | 0.077557 / 0.176557 (-0.099000) | 0.124713 / 0.737135 (-0.612423) | 0.078850 / 0.296338 (-0.217489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 
5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278674 / 0.215209 (0.063465) | 2.739597 / 2.077655 (0.661942) | 1.438214 / 1.504120 (-0.065906) | 1.326373 / 1.541195 (-0.214822) | 1.370961 / 1.468490 (-0.097529) | 0.569160 / 4.584777 (-4.015617) | 2.411890 / 3.745712 (-1.333822) | 2.954073 / 5.269862 (-2.315788) | 1.816883 / 4.565676 (-2.748794) | 0.063123 / 0.424275 (-0.361152) | 0.005531 / 0.007607 (-0.002076) | 0.328184 / 0.226044 (0.102140) | 3.263083 / 2.268929 (0.994155) | 1.809159 / 55.444624 (-53.635465) | 1.535257 / 6.876477 (-5.341220) | 1.583428 / 2.142072 (-0.558644) | 0.642950 / 4.805227 (-4.162277) | 0.122240 / 6.500664 (-6.378424) | 0.044596 / 0.075469 (-0.030873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999993 / 1.841788 (-0.841795) | 12.941508 / 8.074308 (4.867200) | 10.417519 / 10.191392 (0.226127) | 0.134345 / 0.680424 (-0.546079) | 0.014651 / 0.534201 (-0.519550) | 0.288660 / 0.579283 (-0.290623) | 0.274550 / 0.434364 (-0.159814) | 0.327785 / 0.540337 (-0.212553) | 0.422954 / 1.386936 (-0.963982) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006051 / 0.011353 (-0.005302) | 0.003926 / 0.011008 (-0.007082) | 0.051480 / 0.038508 (0.012972) | 0.036102 / 0.023109 (0.012992) | 0.273358 / 0.275898 (-0.002540) | 0.293261 / 0.323480 (-0.030219) | 0.004562 / 0.007986 (-0.003424) | 0.002918 / 0.004328 (-0.001410) | 0.050386 / 0.004250 (0.046135) | 0.048427 / 0.037052 (0.011375) | 0.280178 / 0.258489 (0.021689) | 0.314599 / 0.293841 (0.020758) | 0.030876 / 0.128546 (-0.097670) | 0.010571 / 0.075646 (-0.065076) | 0.058555 / 0.419271 (-0.360717) | 0.034974 / 0.043533 (-0.008559) | 0.266604 / 0.255139 (0.011465) | 0.284712 / 0.283200 (0.001512) | 0.020296 / 0.141683 (-0.121387) | 1.116760 / 1.452155 (-0.335395) | 1.157794 / 1.492716 (-0.334922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103777 / 0.018006 (0.085771) | 0.314267 / 0.000490 (0.313778) | 0.000226 / 0.000200 (0.000026) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023837 / 0.037411 (-0.013574) | 0.082145 / 0.014526 (0.067619) | 0.090434 / 0.176557 (-0.086123) | 0.132096 / 0.737135 (-0.605040) | 0.092426 / 0.296338 (-0.203913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299554 / 0.215209 (0.084345) | 2.932382 / 2.077655 (0.854727) | 1.549994 / 1.504120 (0.045874) | 1.454944 / 1.541195 (-0.086251) | 1.474987 / 1.468490 (0.006497) | 0.586149 / 4.584777 (-3.998628) | 0.972118 / 3.745712 (-2.773594) | 2.991719 / 5.269862 (-2.278142) | 1.876365 / 4.565676 (-2.689311) | 0.065178 / 0.424275 (-0.359098) | 0.005114 / 0.007607 (-0.002493) | 0.353704 / 0.226044 (0.127660) | 3.500940 / 2.268929 (1.232012) | 1.965581 / 55.444624 (-53.479043) | 1.662594 / 6.876477 (-5.213883) | 1.702761 / 2.142072 (-0.439311) | 0.663879 / 4.805227 (-4.141348) | 0.120036 / 6.500664 (-6.380628) | 0.043195 / 0.075469 (-0.032274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997690 / 1.841788 (-0.844098) | 13.448914 / 8.074308 (5.374606) | 10.132469 / 10.191392 (-0.058923) | 0.148493 / 0.680424 (-0.531930) | 0.016670 / 0.534201 (-0.517531) | 0.289708 / 0.579283 (-0.289575) | 0.132938 / 0.434364 (-0.301425) | 0.411425 / 0.540337 (-0.128913) | 0.430748 / 1.386936 (-0.956188) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70e38090f070d323d452b5e746686f31b1086bd8 \"CML watermark\")\n", "maybe not super important since it was not reported by users, this can be included in the next release" ]
2024-05-08T06:43:29
2024-05-17T09:31:48
2024-05-16T14:34:02
MEMBER
null
Require Pillow >= 9.4.0 to avoid an AttributeError when loading an image dataset. The `PIL.Image.ExifTags` module that we use in our code was introduced in Pillow 9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3 The bug #6881 was introduced in datasets-2.19.0 by this PR: - #6739 Fix #6881.
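For older environments, a backward-compatible sketch is shown below. This is not the code shipped in the PR (which simply raises the Pillow requirement); it relies on the fact that `ExifTags.Base.Orientation` corresponds to EXIF tag 274 (0x0112), so the lookup can fall back to the raw tag id when the enum is missing.

```python
import PIL.Image

# PIL.Image.ExifTags exists only on Pillow >= 9.4.0; fall back to the raw
# EXIF tag id for Orientation (274 / 0x0112) on older releases.
if hasattr(PIL.Image, "ExifTags"):
    ORIENTATION = PIL.Image.ExifTags.Base.Orientation
else:
    ORIENTATION = 0x0112

def needs_transpose(image):
    # True if the image carries an EXIF orientation that should be applied
    return image.getexif().get(ORIENTATION) is not None
```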
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6883/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6883", "html_url": "https://github.com/huggingface/datasets/pull/6883", "diff_url": "https://github.com/huggingface/datasets/pull/6883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6883.patch", "merged_at": "2024-05-16T14:34:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/6882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6882/comments
https://api.github.com/repos/huggingface/datasets/issues/6882/events
https://github.com/huggingface/datasets/issues/6882
2,284,803,158
I_kwDODunzps6IL1RW
6,882
Connection Error When Using By-pass Proxies
{ "login": "MRNOBODY-ZST", "id": 78351684, "node_id": "MDQ6VXNlcjc4MzUxNjg0", "avatar_url": "https://avatars.githubusercontent.com/u/78351684?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MRNOBODY-ZST", "html_url": "https://github.com/MRNOBODY-ZST", "followers_url": "https://api.github.com/users/MRNOBODY-ZST/followers", "following_url": "https://api.github.com/users/MRNOBODY-ZST/following{/other_user}", "gists_url": "https://api.github.com/users/MRNOBODY-ZST/gists{/gist_id}", "starred_url": "https://api.github.com/users/MRNOBODY-ZST/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MRNOBODY-ZST/subscriptions", "organizations_url": "https://api.github.com/users/MRNOBODY-ZST/orgs", "repos_url": "https://api.github.com/users/MRNOBODY-ZST/repos", "events_url": "https://api.github.com/users/MRNOBODY-ZST/events{/privacy}", "received_events_url": "https://api.github.com/users/MRNOBODY-ZST/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Changing the supplier of the proxy will solve this problem, or you can visit and follow the instructions in https://hf-mirror.com " ]
2024-05-08T06:40:14
2024-05-17T06:38:30
null
NONE
null
### Describe the bug I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides 🤔, loading a dataset runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))" I have already read the documentation provided on the Hugging Face site, but I didn't see detailed instructions on how to set up proxies for this library. ### Steps to reproduce the bug 1. Turn on any proxy software such as Clash / ShadowsocksR etc. 2. Export the system variables with the port provided by your proxy software in WSL (other applications can use the proxy fine, except the datasets library) 3. Load any dataset from Hugging Face online ### Expected behavior --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) Cell In[33], line 3 1 from datasets import load_metric ----> 3 metric = load_metric("seqeval") File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs) 44 warnings.warn(warning_msg, category=FutureWarning, stacklevel=2) 45 _emitted_deprecation_warnings.add(func_hash) ---> 46 return deprecated_function(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs) 2101 warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning) 2103 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) -> 2104 metric_module = metric_module_factory( 2105 path, 2106 revision=revision, 2107 download_config=download_config, 2108 download_mode=download_mode, 2109 trust_remote_code=trust_remote_code, 2110 ).module_path 2111 metric_cls = import_main_class(metric_module, dataset=False) 2112 metric = metric_cls( 2113 config_name=config_name, 2114 process_id=process_id, ... --> 633 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 634 elif response is not None: 635 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"))) ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
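A hedged sketch of the usual setup (the port and endpoint are examples, not verified for this environment): downloads in `datasets` honor the standard proxy environment variables, and the mirror mentioned in the comment above can be selected via `HF_ENDPOINT`. Note that the failing URL here is raw.githubusercontent.com, from which metric scripts are fetched, so a Hub mirror alone does not cover it.

```python
import os

# Example values only; adjust the port to whatever your proxy exposes.
os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"
# Mirror suggested in the comment above (affects Hub requests only,
# not raw.githubusercontent.com):
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from datasets import load_dataset  # import after setting the variables

ds = load_dataset("glue", "mrpc", split="train")
```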
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6882/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6881/comments
https://api.github.com/repos/huggingface/datasets/issues/6881/events
https://github.com/huggingface/datasets/issues/6881
2,284,794,009
I_kwDODunzps6ILzCZ
6,881
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-05-08T06:33:57
2024-05-16T14:34:03
2024-05-16T14:34:03
MEMBER
null
When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised: ```Python traceback AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` The error traceback: ```Python traceback ~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self) 1391 # `IterableDataset` automatically fills missing columns with None. 1392 # This is done with `_apply_feature_types_on_example`. -> 1393 example = _apply_feature_types_on_example( 1394 example, self.features, token_per_repo_id=self._token_per_repo_id 1395 ) ~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id) 1080 encoded_example = features.encode_example(example) 1081 # Decode example for Audio feature, e.g. -> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 1083 return decoded_example 1084 ~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id) 1974 -> 1975 return { 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] ~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0) 1974 1975 return { -> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] 1978 else value ~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id) 1339 # we pass the token to read and decode files from private repositories in streaming mode 1340 if obj is not None and schema.decode: -> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1342 return obj 1343 ~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id) 187 image = PIL.Image.open(BytesIO(bytes_)) 188 image.load() # to avoid "Too many open files" errors --> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 190 image = PIL.ImageOps.exif_transpose(image) 191 if self.mode and self.mode != image.mode: ~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name) 75 ) 76 return categories[name] ---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'") 78 79 AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` ### Environment info Since datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6881/timeline
null
completed
null
null
false
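For context on the traceback in the issue above, here is a minimal, hedged sketch of a version-robust EXIF orientation check. The only assumption beyond the issue text is the numeric EXIF tag number (274, i.e. 0x0112, for Orientation); `Image.getexif()` and `ImageOps.exif_transpose()` predate Pillow 8.4, while the `PIL.Image.ExifTags` accessor used in the failing line does not.

```python
# A minimal sketch of a version-robust EXIF orientation check.
# Assumption: the numeric EXIF tag for Orientation is 274 (0x0112), which is
# stable across Pillow versions, unlike the PIL.Image.ExifTags accessor that
# old Pillow releases (such as 8.4.0 in this report) lack.
from io import BytesIO

import PIL.Image
import PIL.ImageOps

ORIENTATION_TAG = 274  # EXIF "Orientation"


def open_and_transpose(image_bytes: bytes) -> PIL.Image.Image:
    image = PIL.Image.open(BytesIO(image_bytes))
    image.load()  # to avoid "Too many open files" errors
    # Look the tag up by number instead of via PIL.Image.ExifTags.Base.Orientation
    if image.getexif().get(ORIENTATION_TAG) is not None:
        image = PIL.ImageOps.exif_transpose(image)
    return image
```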
https://api.github.com/repos/huggingface/datasets/issues/6880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6880/comments
https://api.github.com/repos/huggingface/datasets/issues/6880/events
https://github.com/huggingface/datasets/issues/6880
2,283,278,337
I_kwDODunzps6IGBAB
6,880
Webdataset: KeyError: 'png' on some datasets when streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b` as the grouping `__key__`, and `png` as the additional key to be added to the example\r\n\r\nTo get the expected behavior, the basenames of the files within the TARs should be fixed so that they only contain a single dot, the one separating the file extension.", "I reopen it because I think we should try to give a clearer error message with a specific error code.\r\n\r\nFor now, it's hard for the user to understand where the error comes from (not everybody knows the subtleties of the webdataset filename structure).\r\n\r\n(we can transfer it to https://github.com/huggingface/dataset-viewer if it fits better there)", "same with .jpg -> https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions\r\n\r\n```\r\nError code: DatasetGenerationError\r\nException: DatasetGenerationError\r\nMessage: An error occurred while generating the dataset\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1748, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in wrapped\r\n for item in generator(*args, **kwargs):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py\", line 109, in _generate_examples\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n KeyError: 'jpg'\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1316, in compute_config_parquet_and_info_response\r\n parquet_operations, partial = stream_convert_to_parquet(\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 909, in stream_convert_to_parquet\r\n builder._prepare_split(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1627, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1784, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n", "More details in the spec (https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit#heading=h.hkptaq2kct2s)\r\n\r\n> The prefix of a file is all directory components of the file plus the file name component up to the first “.” in the file name.\r\n> The last extension (i.e., the portion after the last “.”) in a file name determines the file type.\r\n\r\n> Example:\r\n\timages17/image194.left.jpg\r\n\timages17/image194.right.jpg\r\n\timages17/image194.json\r\n\timages17/image12.left.jpg\r\n\timages17/image12.json\r\n\timages17/image12.right.jpg\r\n\timages3/image1459.left.jpg\r\n> \t…\r\n> When reading this with a WebDataset library, you would get the 
following two dictionaries back in sequence:\r\n\r\n { “__key__”: “images17/image194”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n { “__key__”: “images17/image12”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n", "OK, the issue is different in the latter case: some files are suffixed as `.jpeg`, and others as `.jpg` :)\r\n\r\nIs it a limitation of the webdataset format, or of the datasets library @lhoestq? And could we be able to give a clearer error?" ]
2024-05-07T13:09:02
2024-05-14T20:34:05
null
MEMBER
null
reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1 ```python >>> from datasets import load_dataset >>> ds = load_dataset("tbone5563/tar_images") Downloading data: 100%  1.41G/1.41G [00:48<00:00, 17.2MB/s] Downloading data: 100%  619M/619M [00:11<00:00, 57.4MB/s] Generating train split:   970/0 [00:02<00:00, 534.94 examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1747 _time = time.time() -> 1748 for key, record in generator: 1749 if max_shard_size is not None and writer._num_bytes > max_shard_size: 7 frames [/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py](https://localhost:8080/#) in _generate_examples(self, tar_paths, tar_iterators) 108 for field_name in image_field_names + audio_field_names: --> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} 110 yield f"{tar_idx}_{example_idx}", example KeyError: 'png' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) [<ipython-input-2-8e0fbb7badc9>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("tbone5563/tar_images") [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2607 2608 # Download and prepare data -> 2609 builder_instance.download_and_prepare( 2610 download_config=download_config, 2611 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 1025 if num_proc is not None: 1026 prepare_split_kwargs["num_proc"] = num_proc -> 1027 self._download_and_prepare( 1028 dl_manager=dl_manager, 1029 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1787 1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1789 super()._download_and_prepare( 1790 dl_manager, 1791 verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1120 try: 1121 # Prepare split will record examples associated to the split -> 1122 self._prepare_split(split_generator, **prepare_split_kwargs) 1123 except OSError as e: 1124 raise OSError( [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1625 job_id = 0 1626 with pbar: -> 1627 for job_id, 
done, content in self._prepare_split_single( 1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1629 ): [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1783 e = e.__context__ -> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1785 1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6880/timeline
null
reopened
null
null
false
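The key-splitting behavior discussed in the comments of the issue above can be illustrated in a few lines of Python. This is not datasets' actual implementation, only a sketch of the WebDataset naming rule quoted from the spec: the grouping `__key__` is everything up to the first dot in the basename, and the remainder becomes the field name.

```python
# A small illustration (not datasets' actual code) of how the WebDataset spec
# derives the grouping key: the prefix is everything up to the FIRST dot in
# the basename, and the remainder becomes the field name.
import os


def split_webdataset_name(path: str) -> tuple[str, str]:
    dirname, basename = os.path.split(path)
    prefix, _, field = basename.partition(".")
    return os.path.join(dirname, prefix), field


# Well-formed name: a single dot separating the extension.
print(split_webdataset_name("images17/image194.json"))
# -> ('images17/image194', 'json')

# Malformed name from the report: extra dots shift the key boundary, so 'png'
# never appears as a field name and the loader raises KeyError: 'png'.
print(split_webdataset_name("15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png"))
# -> ('15_Cohen_1-s2', '0-S0929664620300449-gr3_lrg-b.png')
```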
https://api.github.com/repos/huggingface/datasets/issues/6879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6879/comments
https://api.github.com/repos/huggingface/datasets/issues/6879/events
https://github.com/huggingface/datasets/issues/6879
2,282,968,259
I_kwDODunzps6IE1TD
6,879
Batched mapping does not raise an error if values for an existing column are empty
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-07T11:02:40
2024-05-07T11:02:40
null
NONE
null
### Describe the bug Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised. This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows. ### Steps to reproduce the bug MWE: ```python import datasets data = datasets.Dataset.from_dict({"test": [1]}) def mapping_fn(examples): return {"test": [], "y": [1]} data = data.map(mapping_fn, batched=True) print(len(data)) ``` Note that the error is raised correctly when returning `"x": []`, and also when returning `"test": [1,2]`. ### Expected behavior Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`. Any exception would be acceptable. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6879/timeline
null
null
null
null
false
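Until the library raises on this case itself, a defensive wrapper is one way to fail loudly. The sketch below assumes nothing about datasets' internals; `checked` is a hypothetical helper name, not a datasets API.

```python
# A defensive sketch: wrap a batched mapping function and raise if the
# returned columns disagree in length, instead of letting an empty list for
# an existing column silently shrink the dataset to 0 rows.
import datasets


def checked(fn):  # hypothetical helper, not a datasets API
    def wrapper(examples):
        out = fn(examples)
        lengths = {name: len(column) for name, column in out.items()}
        if len(set(lengths.values())) > 1:
            raise ValueError(f"Inconsistent column lengths: {lengths}")
        return out

    return wrapper


def mapping_fn(examples):
    return {"test": [], "y": [1]}  # buggy: mismatched lengths


data = datasets.Dataset.from_dict({"test": [1]})
data = data.map(checked(mapping_fn), batched=True)  # raises ValueError
```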
https://api.github.com/repos/huggingface/datasets/issues/6878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6878/comments
https://api.github.com/repos/huggingface/datasets/issues/6878/events
https://github.com/huggingface/datasets/pull/6878
2,282,879,491
PR_kwDODunzps5uviBh
6,878
Create function to convert to parquet
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005519 / 0.011353 (-0.005834) | 0.003877 / 0.011008 (-0.007131) | 0.063989 / 0.038508 (0.025480) | 0.032348 / 0.023109 (0.009239) | 0.238288 / 0.275898 (-0.037611) | 0.265337 / 0.323480 (-0.058143) | 0.004363 / 0.007986 (-0.003623) | 0.002755 / 0.004328 (-0.001574) | 0.049836 / 0.004250 (0.045585) | 0.048456 / 0.037052 (0.011403) | 0.246526 / 0.258489 (-0.011963) | 0.280753 / 0.293841 (-0.013088) | 0.027721 / 0.128546 (-0.100825) | 0.011031 / 0.075646 (-0.064615) | 0.204168 / 0.419271 (-0.215104) | 0.036203 / 0.043533 (-0.007330) | 0.238282 / 0.255139 (-0.016857) | 0.259608 / 0.283200 (-0.023591) | 0.017781 / 0.141683 (-0.123902) | 1.147821 / 1.452155 (-0.304334) | 1.194855 / 1.492716 (-0.297861) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102837 / 0.018006 (0.084831) | 0.312300 / 0.000490 (0.311811) | 0.000224 / 0.000200 (0.000024) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019410 / 0.037411 (-0.018001) | 0.065114 / 0.014526 (0.050588) | 0.076828 / 0.176557 (-0.099728) | 0.121741 / 0.737135 (-0.615394) | 0.079864 / 0.296338 (-0.216474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287773 / 0.215209 (0.072564) | 2.848936 / 2.077655 (0.771281) | 1.543819 / 1.504120 (0.039700) | 1.412708 / 1.541195 (-0.128487) | 1.454685 / 1.468490 (-0.013805) | 0.580155 / 4.584777 (-4.004622) | 2.372783 / 3.745712 (-1.372929) | 2.910514 / 5.269862 (-2.359347) | 1.813542 / 4.565676 (-2.752134) | 0.064569 / 0.424275 (-0.359706) | 0.005434 / 0.007607 (-0.002173) | 0.339309 / 0.226044 (0.113265) | 3.329972 / 2.268929 (1.061043) | 1.827597 / 55.444624 (-53.617028) | 1.592324 / 6.876477 (-5.284152) | 1.619743 / 2.142072 (-0.522329) | 0.659358 / 4.805227 (-4.145869) | 0.119887 / 6.500664 (-6.380777) | 0.043649 / 0.075469 (-0.031821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984563 / 1.841788 (-0.857225) | 12.395302 / 8.074308 (4.320994) | 9.904944 / 10.191392 (-0.286448) | 0.136141 / 0.680424 (-0.544282) | 0.014779 / 0.534201 (-0.519422) | 0.286146 / 0.579283 (-0.293137) | 0.265392 / 0.434364 (-0.168972) | 0.329484 / 0.540337 (-0.210854) | 0.425530 / 1.386936 (-0.961406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.004068 / 0.011008 (-0.006940) | 0.052281 / 0.038508 (0.013773) | 0.034907 / 0.023109 (0.011798) | 0.269551 / 0.275898 (-0.006347) | 0.292390 / 0.323480 (-0.031090) | 0.004340 / 0.007986 (-0.003646) | 0.002864 / 0.004328 (-0.001464) | 0.051466 / 0.004250 (0.047216) | 0.046410 / 0.037052 (0.009358) | 0.280103 / 0.258489 (0.021614) | 0.310616 / 0.293841 (0.016775) | 0.031044 / 0.128546 (-0.097502) | 0.011004 / 0.075646 (-0.064643) | 0.059955 / 0.419271 (-0.359316) | 0.034156 / 0.043533 (-0.009377) | 0.268113 / 0.255139 (0.012974) | 0.283569 / 0.283200 (0.000369) | 0.019758 / 0.141683 (-0.121925) | 1.155583 / 1.452155 (-0.296572) | 1.225611 / 1.492716 (-0.267106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.104302 / 0.018006 (0.086295) | 0.307324 / 0.000490 (0.306834) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023672 / 0.037411 (-0.013739) | 0.081110 / 0.014526 (0.066584) | 0.091783 / 0.176557 (-0.084773) | 0.131738 / 0.737135 (-0.605397) | 0.092391 / 0.296338 (-0.203948) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289341 / 0.215209 (0.074132) | 2.849894 / 2.077655 (0.772239) | 1.539679 / 1.504120 (0.035559) | 1.417975 / 1.541195 (-0.123220) | 1.473631 / 1.468490 (0.005141) | 0.583013 / 4.584777 (-4.001764) | 0.960106 / 3.745712 (-2.785606) | 2.962785 / 5.269862 (-2.307077) | 1.827539 / 4.565676 (-2.738138) | 0.063875 / 0.424275 (-0.360400) | 0.005251 / 0.007607 (-0.002356) | 0.347127 / 0.226044 (0.121082) | 3.417364 / 2.268929 (1.148435) | 1.965901 / 55.444624 (-53.478723) | 1.632337 / 6.876477 (-5.244140) | 1.683100 / 2.142072 (-0.458972) | 0.664951 / 4.805227 (-4.140277) | 0.119046 / 6.500664 (-6.381618) | 0.042828 / 0.075469 (-0.032641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999569 / 1.841788 (-0.842218) | 13.366482 / 8.074308 (5.292174) | 10.635396 / 10.191392 (0.444004) | 0.133840 / 0.680424 (-0.546584) | 0.016232 / 0.534201 (-0.517969) | 0.292764 / 0.579283 (-0.286519) | 0.128558 / 0.434364 (-0.305806) | 0.405596 / 0.540337 (-0.134741) | 0.429633 / 1.386936 (-0.957303) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4d92856bbfda0d48d07e82bb520d9434d20fae4b \"CML watermark\")\n" ]
2024-05-07T10:27:07
2024-05-16T14:46:44
2024-05-16T14:38:23
MEMBER
null
Analogously to `delete_from_hub`, this PR: - creates the Python function `convert_to_parquet` - makes the corresponding CLI command use that function. This way, the functionality can be used both from a terminal and from a Python console. This PR also implements a test for the `convert_to_parquet` function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6878/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6878", "html_url": "https://github.com/huggingface/datasets/pull/6878", "diff_url": "https://github.com/huggingface/datasets/pull/6878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6878.patch", "merged_at": "2024-05-16T14:38:22" }
true
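The exact signature of the new `convert_to_parquet` function is not shown in the PR description above, so the sketch below only illustrates the underlying operation with the long-standing `Dataset.to_parquet` API; the repository id is a placeholder.

```python
# Illustration of a dataset-to-Parquet conversion using the stable
# Dataset.to_parquet API (not the PR's convert_to_parquet function).
from datasets import load_dataset

ds = load_dataset("some_user/some_dataset", split="train")  # placeholder repo id
ds.to_parquet("train.parquet")  # write the split as a single Parquet file
```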
https://api.github.com/repos/huggingface/datasets/issues/6877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6877/comments
https://api.github.com/repos/huggingface/datasets/issues/6877/events
https://github.com/huggingface/datasets/issues/6877
2,282,068,337
I_kwDODunzps6IBZlx
6,877
OSError: [Errno 24] Too many open files
{ "login": "loicmagne", "id": 53355258, "node_id": "MDQ6VXNlcjUzMzU1MjU4", "avatar_url": "https://avatars.githubusercontent.com/u/53355258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loicmagne", "html_url": "https://github.com/loicmagne", "followers_url": "https://api.github.com/users/loicmagne/followers", "following_url": "https://api.github.com/users/loicmagne/following{/other_user}", "gists_url": "https://api.github.com/users/loicmagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/loicmagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loicmagne/subscriptions", "organizations_url": "https://api.github.com/users/loicmagne/orgs", "repos_url": "https://api.github.com/users/loicmagne/repos", "events_url": "https://api.github.com/users/loicmagne/events{/privacy}", "received_events_url": "https://api.github.com/users/loicmagne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "ulimit -n 8192 can solve this problem", "> ulimit -n 8192 can solve this problem\r\n\r\nWould there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library", "> > ulimit -n 8192 can solve this problem\r\n> \r\n> Would there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library\r\n\r\n I think we could modify the _prepare_split_single function", "I fixed it with https://github.com/huggingface/datasets/pull/6893, feel free to re-open if you're still having the issue :)", "> I fixed it with #6893, feel free to re-open if you're still having the issue :)\r\n\r\nThanks a lot!" ]
2024-05-07T01:15:09
2024-05-13T15:36:08
2024-05-13T13:01:55
NONE
null
### Describe the bug I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb) When trying to load it using the `load_dataset` function I get the following error ```python >>> from datasets import load_dataset >>> d = load_dataset('mteb/biblenlp-corpus-mmteb') Downloading readme: 100%|████████████████████████████████████████████████████████████████████████| 201k/201k [00:00<00:00, 1.07MB/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 1069.15it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 436182.33it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 2228.75it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 646478.73it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 831032.24it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 517645.51it/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:33<00:00, 24.87files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:30<00:00, 27.48files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:30<00:00, 26.94files/s] Generating train split: 1571592 examples [00:03, 461438.97 examples/s] Generating test split: 11163 examples [00:00, 118190.72 examples/s] Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1995, in _prepare_split_single for _, table in generator: File ".env/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables with open(file, "rb") as f: ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/streaming.py", line 75, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 1224, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/filesystems/compression.py", line 81, in _open return self.file.open() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File 
".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 197, in _open return LocalFileOpener(path, mode, fs=self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 322, in __init__ self._open() File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 327, in _open self.f = open(self.path, mode=self.mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/downloads/3a347186abfc0f9c924dde0221d246db758c7232c0101523f04a87c17d696618' The above exception was the direct cause of the following exception: Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 981, in incomplete_dir yield tmp_dir File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".env/lib/python3.12/site-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1007, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__ self.gen.throw(value) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 988, in incomplete_dir shutil.rmtree(tmp_dir) File "/usr/lib/python3.12/shutil.py", line 785, in rmtree _rmtree_safe_fd(fd, path, onexc) File "/usr/lib/python3.12/shutil.py", line 661, in _rmtree_safe_fd onexc(os.scandir, path, err) File "/usr/lib/python3.12/shutil.py", line 657, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: ^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/mteb___biblenlp-corpus-mmteb/default/0.0.0/3912ed967b0834547f35b2da9470c4976b357c9a.incomplete' ``` I looked for the maximum number of open files on my machine (Ubuntu 24.04) and it seems to be 1024, but even when I try to load a single split (`load_dataset('mteb/biblenlp-corpus-mmteb', split='train')`) I get the same error ### Steps to reproduce the bug ```python from datasets import load_dataset d = load_dataset('mteb/biblenlp-corpus-mmteb') ``` ### Expected behavior Load the dataset without error ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6877/timeline
null
completed
null
null
false
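The `ulimit -n 8192` workaround quoted in the comments above can also be applied from inside a Python process via the standard-library `resource` module (Unix-only); 8192 is simply the value suggested in the thread, capped at the hard limit.

```python
# Raise the soft open-file limit from Python, equivalent to `ulimit -n 8192`.
# Unix-only; an unprivileged process cannot raise the soft limit above the
# hard limit, hence the cap.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print(resource.getrlimit(resource.RLIMIT_NOFILE))
```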
https://api.github.com/repos/huggingface/datasets/issues/6876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6876/comments
https://api.github.com/repos/huggingface/datasets/issues/6876/events
https://github.com/huggingface/datasets/pull/6876
2,281,450,743
PR_kwDODunzps5uqs46
6,876
Unpin hfh
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6876). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "transformers 4.40.2 was release yesterday but not sure if it contains the fix", "@lhoestq yes I knew transformers 4.40.2 was released yesterday, but I had checked that it does not contain the fix: only 2 bug fixes. That is why our CI continues failing in this PR. We will have to wait until the next minor version.", "> If we urgently need some dev feature for dataset-viewer, I would suggest pushing the feature (cherry-picked) to a dedicated branch with 2.19.1 as its starting point (without opening a PR), and install datasets from that branch.\r\n\r\nI have done so:\r\n- Created a branch from 2.19.1: https://github.com/huggingface/datasets/tree/datasets-2.19.1-hotfix\r\n- Cherry-picked the commit in this PR: https://github.com/huggingface/datasets/commit/3638183e2f7e0dce8924e46e7cc21bf6d5d7adfb\r\n- Opened a PR in dataset-viewer to update datasets to this revision: https://github.com/huggingface/dataset-viewer/pull/2783" ]
2024-05-06T18:10:49
2024-05-07T13:24:08
null
MEMBER
null
Needed to use those in dataset-viewer: - dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't spam the hub with /paths-info requests - dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write overly large logs in the viewer close https://github.com/huggingface/datasets/issues/6863
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6876/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6876", "html_url": "https://github.com/huggingface/datasets/pull/6876", "diff_url": "https://github.com/huggingface/datasets/pull/6876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6876.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6875/comments
https://api.github.com/repos/huggingface/datasets/issues/6875/events
https://github.com/huggingface/datasets/pull/6875
2,281,428,826
PR_kwDODunzps5uqoJ_
6,875
Shorten long logs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6875). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005191 / 0.011353 (-0.006162) | 0.003691 / 0.011008 (-0.007317) | 0.063511 / 0.038508 (0.025003) | 0.031849 / 0.023109 (0.008740) | 0.251691 / 0.275898 (-0.024207) | 0.276585 / 0.323480 (-0.046895) | 0.004080 / 0.007986 (-0.003906) | 0.002751 / 0.004328 (-0.001577) | 0.049572 / 0.004250 (0.045322) | 0.043010 / 0.037052 (0.005957) | 0.267161 / 0.258489 (0.008672) | 0.301054 / 0.293841 (0.007213) | 0.028068 / 0.128546 (-0.100479) | 0.010479 / 0.075646 (-0.065167) | 0.208458 / 0.419271 (-0.210814) | 0.035688 / 0.043533 (-0.007845) | 0.255985 / 0.255139 (0.000846) | 0.296016 / 0.283200 (0.012817) | 0.017041 / 0.141683 (-0.124642) | 1.168626 / 1.452155 (-0.283528) | 1.173419 / 1.492716 (-0.319297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092975 / 0.018006 (0.074969) | 0.302309 / 0.000490 (0.301820) | 0.000219 / 0.000200 (0.000020) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018809 / 0.037411 (-0.018602) | 0.062606 / 0.014526 (0.048080) | 0.073820 / 0.176557 (-0.102736) | 0.119451 / 0.737135 (-0.617684) | 0.075086 / 0.296338 (-0.221253) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280342 / 0.215209 (0.065133) | 2.742477 / 2.077655 (0.664822) | 1.409221 / 1.504120 (-0.094899) | 1.291679 / 1.541195 (-0.249516) | 1.316628 / 1.468490 (-0.151862) | 0.554942 / 4.584777 (-4.029835) | 2.363301 / 3.745712 (-1.382411) | 2.775766 / 5.269862 (-2.494096) | 1.729123 / 4.565676 (-2.836554) | 0.061254 / 0.424275 (-0.363021) | 0.005444 / 0.007607 (-0.002163) | 0.330450 / 0.226044 (0.104406) | 3.249453 / 2.268929 (0.980524) | 1.782415 / 55.444624 (-53.662210) | 1.489778 / 6.876477 (-5.386699) | 1.521809 / 2.142072 (-0.620263) | 0.626622 / 4.805227 (-4.178605) | 0.117320 / 6.500664 (-6.383344) | 0.043110 / 0.075469 (-0.032359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981954 / 1.841788 (-0.859834) | 11.706373 / 8.074308 (3.632064) | 9.870815 / 10.191392 (-0.320577) | 0.141768 / 0.680424 (-0.538656) | 0.014455 / 0.534201 (-0.519746) | 0.287451 / 0.579283 (-0.291832) | 0.264559 / 0.434364 (-0.169805) | 0.326321 / 0.540337 (-0.214017) | 0.424084 / 1.386936 (-0.962852) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005461 / 0.011353 (-0.005892) | 0.003804 / 0.011008 (-0.007204) | 0.049872 / 0.038508 (0.011364) | 0.029543 / 0.023109 (0.006433) | 0.260772 / 0.275898 (-0.015126) | 0.291571 / 0.323480 (-0.031909) | 0.004305 / 0.007986 (-0.003681) | 0.002845 / 0.004328 (-0.001484) | 0.049129 / 0.004250 (0.044879) | 0.040743 / 0.037052 (0.003690) | 0.276497 / 0.258489 (0.018008) | 0.303126 / 0.293841 (0.009285) | 0.030423 / 0.128546 (-0.098123) | 0.010660 / 0.075646 (-0.064986) | 0.058857 / 0.419271 (-0.360415) | 0.033185 / 0.043533 (-0.010348) | 0.260452 / 0.255139 (0.005313) | 0.282648 / 0.283200 (-0.000552) | 0.018025 / 0.141683 (-0.123658) | 1.147432 / 1.452155 (-0.304723) | 1.192034 / 1.492716 (-0.300683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093094 / 0.018006 (0.075088) | 0.301608 / 0.000490 (0.301119) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022071 / 0.037411 (-0.015340) | 0.075244 / 0.014526 (0.060718) | 0.087157 / 0.176557 (-0.089400) | 0.127339 / 0.737135 (-0.609797) | 0.088527 / 0.296338 (-0.207812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293033 / 0.215209 (0.077824) | 2.839842 / 2.077655 (0.762188) | 1.544730 / 1.504120 (0.040610) | 1.421727 / 1.541195 (-0.119468) | 1.446054 / 1.468490 (-0.022436) | 0.573285 / 4.584777 (-4.011492) | 0.980977 / 3.745712 (-2.764735) | 2.829034 / 5.269862 (-2.440828) | 1.800747 / 4.565676 (-2.764930) | 0.064916 / 0.424275 (-0.359360) | 0.005099 / 0.007607 (-0.002508) | 0.348054 / 0.226044 (0.122009) | 3.449111 / 2.268929 (1.180182) | 1.900115 / 55.444624 (-53.544509) | 1.620564 / 6.876477 (-5.255913) | 1.675474 / 2.142072 (-0.466598) | 0.652302 / 4.805227 (-4.152925) | 0.118438 / 6.500664 (-6.382226) | 0.041779 / 0.075469 (-0.033690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.003703 / 1.841788 (-0.838085) | 12.466921 / 8.074308 (4.392613) | 9.800419 / 10.191392 (-0.390973) | 0.131567 / 0.680424 (-0.548856) | 0.015684 / 0.534201 (-0.518517) | 0.288754 / 0.579283 (-0.290530) | 0.126435 / 0.434364 (-0.307929) | 0.398608 / 0.540337 (-0.141729) | 0.427043 / 1.386936 (-0.959894) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#865e9b1f2ecbe934be49a2d8d46451aba4af3485 \"CML watermark\")\n" ]
2024-05-06T17:57:07
2024-05-07T12:31:46
2024-05-07T12:25:45
MEMBER
null
Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly). In that case, we should still be able to log something readable.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6875/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6875", "html_url": "https://github.com/huggingface/datasets/pull/6875", "diff_url": "https://github.com/huggingface/datasets/pull/6875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6875.patch", "merged_at": "2024-05-07T12:25:45" }
true
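The idea behind this PR can be sketched generically: clip very long reprs before logging so malformed files cannot produce unreadable log lines. This is not the PR's actual code, and the 1000-character budget is an assumption.

```python
# A generic log-shortening sketch (not the PR's implementation).
def shorten_for_log(obj, max_len: int = 1000) -> str:
    text = repr(obj)
    return text if len(text) <= max_len else text[: max_len - 3] + "..."


print(shorten_for_log({"column": "x" * 10_000}))  # prints a ~1000-char line
```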
https://api.github.com/repos/huggingface/datasets/issues/6874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6874/comments
https://api.github.com/repos/huggingface/datasets/issues/6874/events
https://github.com/huggingface/datasets/pull/6874
2,280,717,233
PR_kwDODunzps5uoOk-
6,874
Use pandas ujson in JSON loader to improve performance
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Before pandas-2.2.0, the function `ujson_loads` was named `loads`: https://github.com/pandas-dev/pandas/blob/v2.1.0/pandas/io/json/__init__.py#L5\r\n```python\r\nimport ujson_loads as loads\r\n```", "Thanks for your review, @lhoestq.\r\n\r\nThe performance gain depends on many factors, such as underlying data structures, file size...\r\n\r\nIn my benchmark, the performance gain was around 8.1%. ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005428 / 0.011353 (-0.005925) | 0.003682 / 0.011008 (-0.007326) | 0.064360 / 0.038508 (0.025852) | 0.032044 / 0.023109 (0.008934) | 0.238281 / 0.275898 (-0.037617) | 0.267542 / 0.323480 (-0.055937) | 0.003152 / 0.007986 (-0.004834) | 0.003292 / 0.004328 (-0.001037) | 0.050157 / 0.004250 (0.045906) | 0.048311 / 0.037052 (0.011259) | 0.253743 / 0.258489 (-0.004746) | 0.282729 / 0.293841 (-0.011112) | 0.027271 / 0.128546 (-0.101275) | 0.010238 / 0.075646 (-0.065408) | 0.208179 / 0.419271 (-0.211092) | 0.035607 / 0.043533 (-0.007925) | 0.246750 / 0.255139 (-0.008389) | 0.263362 / 0.283200 (-0.019837) | 0.018475 / 0.141683 (-0.123208) | 1.152978 / 1.452155 (-0.299177) | 1.158545 / 1.492716 (-0.334171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096645 / 0.018006 (0.078639) | 0.313186 / 0.000490 (0.312696) | 0.000209 / 0.000200 (0.000009) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018800 / 0.037411 (-0.018612) | 0.065833 / 0.014526 (0.051307) | 0.073668 / 0.176557 (-0.102888) | 0.120608 / 0.737135 (-0.616527) | 0.074936 / 0.296338 (-0.221403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281596 / 0.215209 (0.066387) | 2.814537 / 2.077655 (0.736882) | 1.482781 / 1.504120 (-0.021338) | 1.349770 / 1.541195 (-0.191424) | 1.371571 / 1.468490 (-0.096919) | 0.555068 / 4.584777 (-4.029709) | 2.369588 / 3.745712 (-1.376124) | 2.742771 / 5.269862 (-2.527091) | 1.711519 / 4.565676 (-2.854158) | 0.060921 / 0.424275 (-0.363354) | 0.005263 / 0.007607 (-0.002344) | 0.333721 / 0.226044 (0.107677) | 3.329598 / 2.268929 (1.060669) | 1.806983 / 55.444624 (-53.637641) | 1.515730 / 6.876477 (-5.360746) | 1.557622 / 2.142072 (-0.584451) | 0.619564 / 4.805227 (-4.185663) | 0.115503 / 6.500664 (-6.385161) | 0.041728 / 0.075469 (-0.033741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967300 / 1.841788 (-0.874487) | 11.295081 / 8.074308 (3.220773) | 9.535119 / 10.191392 (-0.656273) | 0.140232 / 0.680424 (-0.540192) | 0.013774 / 0.534201 (-0.520427) | 0.281847 / 0.579283 (-0.297436) | 0.260076 / 0.434364 (-0.174288) | 0.323657 / 0.540337 (-0.216681) | 0.421116 / 1.386936 (-0.965820) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005276 / 0.011353 (-0.006077) | 0.003639 / 0.011008 (-0.007370) | 0.050451 / 0.038508 (0.011943) | 0.032787 / 0.023109 (0.009678) | 0.267029 / 0.275898 (-0.008869) | 0.299899 / 0.323480 (-0.023581) | 0.004177 / 0.007986 (-0.003809) | 0.002697 / 0.004328 (-0.001631) | 0.049631 / 0.004250 (0.045380) | 0.041942 / 0.037052 (0.004889) | 0.279249 / 0.258489 (0.020760) | 0.306512 / 0.293841 (0.012671) | 0.029340 / 0.128546 (-0.099207) | 0.010118 / 0.075646 (-0.065528) | 0.058243 / 0.419271 (-0.361028) | 0.033871 / 0.043533 
(-0.009662) | 0.265949 / 0.255139 (0.010810) | 0.284263 / 0.283200 (0.001064) | 0.017351 / 0.141683 (-0.124332) | 1.107081 / 1.452155 (-0.345074) | 1.184946 / 1.492716 (-0.307770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095621 / 0.018006 (0.077614) | 0.304758 / 0.000490 (0.304269) | 0.000204 / 0.000200 (0.000004) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022444 / 0.037411 (-0.014967) | 0.075894 / 0.014526 (0.061368) | 0.089077 / 0.176557 (-0.087480) | 0.126960 / 0.737135 (-0.610176) | 0.089120 / 0.296338 (-0.207218) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289885 / 0.215209 (0.074676) | 2.843219 / 2.077655 (0.765565) | 1.582704 / 1.504120 (0.078584) | 1.426551 / 1.541195 (-0.114644) | 1.431591 / 1.468490 (-0.036899) | 0.577265 / 4.584777 (-4.007512) | 0.956040 / 3.745712 (-2.789673) | 2.753517 / 5.269862 (-2.516345) | 1.732503 / 4.565676 (-2.833173) | 0.063511 / 0.424275 (-0.360764) | 0.005089 / 0.007607 (-0.002518) | 0.339205 / 0.226044 (0.113160) | 3.339148 / 2.268929 (1.070219) | 1.901543 / 55.444624 (-53.543081) | 1.618392 / 6.876477 (-5.258084) | 1.612885 / 2.142072 (-0.529188) | 0.656563 / 4.805227 (-4.148664) | 0.116740 / 6.500664 (-6.383924) | 0.040497 / 0.075469 (-0.034973) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005568 / 1.841788 (-0.836219) | 11.872770 / 8.074308 (3.798462) | 9.867118 / 10.191392 (-0.324274) | 0.130193 / 0.680424 (-0.550231) | 0.022857 / 0.534201 (-0.511344) | 0.281908 / 0.579283 (-0.297375) | 0.125978 / 0.434364 (-0.308386) | 0.382604 / 0.540337 (-0.157733) | 0.415078 / 1.386936 (-0.971858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1eabcfaf87368a5cbfa0341aa2223f457508b3e9 \"CML watermark\")\n" ]
2024-05-06T12:01:27
2024-05-17T16:28:29
2024-05-17T16:22:27
MEMBER
null
Use pandas ujson in JSON loader to improve performance. Note that `datasets` has `pandas` as a required dependency, and `pandas` includes `ujson` in `pd.io.json.ujson_loads`. Fix #6867. CC: @natolambert
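For illustration, a minimal sketch of a version-compatible wrapper around pandas' bundled ujson, assuming only what the PR thread above states (the parser is exposed as `pd.io.json.ujson_loads` from pandas 2.2.0 and was exported as `loads` before); the wrapper name simply mirrors the newer pandas spelling:

```python
import pandas as pd

def ujson_loads(*args, **kwargs):
    """Parse JSON with pandas' bundled ujson, across pandas versions."""
    try:
        # pandas >= 2.2.0 exposes the bundled ujson parser under this name
        return pd.io.json.ujson_loads(*args, **kwargs)
    except AttributeError:
        # before pandas 2.2.0 the same function was exported as `loads`
        return pd.io.json.loads(*args, **kwargs)

assert ujson_loads('{"a": 1}') == {"a": 1}
```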
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6874/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6874/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6874", "html_url": "https://github.com/huggingface/datasets/pull/6874", "diff_url": "https://github.com/huggingface/datasets/pull/6874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6874.patch", "merged_at": "2024-05-17T16:22:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/6873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6873/comments
https://api.github.com/repos/huggingface/datasets/issues/6873/events
https://github.com/huggingface/datasets/pull/6873
2,280,463,182
PR_kwDODunzps5unXnq
6,873
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6873). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005301 / 0.011353 (-0.006052) | 0.003633 / 0.011008 (-0.007375) | 0.063414 / 0.038508 (0.024906) | 0.042406 / 0.023109 (0.019297) | 0.253414 / 0.275898 (-0.022484) | 0.276811 / 0.323480 (-0.046668) | 0.003148 / 0.007986 (-0.004837) | 0.002614 / 0.004328 (-0.001715) | 0.049208 / 0.004250 (0.044958) | 0.045819 / 0.037052 (0.008767) | 0.268027 / 0.258489 (0.009538) | 0.298821 / 0.293841 (0.004980) | 0.028460 / 0.128546 (-0.100086) | 0.010671 / 0.075646 (-0.064975) | 0.208602 / 0.419271 (-0.210669) | 0.036057 / 0.043533 (-0.007476) | 0.256079 / 0.255139 (0.000940) | 0.277040 / 0.283200 (-0.006160) | 0.019018 / 0.141683 (-0.122665) | 1.147070 / 1.452155 (-0.305085) | 1.175838 / 1.492716 (-0.316878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092216 / 0.018006 (0.074210) | 0.304774 / 0.000490 (0.304284) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018242 / 0.037411 (-0.019170) | 0.061088 / 0.014526 (0.046562) | 0.074517 / 0.176557 (-0.102039) | 0.120444 / 0.737135 (-0.616691) | 0.074628 / 0.296338 (-0.221710) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283914 / 0.215209 (0.068705) | 2.859123 / 2.077655 (0.781469) | 1.495152 / 1.504120 (-0.008967) | 1.395514 / 1.541195 (-0.145681) | 1.454076 / 1.468490 (-0.014414) | 0.568758 / 4.584777 (-4.016019) | 2.461304 / 3.745712 (-1.284408) | 2.836192 / 5.269862 (-2.433670) | 1.815463 / 4.565676 (-2.750213) | 0.065762 / 0.424275 (-0.358513) | 0.006872 / 0.007607 (-0.000736) | 0.339304 / 0.226044 (0.113260) | 3.326544 / 2.268929 (1.057616) | 1.847970 / 55.444624 (-53.596654) | 1.572667 / 6.876477 (-5.303809) | 1.595717 / 2.142072 (-0.546355) | 0.644196 / 4.805227 (-4.161031) | 0.120320 / 6.500664 (-6.380344) | 0.043334 / 0.075469 (-0.032135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965807 / 1.841788 (-0.875981) | 11.628715 / 8.074308 (3.554406) | 9.485618 / 10.191392 (-0.705774) | 0.152387 / 0.680424 (-0.528037) | 0.013852 / 0.534201 (-0.520349) | 0.285833 / 0.579283 (-0.293450) | 0.263692 / 0.434364 (-0.170672) | 0.323086 / 0.540337 (-0.217251) | 0.418178 / 1.386936 (-0.968758) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005505 / 0.011353 (-0.005848) | 0.003630 / 0.011008 (-0.007378) | 0.049780 / 0.038508 (0.011272) | 0.030469 / 0.023109 (0.007359) | 0.270052 / 0.275898 (-0.005846) | 0.294370 / 0.323480 (-0.029110) | 0.004207 / 0.007986 (-0.003779) | 0.002720 / 0.004328 (-0.001609) | 0.048952 / 0.004250 (0.044701) | 0.041006 / 0.037052 (0.003953) | 0.281585 / 0.258489 (0.023096) | 0.310600 / 0.293841 (0.016759) | 0.029457 / 0.128546 (-0.099089) | 0.010508 / 0.075646 (-0.065138) | 0.058090 / 0.419271 (-0.361181) | 0.032814 / 0.043533 (-0.010718) | 0.272755 / 0.255139 (0.017616) | 0.292154 / 0.283200 (0.008954) | 0.018312 / 0.141683 (-0.123371) | 1.177199 / 1.452155 (-0.274955) | 1.238803 / 1.492716 (-0.253913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093889 / 0.018006 (0.075883) | 0.303054 / 0.000490 (0.302564) | 0.000204 / 0.000200 (0.000004) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022556 / 0.037411 (-0.014856) | 0.075951 / 0.014526 (0.061425) | 0.086824 / 0.176557 (-0.089732) | 0.128091 / 0.737135 (-0.609044) | 0.088146 / 0.296338 (-0.208192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292563 / 0.215209 (0.077354) | 2.882656 / 2.077655 (0.805001) | 1.559814 / 1.504120 (0.055695) | 1.443760 / 1.541195 (-0.097435) | 1.460967 / 1.468490 (-0.007523) | 0.567812 / 4.584777 (-4.016965) | 0.964407 / 3.745712 (-2.781305) | 2.819782 / 5.269862 (-2.450079) | 1.733334 / 4.565676 (-2.832343) | 0.064745 / 0.424275 (-0.359530) | 0.005178 / 0.007607 (-0.002429) | 0.345322 / 0.226044 (0.119278) | 3.407204 / 2.268929 (1.138275) | 1.919337 / 55.444624 (-53.525288) | 1.643463 / 6.876477 (-5.233013) | 1.682191 / 2.142072 (-0.459881) | 0.639432 / 4.805227 (-4.165795) | 0.115659 / 6.500664 (-6.385005) | 0.041202 / 0.075469 (-0.034267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004664 / 1.841788 (-0.837123) | 12.043460 / 8.074308 (3.969152) | 9.856431 / 10.191392 (-0.334961) | 0.131351 / 0.680424 (-0.549072) | 0.015800 / 0.534201 (-0.518401) | 0.288211 / 0.579283 (-0.291072) | 0.126065 / 0.434364 (-0.308298) | 0.386494 / 0.540337 (-0.153843) | 0.424203 / 1.386936 (-0.962733) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#039e275549627f22d9e04278d7cad2e80c644459 \"CML watermark\")\n" ]
2024-05-06T09:43:18
2024-05-06T10:03:19
2024-05-06T09:57:12
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6873/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6873", "html_url": "https://github.com/huggingface/datasets/pull/6873", "diff_url": "https://github.com/huggingface/datasets/pull/6873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6873.patch", "merged_at": "2024-05-06T09:57:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/6872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6872/comments
https://api.github.com/repos/huggingface/datasets/issues/6872/events
https://github.com/huggingface/datasets/pull/6872
2,280,438,432
PR_kwDODunzps5unSPA
6,872
Release 2.19.1
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-05-06T09:29:15
2024-05-06T09:35:33
2024-05-06T09:35:32
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6872/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6872", "html_url": "https://github.com/huggingface/datasets/pull/6872", "diff_url": "https://github.com/huggingface/datasets/pull/6872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6872.patch", "merged_at": "2024-05-06T09:35:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/6871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6871/comments
https://api.github.com/repos/huggingface/datasets/issues/6871/events
https://github.com/huggingface/datasets/pull/6871
2,280,102,869
PR_kwDODunzps5umJS6
6,871
Fix download for dict of dicts of URLs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6871). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Once merged, I think a patch release is needed.", "Once the CI is green, I am merging this PR and making a patch release, @huggingface/datasets. ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005352 / 0.011353 (-0.006001) | 0.004140 / 0.011008 (-0.006868) | 0.063844 / 0.038508 (0.025336) | 0.030712 / 0.023109 (0.007603) | 0.232790 / 0.275898 (-0.043108) | 0.262334 / 0.323480 (-0.061145) | 0.003264 / 0.007986 (-0.004721) | 0.002654 / 0.004328 (-0.001674) | 0.049775 / 0.004250 (0.045524) | 0.046803 / 0.037052 (0.009751) | 0.250667 / 0.258489 (-0.007822) | 0.283581 / 0.293841 (-0.010260) | 0.027660 / 0.128546 (-0.100886) | 0.010560 / 0.075646 (-0.065087) | 0.208676 / 0.419271 (-0.210596) | 0.035415 / 0.043533 (-0.008118) | 0.235380 / 0.255139 (-0.019759) | 0.261220 / 0.283200 (-0.021980) | 0.019551 / 0.141683 (-0.122132) | 1.140196 / 1.452155 (-0.311959) | 1.173021 / 1.492716 (-0.319696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092665 / 0.018006 (0.074659) | 0.301524 / 0.000490 (0.301034) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018485 / 0.037411 (-0.018927) | 0.061722 / 0.014526 (0.047196) | 0.074701 / 0.176557 (-0.101855) | 0.121443 / 0.737135 (-0.615692) | 0.076268 / 0.296338 (-0.220070) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled 
read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.789979 / 2.077655 (0.712324) | 1.501156 / 1.504120 (-0.002964) | 1.379414 / 1.541195 (-0.161781) | 1.419092 / 1.468490 (-0.049398) | 0.554107 / 4.584777 (-4.030670) | 2.365659 / 3.745712 (-1.380053) | 2.763963 / 5.269862 (-2.505898) | 1.712587 / 4.565676 (-2.853090) | 0.060961 / 0.424275 (-0.363314) | 0.005301 / 0.007607 (-0.002306) | 0.346253 / 0.226044 (0.120209) | 3.351833 / 2.268929 (1.082905) | 1.831946 / 55.444624 (-53.612679) | 1.556530 / 6.876477 (-5.319947) | 1.574185 / 2.142072 (-0.567887) | 0.630396 / 4.805227 (-4.174831) | 0.116126 / 6.500664 (-6.384538) | 0.042391 / 0.075469 (-0.033078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981430 / 1.841788 (-0.860358) | 11.619671 / 8.074308 (3.545363) | 9.718227 / 10.191392 (-0.473165) | 0.130918 / 0.680424 (-0.549506) | 0.014116 / 0.534201 (-0.520085) | 0.288729 / 0.579283 (-0.290554) | 0.259183 / 0.434364 (-0.175181) | 0.323764 / 0.540337 (-0.216574) | 0.420336 / 1.386936 (-0.966600) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005255 / 0.011353 (-0.006098) | 0.003664 / 0.011008 (-0.007344) | 0.051376 / 0.038508 (0.012868) | 0.030429 / 0.023109 (0.007320) | 0.263090 / 0.275898 (-0.012808) | 0.289959 / 0.323480 (-0.033521) | 0.004214 / 0.007986 (-0.003772) | 0.002782 / 0.004328 (-0.001546) | 0.049043 / 0.004250 (0.044793) | 0.041016 / 0.037052 (0.003964) | 0.275616 / 0.258489 (0.017127) | 0.303350 / 0.293841 (0.009509) | 0.029484 / 0.128546 (-0.099062) | 0.010329 / 0.075646 (-0.065317) | 0.058680 / 0.419271 (-0.360591) | 0.032818 / 0.043533 (-0.010715) | 0.263368 / 0.255139 (0.008229) | 0.286839 / 0.283200 (0.003640) | 0.018029 / 0.141683 (-0.123654) | 1.169207 / 1.452155 (-0.282948) | 1.206568 / 1.492716 (-0.286148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101394 / 0.018006 (0.083387) | 0.310414 / 0.000490 (0.309924) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021662 / 0.037411 (-0.015749) | 0.075320 / 0.014526 (0.060794) | 0.086607 / 0.176557 (-0.089949) | 0.127268 / 0.737135 (-0.609867) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293591 / 0.215209 (0.078382) | 2.871845 / 2.077655 (0.794190) | 1.543624 / 1.504120 (0.039504) | 1.426698 / 1.541195 (-0.114497) | 1.445348 / 1.468490 (-0.023142) | 0.565156 / 4.584777 (-4.019621) | 0.961782 / 3.745712 (-2.783930) | 2.827904 / 5.269862 (-2.441958) | 1.747728 / 4.565676 (-2.817949) | 0.063275 / 0.424275 (-0.361000) | 0.004987 / 0.007607 (-0.002620) | 0.349652 / 0.226044 (0.123607) | 3.448635 / 2.268929 (1.179707) | 1.891734 / 55.444624 (-53.552890) | 1.624274 / 6.876477 (-5.252202) | 1.641531 / 2.142072 (-0.500541) | 0.642081 / 4.805227 (-4.163146) | 0.116136 / 6.500664 (-6.384528) | 0.040807 / 0.075469 (-0.034662) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002090 / 1.841788 (-0.839697) | 12.401097 / 8.074308 (4.326788) | 9.799316 / 10.191392 (-0.392076) | 0.131770 / 0.680424 (-0.548654) | 0.016817 / 0.534201 (-0.517384) | 0.301136 / 0.579283 (-0.278147) | 0.136810 / 0.434364 (-0.297554) | 0.384740 / 0.540337 (-0.155598) | 0.423779 / 1.386936 (-0.963157) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ebd8233ad8142da73bc8b4d380e9a32046d7829 \"CML watermark\")\n" ]
2024-05-06T06:06:52
2024-05-06T09:32:03
2024-05-06T09:25:52
MEMBER
null
Fix download for a dict of dicts of URLs when batched (the default), a regression introduced by: - #6794 This PR also implements regression tests. Fix #6869, fix #6850.
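For context, a sketch of the behavior this PR restores, reusing the reproduction quoted in issue #6869 further below (the URL comes from that issue; nothing here is new API):

```python
from datasets import DownloadManager

dl_manager = DownloadManager()
# A dict of dicts of URLs: before this fix, batching leaked the inner dict
# into the URL string and raised FileNotFoundError (see issue #6869 below).
paths = dl_manager.download(
    {"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}
)
assert isinstance(paths["train"], dict)  # the nested structure is preserved
```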
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6871/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6871/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6871", "html_url": "https://github.com/huggingface/datasets/pull/6871", "diff_url": "https://github.com/huggingface/datasets/pull/6871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6871.patch", "merged_at": "2024-05-06T09:25:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/6870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6870/comments
https://api.github.com/repos/huggingface/datasets/issues/6870/events
https://github.com/huggingface/datasets/pull/6870
2,280,084,008
PR_kwDODunzps5umFOL
6,870
Update tqdm >= 4.66.3 to fix vulnerability
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6870). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004997 / 0.011353 (-0.006356) | 0.003260 / 0.011008 (-0.007748) | 0.063342 / 0.038508 (0.024833) | 0.030399 / 0.023109 (0.007290) | 0.235665 / 0.275898 (-0.040233) | 0.256502 / 0.323480 (-0.066978) | 0.004113 / 0.007986 (-0.003873) | 0.002677 / 0.004328 (-0.001652) | 0.049614 / 0.004250 (0.045363) | 0.043075 / 0.037052 (0.006022) | 0.251788 / 0.258489 (-0.006701) | 0.280875 / 0.293841 (-0.012965) | 0.027479 / 0.128546 (-0.101067) | 0.010402 / 0.075646 (-0.065245) | 0.207296 / 0.419271 (-0.211975) | 0.035323 / 0.043533 (-0.008209) | 0.237719 / 0.255139 (-0.017420) | 0.259401 / 0.283200 (-0.023799) | 0.017574 / 0.141683 (-0.124109) | 1.109025 / 1.452155 (-0.343129) | 1.176264 / 1.492716 (-0.316452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098780 / 0.018006 (0.080774) | 0.304427 / 0.000490 (0.303937) | 0.000215 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018189 / 0.037411 (-0.019222) | 0.061356 / 0.014526 (0.046830) | 0.073568 / 0.176557 (-0.102988) | 0.122412 / 0.737135 (-0.614723) | 0.074428 / 0.296338 (-0.221911) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284719 / 0.215209 (0.069510) | 2.805719 / 2.077655 (0.728064) | 1.474386 / 1.504120 (-0.029734) | 1.341552 / 1.541195 (-0.199642) | 1.385354 / 1.468490 (-0.083136) | 0.575694 / 4.584777 (-4.009083) | 2.435102 / 3.745712 (-1.310610) | 2.822424 / 5.269862 (-2.447437) | 1.747609 / 4.565676 (-2.818068) | 0.064461 / 0.424275 (-0.359815) | 0.005370 / 0.007607 (-0.002237) | 0.341511 / 0.226044 (0.115467) | 3.384546 / 2.268929 (1.115617) | 1.846960 / 55.444624 (-53.597665) | 1.549294 / 6.876477 (-5.327183) | 1.562997 / 2.142072 (-0.579075) | 0.651108 / 4.805227 (-4.154120) | 0.118502 / 6.500664 (-6.382162) | 0.042356 / 0.075469 (-0.033113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.015542 / 1.841788 (-0.826245) | 11.504899 / 8.074308 (3.430591) | 9.660870 / 10.191392 (-0.530522) | 0.145255 / 0.680424 (-0.535169) | 0.014602 / 0.534201 (-0.519599) | 0.286148 / 0.579283 (-0.293135) | 0.268358 / 0.434364 (-0.166006) | 0.323648 / 0.540337 (-0.216689) | 0.427384 / 1.386936 (-0.959552) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005671 / 0.011353 (-0.005681) | 0.004056 / 0.011008 (-0.006952) | 0.050673 / 0.038508 (0.012165) | 0.032334 / 0.023109 (0.009225) | 0.268541 / 0.275898 (-0.007357) | 0.294528 / 0.323480 (-0.028952) | 0.004592 / 0.007986 (-0.003393) | 0.002918 / 0.004328 (-0.001411) | 0.048857 / 0.004250 (0.044607) | 0.043072 / 0.037052 (0.006020) | 0.277031 / 0.258489 (0.018542) | 0.307189 / 0.293841 (0.013348) | 0.030500 / 0.128546 (-0.098046) | 0.010945 / 0.075646 (-0.064701) | 0.061067 / 0.419271 (-0.358204) | 0.060311 / 0.043533 (0.016778) | 0.268011 / 0.255139 (0.012872) | 0.290423 / 0.283200 (0.007224) | 0.019578 / 0.141683 (-0.122105) | 1.136353 / 1.452155 (-0.315802) | 1.196308 / 1.492716 (-0.296408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099429 / 0.018006 (0.081422) | 0.308350 / 0.000490 (0.307861) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022221 / 0.037411 (-0.015190) | 0.076744 / 0.014526 (0.062218) | 0.087768 / 0.176557 (-0.088788) | 0.129939 / 0.737135 (-0.607196) | 0.089763 / 0.296338 (-0.206576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299566 / 0.215209 (0.084357) | 2.916789 / 2.077655 (0.839134) | 1.555535 / 1.504120 (0.051415) | 1.432787 / 1.541195 (-0.108407) | 1.470983 / 1.468490 (0.002493) | 0.581468 / 4.584777 (-4.003309) | 0.993418 / 3.745712 (-2.752294) | 2.917487 / 5.269862 (-2.352374) | 1.799045 / 4.565676 (-2.766632) | 0.064520 / 0.424275 (-0.359755) | 0.005131 / 0.007607 (-0.002477) | 0.352277 / 0.226044 (0.126232) | 3.456564 / 2.268929 (1.187636) | 1.949195 / 55.444624 (-53.495430) | 1.627568 / 6.876477 (-5.248909) | 1.685246 / 2.142072 (-0.456826) | 0.653161 / 4.805227 (-4.152066) | 0.118308 / 6.500664 (-6.382356) | 0.042106 / 0.075469 (-0.033364) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048028 / 1.841788 (-0.793759) | 12.425232 / 8.074308 (4.350924) | 10.127637 / 10.191392 (-0.063755) | 0.133095 / 0.680424 (-0.547329) | 0.015255 / 0.534201 (-0.518946) | 0.287927 / 0.579283 (-0.291357) | 0.129384 / 0.434364 (-0.304980) | 0.384828 / 0.540337 (-0.155510) | 0.427881 / 1.386936 (-0.959055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a0bdb664436fad1d82c7988d5b413c76207f5037 \"CML watermark\")\n" ]
2024-05-06T05:49:36
2024-05-06T06:08:06
2024-05-06T06:02:00
MEMBER
null
Update tqdm >= 4.66.3 to fix a vulnerability.
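For reference, a hedged sketch of what such a dependency bound typically looks like; the list name is illustrative, only the version bound comes from this PR, and the CVE identifier is an assumption (the tqdm advisory fixed in 4.66.3 is commonly referenced as CVE-2024-34062):

```python
# Illustrative install_requires entry; only "tqdm>=4.66.3" is from the PR.
install_requires = [
    "tqdm>=4.66.3",  # assumed to patch the tqdm CLI argument-injection advisory (CVE-2024-34062)
]
```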
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6870/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6870", "html_url": "https://github.com/huggingface/datasets/pull/6870", "diff_url": "https://github.com/huggingface/datasets/pull/6870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6870.patch", "merged_at": "2024-05-06T06:02:00" }
true
https://api.github.com/repos/huggingface/datasets/issues/6869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6869/comments
https://api.github.com/repos/huggingface/datasets/issues/6869/events
https://github.com/huggingface/datasets/issues/6869
2,280,048,297
I_kwDODunzps6H5sap
6,869
Download is broken for dict of dicts: FileNotFoundError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-05-06T05:13:36
2024-05-06T09:25:53
2024-05-06T09:25:53
MEMBER
null
It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794

## Steps to reproduce the bug:
```python
from datasets import DownloadManager

dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
```
Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-7-0e0d76d25b09> in <module>
----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})

.../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls)
    255         start_time = datetime.now()
    256         with stack_multiprocessing_download_progress_bars():
--> 257             downloaded_path_or_paths = map_nested(
    258                 download_func,
    259                 url_or_urls,

.../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc)
    506             batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1)
    507             iterable = list(iter_batched(iterable, batch_size))
--> 508         mapped = [
    509             _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
    510             for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)

.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
    507             iterable = list(iter_batched(iterable, batch_size))
    508         mapped = [
--> 509             _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
    510             for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
    511         ]

.../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args)
    375         and all(not isinstance(v, types) for v in data_struct)
    376     ):
--> 377         return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
    378
    379     # Reduce logging to keep things readable in multiprocessing with tqdm

.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
    375         and all(not isinstance(v, types) for v in data_struct)
    376     ):
--> 377         return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
    378
    379     # Reduce logging to keep things readable in multiprocessing with tqdm

.../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config)
    311             )
    312         else:
--> 313             return [
    314                 self._download_single(url_or_filename, download_config=download_config)
    315                 for url_or_filename in url_or_filenames

.../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0)
    312         else:
    313             return [
--> 314                 self._download_single(url_or_filename, download_config=download_config)
    315                 for url_or_filename in url_or_filenames
    316             ]

.../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config)
    321             # append the relative path to the base_path
    322             url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 323         out = cached_path(url_or_filename, download_config=download_config)
    324         out = tracked_str(out)
    325         out.set_origin(url_or_filename)

.../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
    220     elif is_local_path(url_or_filename):
    221         # File, but it doesn't exist.
--> 222         raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
    223     else:
    224         # Something unknown

FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist
```

Related to:
- #6850
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6869/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6869/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6868/comments
https://api.github.com/repos/huggingface/datasets/issues/6868/events
https://github.com/huggingface/datasets/issues/6868
2,279,385,159
I_kwDODunzps6H3KhH
6,868
datasets.BuilderConfig does not work.
{ "login": "jdm4pku", "id": 148830652, "node_id": "U_kgDOCN75vA", "avatar_url": "https://avatars.githubusercontent.com/u/148830652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdm4pku", "html_url": "https://github.com/jdm4pku", "followers_url": "https://api.github.com/users/jdm4pku/followers", "following_url": "https://api.github.com/users/jdm4pku/following{/other_user}", "gists_url": "https://api.github.com/users/jdm4pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jdm4pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jdm4pku/subscriptions", "organizations_url": "https://api.github.com/users/jdm4pku/orgs", "repos_url": "https://api.github.com/users/jdm4pku/repos", "events_url": "https://api.github.com/users/jdm4pku/events{/privacy}", "received_events_url": "https://api.github.com/users/jdm4pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I guess the issue is caused by the customization of BuilderConfig that you use from the repo [https://github.com/BeyonderXX/InstructUIE](https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py). You should report to them.\r\n\r\nI see you already opened an issue in their repo:\r\n- https://github.com/BeyonderXX/InstructUIE/issues/40" ]
2024-05-05T08:08:55
2024-05-05T12:15:02
2024-05-05T12:15:01
NONE
null
### Describe the bug
I wrote a custom BuilderConfig and GeneratorBasedBuilder. Here is the code for the BuilderConfig:
```
class UIEConfig(datasets.BuilderConfig):
    def __init__(
        self,
        *args,
        data_dir=None,
        instruction_file=None,
        instruction_strategy=None,
        task_config_dir=None,
        num_examples=None,
        max_num_instances_per_task=None,
        max_num_instances_per_eval_task=None,
        over_sampling=None,
        **kwargs
    ):
        super().__init__(*args, **kwargs)
        self.data_dir = data_dir
        self.num_examples = num_examples
        self.over_sampling = over_sampling
        self.instructions = self._parse_instruction(instruction_file)
        self.task_configs = self._parse_task_config(task_config_dir)
        self.instruction_strategy = instruction_strategy
        self.max_num_instances_per_task = max_num_instances_per_task
        self.max_num_instances_per_eval_task = max_num_instances_per_eval_task
```
Besides, here is the code for the GeneratorBasedBuilder:
```
class UIEInstructions(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("2.0.0")
    BUILDER_CONFIG_CLASS = UIEConfig
    BUILDER_CONFIGS = [
        UIEConfig(name="default", description="Default config for NaturalInstructions")
    ]
    DEFAULT_CONFIG_NAME = "default"
```
Here is the `load_dataset` call:
```
raw_datasets = load_dataset(
    os.path.join(CURRENT_DIR, "uie_dataset.py"),
    data_dir=data_args.data_dir,
    task_config_dir=data_args.task_config_dir,
    instruction_file=data_args.instruction_file,
    instruction_strategy=data_args.instruction_strategy,
    cache_dir=data_cache_dir,
    # for debug, change dataset size, otherwise open it
    max_num_instances_per_task=data_args.max_num_instances_per_task,
    max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task,
    num_examples=data_args.num_examples,
    over_sampling=data_args.over_sampling
)
```
Finally, I got this error:
```
BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key.
```
I debugged the code and found that the parameters I added may not be taken into account.

### Steps to reproduce the bug
https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py

### Expected behavior
```
BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key.
```

### Environment info
torch 2.3.0+cu118
transformers 4.40.1
python 3.8
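For reference, a minimal sketch (names are illustrative, not from the issue) of how extra `load_dataset` keyword arguments typically reach a custom `BuilderConfig`: each forwarded keyword must exist as an attribute on the config instance, otherwise `datasets` raises exactly the "doesn't have a '<key>' key" error quoted above. Note that the quoted `UIEConfig` consumes `task_config_dir` without ever storing `self.task_config_dir`, which may explain the error:

```python
import datasets

class ToyConfig(datasets.BuilderConfig):
    def __init__(self, task_config_dir=None, **kwargs):
        super().__init__(**kwargs)
        # store the kwarg under the same attribute name it is passed as,
        # so that load_dataset(..., task_config_dir=...) can update it
        self.task_config_dir = task_config_dir

class ToyBuilder(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = ToyConfig

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        # trivial example that reads the forwarded config attribute
        yield 0, {"text": self.config.task_config_dir or ""}
```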
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6868/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6867/comments
https://api.github.com/repos/huggingface/datasets/issues/6867/events
https://github.com/huggingface/datasets/issues/6867
2,279,059,787
I_kwDODunzps6H17FL
6,867
Improve performance of JSON loader
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.", "Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/eval-set-scores/Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback.json) is not in \"records\" orient; instead it has the following structure:\r\n```json\r\n{\r\n \"chat_template\": \"tulu\",\r\n \"id\": [30, 34, 35,...],\r\n \"model\": \"Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback\",\r\n \"model_type\": \"Seq. Classifier\",\r\n \"results\": [1, 1, 1, ...],\r\n \"scores_chosen\": [4.421875, 1.8916015625, 3.8515625,...],\r\n \"scores_rejected\": [-2.416015625, -1.47265625, -0.9912109375,...],\r\n \"subset\": [\"alpacaeval-easy\", \"alpacaeval-easy\", \"alpacaeval-easy\",...]\r\n \"text_chosen\": [\"<s>[INST] How do I detail a...\",...],\r\n \"text_rejected\": [\"<s>[INST] How do I detail a...\",...]\r\n}\r\n```\r\n\r\nNote that \"records\" orient should be a list (not a dict) with each row as one item of the list:\r\n```json\r\n[\r\n {\"chat_template\": \"tulu\", \"id\": 30,... },\r\n {\"chat_template\": \"tulu\", \"id\": 34,... },\r\n ...\r\n]\r\n```", "We use a mix (which is a mess), here's an example with the records orient\r\nhttps://huggingface.co/datasets/allenai/reward-bench-results/blob/main/best-of-n/alpaca_eval/tulu-13b/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5.json\r\n\r\nThere are more in that folder, ~40mb maybe?", "@albertvillanova here's a snippet so you don't need to click\r\n```\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 0\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.076171875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 1\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.87890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 2\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.287109375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 3\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 1.6337890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 4\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 5.27734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 5\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.0625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 6\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.29296875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 7\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 6.77734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 8\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.853515625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 9\r\n 
],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.86328125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 10\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 11\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.70703125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 12\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.45703125\r\n}\r\n```", "Thanks again for your feedback, @natolambert.\r\n\r\nHowever, strictly speaking, the last file is not in JSON format but in kind of JSON-Lines like format (although not properly either because there are multiple newline characters within each object). Not even pandas can read that file format.\r\n\r\nAnyway, for JSON-Lines, I would expect that `datasets` and `pandas` have the same performance for JSON Lines files, as both use `pyarrow` under the hood...\r\n\r\nA proper JSON file in records orient should be a list (a JSON array): the first character should be `[`.\r\n\r\nAnyway, I am generating a JSON file from your JSON-Lines file to test performance." ]
2024-05-04T15:04:16
2024-05-17T16:22:28
2024-05-17T16:22:28
MEMBER
null
As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance. The cause is that we use the `json` Python standard library instead of other, faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714 > There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performing: > - https://github.com/ultrajson/ultrajson#benchmarks > - https://github.com/ijl/orjson#performance I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library. However: - We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson` - Even if the above were not the case, we could always include `ujson` as an optional extra dependency and check at runtime whether it is installed to decide which library to use, either `json` or `ujson`
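Editor's note: for context, a rough micro-benchmark of the standard library against `orjson` (one of the third-party parsers linked above). The payload shape and timings are illustrative only, and `orjson` must be installed separately.

```python
import json
import time

import orjson  # third-party; `pip install orjson`

# One million small records, similar in spirit to a "records"-orient JSON file.
payload = json.dumps([{"id": i, "score": i * 0.5} for i in range(1_000_000)])

t0 = time.perf_counter()
json.loads(payload)
t1 = time.perf_counter()
orjson.loads(payload)
t2 = time.perf_counter()

# Exact numbers depend on the machine; orjson is typically several times faster.
print(f"stdlib json: {t1 - t0:.2f}s  orjson: {t2 - t1:.2f}s")
```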
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6867/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6866/comments
https://api.github.com/repos/huggingface/datasets/issues/6866/events
https://github.com/huggingface/datasets/issues/6866
2,278,736,221
I_kwDODunzps6H0sFd
6,866
DataFilesNotFoundError for datasets in the open-llm-leaderboard
{ "login": "jerome-white", "id": 6140840, "node_id": "MDQ6VXNlcjYxNDA4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerome-white", "html_url": "https://github.com/jerome-white", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "repos_url": "https://api.github.com/users/jerome-white/repos", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Potentially related:\r\n* #6864\r\n* #6850\r\n* #6848\r\n* #6819", "Hi @jerome-white, thnaks for reporting.\r\n\r\nHowever, I cannot reproduce your issue:\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n\r\n>>> get_dataset_config_names(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5\")\r\n['harness_arc_challenge_25',\r\n 'harness_gsm8k_5',\r\n 'harness_hellaswag_10',\r\n 'harness_hendrycksTest_5',\r\n 'harness_hendrycksTest_abstract_algebra_5',\r\n 'harness_hendrycksTest_anatomy_5',\r\n 'harness_hendrycksTest_astronomy_5',\r\n 'harness_hendrycksTest_business_ethics_5',\r\n 'harness_hendrycksTest_clinical_knowledge_5',\r\n 'harness_hendrycksTest_college_biology_5',\r\n 'harness_hendrycksTest_college_chemistry_5',\r\n 'harness_hendrycksTest_college_computer_science_5',\r\n 'harness_hendrycksTest_college_mathematics_5',\r\n 'harness_hendrycksTest_college_medicine_5',\r\n 'harness_hendrycksTest_college_physics_5',\r\n 'harness_hendrycksTest_computer_security_5',\r\n 'harness_hendrycksTest_conceptual_physics_5',\r\n 'harness_hendrycksTest_econometrics_5',\r\n 'harness_hendrycksTest_electrical_engineering_5',\r\n 'harness_hendrycksTest_elementary_mathematics_5',\r\n 'harness_hendrycksTest_formal_logic_5',\r\n 'harness_hendrycksTest_global_facts_5',\r\n 'harness_hendrycksTest_high_school_biology_5',\r\n 'harness_hendrycksTest_high_school_chemistry_5',\r\n 'harness_hendrycksTest_high_school_computer_science_5',\r\n 'harness_hendrycksTest_high_school_european_history_5',\r\n 'harness_hendrycksTest_high_school_geography_5',\r\n 'harness_hendrycksTest_high_school_government_and_politics_5',\r\n 'harness_hendrycksTest_high_school_macroeconomics_5',\r\n 'harness_hendrycksTest_high_school_mathematics_5',\r\n 'harness_hendrycksTest_high_school_microeconomics_5',\r\n 'harness_hendrycksTest_high_school_physics_5',\r\n 'harness_hendrycksTest_high_school_psychology_5',\r\n 'harness_hendrycksTest_high_school_statistics_5',\r\n 'harness_hendrycksTest_high_school_us_history_5',\r\n 'harness_hendrycksTest_high_school_world_history_5',\r\n 'harness_hendrycksTest_human_aging_5',\r\n 'harness_hendrycksTest_human_sexuality_5',\r\n 'harness_hendrycksTest_international_law_5',\r\n 'harness_hendrycksTest_jurisprudence_5',\r\n 'harness_hendrycksTest_logical_fallacies_5',\r\n 'harness_hendrycksTest_machine_learning_5',\r\n 'harness_hendrycksTest_management_5',\r\n 'harness_hendrycksTest_marketing_5',\r\n 'harness_hendrycksTest_medical_genetics_5',\r\n 'harness_hendrycksTest_miscellaneous_5',\r\n 'harness_hendrycksTest_moral_disputes_5',\r\n 'harness_hendrycksTest_moral_scenarios_5',\r\n 'harness_hendrycksTest_nutrition_5',\r\n 'harness_hendrycksTest_philosophy_5',\r\n 'harness_hendrycksTest_prehistory_5',\r\n 'harness_hendrycksTest_professional_accounting_5',\r\n 'harness_hendrycksTest_professional_law_5',\r\n 'harness_hendrycksTest_professional_medicine_5',\r\n 'harness_hendrycksTest_professional_psychology_5',\r\n 'harness_hendrycksTest_public_relations_5',\r\n 'harness_hendrycksTest_security_studies_5',\r\n 'harness_hendrycksTest_sociology_5',\r\n 'harness_hendrycksTest_us_foreign_policy_5',\r\n 'harness_hendrycksTest_virology_5',\r\n 'harness_hendrycksTest_world_religions_5',\r\n 'harness_truthfulqa_mc_0',\r\n 'harness_winogrande_5',\r\n 'results']\r\n```\r\n\r\nMaybe it was just a temporary issue...", "> Maybe it was just a temporary issue...\r\n\r\nPerhaps. I've changed my workflow to use the hub's `HfFileSystem`, so for now this is no longer a blocker for me. 
I'll reopen the issue if that changes." ]
2024-05-04T04:59:00
2024-05-14T08:09:56
2024-05-14T08:09:56
NONE
null
### Describe the bug When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started seeing this. ### Steps to reproduce the bug This snippet has three cells: 1. Loads the modules 2. Tries to get config names 3. Tries to load the dataset I've chosen "davidkim205"'s Rhea-72b-v0.5 model because it is one of the best performers on the leaderboard should likely have no dataset issues: ```python In [1]: from datasets import load_dataset, get_dataset_config_names In [2]: get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea ...: -72b-v0.5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/inspect.py:347, in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 291 def get_dataset_config_names( 292 path: str, 293 revision: Optional[Union[str, Version]] = None, (...) 298 **download_kwargs, 299 ): 300 """Get the list of available config names for a particular dataset. 301 302 Args: (...) 345 ``` 346 """ --> 347 dataset_module = dataset_module_factory( 348 path, 349 revision=revision, 350 download_config=download_config, 351 download_mode=download_mode, 352 dynamic_modules_path=dynamic_modules_path, 353 data_files=data_files, 354 **download_kwargs, 355 ) 356 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path)) 357 return list(builder_cls.builder_configs.keys()) or [ 358 dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default") 359 ] File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't 
infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 In [3]: data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b- ...: v0.5", "harness_winogrande_5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[3], line 1 ----> 1 data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5", "harness_winogrande_5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2582 verification_mode = VerificationMode( 2583 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2584 ) 2586 # Create a dataset builder -> 2587 builder_instance = load_dataset_builder( 2588 path=path, 2589 name=name, 2590 data_dir=data_dir, 2591 data_files=data_files, 2592 cache_dir=cache_dir, 2593 features=features, 2594 download_config=download_config, 2595 download_mode=download_mode, 2596 revision=revision, 2597 token=token, 2598 storage_options=storage_options, 2599 trust_remote_code=trust_remote_code, 2600 _require_default_config_name=name is None, 2601 **config_kwargs, 2602 ) 2604 # Return iterable dataset in case of streaming 2605 if streaming: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2257 download_config = download_config.copy() if download_config else DownloadConfig() 2258 download_config.storage_options.update(storage_options) -> 2259 dataset_module = dataset_module_factory( 2260 path, 2261 revision=revision, 2262 download_config=download_config, 2263 download_mode=download_mode, 2264 data_dir=data_dir, 2265 data_files=data_files, 2266 cache_dir=cache_dir, 2267 trust_remote_code=trust_remote_code, 2268 _require_default_config_name=_require_default_config_name, 2269 _require_custom_configs=bool(config_kwargs), 2270 ) 2271 # Get dataset builder class from the processing script 2272 builder_kwargs = dataset_module.builder_kwargs File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and 
path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 ``` ### Expected behavior No exceptions from `get_dataset_config_names` or `load_dataset` ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6866/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6865/comments
https://api.github.com/repos/huggingface/datasets/issues/6865/events
https://github.com/huggingface/datasets/issues/6865
2,277,304,832
I_kwDODunzps6HvOoA
6,865
Example on Semantic segmentation contains bug
{ "login": "ducha-aiki", "id": 4803565, "node_id": "MDQ6VXNlcjQ4MDM1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ducha-aiki", "html_url": "https://github.com/ducha-aiki", "followers_url": "https://api.github.com/users/ducha-aiki/followers", "following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}", "gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions", "organizations_url": "https://api.github.com/users/ducha-aiki/orgs", "repos_url": "https://api.github.com/users/ducha-aiki/repos", "events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}", "received_events_url": "https://api.github.com/users/ducha-aiki/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-03T09:40:12
2024-05-03T09:40:12
null
NONE
null
### Describe the bug https://huggingface.co/docs/datasets/en/semantic_segmentation shows an incorrect example with torchvision transforms. Specifically, as one can see in the screenshot below, the object boundaries have weird colors. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee"> The original example with `albumentations` is correct <img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3"> That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong when applied to segmentation labels - you just cannot mix them. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations. The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to a https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object ### Steps to reproduce the bug Go to the website. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef"> https://huggingface.co/docs/datasets/en/semantic_segmentation ### Expected behavior Results similar to `albumentations`. Or remove the torchvision part altogether. Or use `kornia` instead. ### Environment info Irrelevant
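Editor's note: a minimal sketch of the approach the report recommends (torchvision >= 0.16 for `tv_tensors`). Wrapping the label map in `tv_tensors.Mask` makes the `v2` geometric transforms dispatch to nearest-neighbour interpolation for the mask while the image keeps bilinear. Tensor shapes below are made up.

```python
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

transforms = v2.Compose([
    v2.Resize((256, 256)),  # bilinear for images, nearest for Mask inputs
    v2.RandomHorizontalFlip(p=0.5),
])

image = tv_tensors.Image(torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8))
mask = tv_tensors.Mask(torch.randint(0, 150, (512, 512), dtype=torch.uint8))

# v2 transforms accept (image, mask) pairs and transform them consistently.
image_t, mask_t = transforms(image, mask)

# Class ids survive resizing: no new "mixed" values appear on boundaries.
assert set(mask_t.unique().tolist()) <= set(mask.unique().tolist())
```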
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6865/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6865/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6864/comments
https://api.github.com/repos/huggingface/datasets/issues/6864/events
https://github.com/huggingface/datasets/issues/6864
2,276,986,981
I_kwDODunzps6HuBBl
6,864
Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub
{ "login": "vinodrajendran001", "id": 5783246, "node_id": "MDQ6VXNlcjU3ODMyNDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5783246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinodrajendran001", "html_url": "https://github.com/vinodrajendran001", "followers_url": "https://api.github.com/users/vinodrajendran001/followers", "following_url": "https://api.github.com/users/vinodrajendran001/following{/other_user}", "gists_url": "https://api.github.com/users/vinodrajendran001/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinodrajendran001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinodrajendran001/subscriptions", "organizations_url": "https://api.github.com/users/vinodrajendran001/orgs", "repos_url": "https://api.github.com/users/vinodrajendran001/repos", "events_url": "https://api.github.com/users/vinodrajendran001/events{/privacy}", "received_events_url": "https://api.github.com/users/vinodrajendran001/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @vinodrajendran001, thanks for reporting.\r\n\r\nIndeed the dataset no longer exists on the Hub. The URL https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts gives 404 Not Found error." ]
2024-05-03T06:03:30
2024-05-06T06:36:42
2024-05-06T06:36:41
NONE
null
### Describe the bug The dataset `rewardsignal/reddit_writing_prompts` is missing from the Hugging Face Hub. ### Steps to reproduce the bug ``` from datasets import load_dataset prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]') ``` ### Expected behavior DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed ### Environment info Nothing to do with versions
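Editor's note: the 404 the maintainer mentions is easy to confirm programmatically; a small sketch using `huggingface_hub`:

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

try:
    HfApi().dataset_info("rewardsignal/reddit_writing_prompts")
except RepositoryNotFoundError:
    # Raised for repos that were deleted, or that are gated/private.
    print("Dataset repo no longer exists (or is not accessible) on the Hub.")
```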
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6864/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6863/comments
https://api.github.com/repos/huggingface/datasets/issues/6863/events
https://github.com/huggingface/datasets/issues/6863
2,276,977,534
I_kwDODunzps6Ht-t-
6,863
Revert temporary pin huggingface-hub < 0.23.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-05-03T05:53:55
2024-05-03T05:53:55
null
MEMBER
null
Revert the temporary pin `huggingface-hub < 0.23.0` introduced by: - #6861 once the following issue is fixed and released: - huggingface/transformers#30618
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6863/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6862/comments
https://api.github.com/repos/huggingface/datasets/issues/6862/events
https://github.com/huggingface/datasets/pull/6862
2,276,763,745
PR_kwDODunzps5ubOoL
6,862
Issue 6598: load_dataset broken for data_files on s3
{ "login": "matstrand", "id": 544843, "node_id": "MDQ6VXNlcjU0NDg0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/544843?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matstrand", "html_url": "https://github.com/matstrand", "followers_url": "https://api.github.com/users/matstrand/followers", "following_url": "https://api.github.com/users/matstrand/following{/other_user}", "gists_url": "https://api.github.com/users/matstrand/gists{/gist_id}", "starred_url": "https://api.github.com/users/matstrand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matstrand/subscriptions", "organizations_url": "https://api.github.com/users/matstrand/orgs", "repos_url": "https://api.github.com/users/matstrand/repos", "events_url": "https://api.github.com/users/matstrand/events{/privacy}", "received_events_url": "https://api.github.com/users/matstrand/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-03T01:43:47
2024-05-03T09:04:55
null
NONE
null
Fixes huggingface/datasets/issues/6598 I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue. I encountered this issue while following the Hugging Face documentation, trying to perform GPT-2 fine-tuning using `run_clm.py` on SageMaker with a data file stored on S3. MRE: ``` pip install "datasets[s3]" python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': 's3://noaa-gsod-pds/2024/A5125600451.csv'})" ```
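Editor's note: with the fix from this PR applied, a typical invocation can also pass explicit `storage_options`, which `load_dataset` forwards to the underlying `fsspec`/`s3fs` filesystem. This is a sketch; `anon=True` assumes the bucket is public, as the NOAA bucket in the MRE is.

```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={"train": "s3://noaa-gsod-pds/2024/A5125600451.csv"},
    storage_options={"anon": True},  # public bucket, no credentials needed
)
```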
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6862/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6862", "html_url": "https://github.com/huggingface/datasets/pull/6862", "diff_url": "https://github.com/huggingface/datasets/pull/6862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6862.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6861/comments
https://api.github.com/repos/huggingface/datasets/issues/6861/events
https://github.com/huggingface/datasets/pull/6861
2,275,988,990
PR_kwDODunzps5uYkMy
6,861
Fix CI by temporarily pinning huggingface-hub < 0.23.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6861). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005029 / 0.011353 (-0.006324) | 0.003217 / 0.011008 (-0.007791) | 0.062747 / 0.038508 (0.024239) | 0.030086 / 0.023109 (0.006976) | 0.251548 / 0.275898 (-0.024350) | 0.273215 / 0.323480 (-0.050265) | 0.003197 / 0.007986 (-0.004789) | 0.002706 / 0.004328 (-0.001623) | 0.049013 / 0.004250 (0.044763) | 0.044160 / 0.037052 (0.007107) | 0.266556 / 0.258489 (0.008067) | 0.291854 / 0.293841 (-0.001987) | 0.027463 / 0.128546 (-0.101083) | 0.010331 / 0.075646 (-0.065315) | 0.207195 / 0.419271 (-0.212077) | 0.035416 / 0.043533 (-0.008116) | 0.253180 / 0.255139 (-0.001959) | 0.274663 / 0.283200 (-0.008536) | 0.019132 / 0.141683 (-0.122551) | 1.174875 / 1.452155 (-0.277279) | 1.166828 / 1.492716 (-0.325888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092240 / 0.018006 (0.074234) | 0.299385 / 0.000490 (0.298895) | 0.000222 / 0.000200 (0.000022) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017994 / 0.037411 (-0.019417) | 0.066868 / 0.014526 (0.052342) | 0.074616 / 0.176557 (-0.101941) | 0.120632 / 0.737135 (-0.616503) | 0.074595 / 0.296338 (-0.221743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279008 / 0.215209 (0.063798) | 2.777927 / 2.077655 (0.700273) | 1.529495 / 1.504120 (0.025376) | 1.391528 / 1.541195 (-0.149666) | 1.420149 / 1.468490 (-0.048341) | 0.567526 / 4.584777 (-4.017251) | 2.400467 / 3.745712 (-1.345245) | 2.735778 / 5.269862 (-2.534083) | 1.718224 / 4.565676 (-2.847452) | 0.063009 / 0.424275 (-0.361266) | 0.005339 / 0.007607 (-0.002268) | 0.340130 / 0.226044 (0.114086) | 3.352796 / 2.268929 (1.083868) | 1.887427 / 55.444624 (-53.557198) | 1.598804 / 6.876477 (-5.277672) | 1.601566 / 2.142072 (-0.540506) | 0.640684 / 4.805227 (-4.164543) | 0.116694 / 6.500664 (-6.383970) | 0.041206 / 0.075469 (-0.034263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969163 / 1.841788 (-0.872625) | 11.475685 / 8.074308 (3.401377) | 9.397987 / 10.191392 (-0.793405) | 0.140131 / 0.680424 (-0.540293) | 0.014544 / 0.534201 (-0.519657) | 0.288122 / 0.579283 (-0.291161) | 0.262631 / 0.434364 (-0.171733) | 0.323565 / 0.540337 (-0.216773) | 0.421775 / 1.386936 (-0.965161) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005059 / 0.011353 (-0.006294) | 0.003185 / 0.011008 (-0.007824) | 0.050132 / 0.038508 (0.011624) | 0.030872 / 0.023109 (0.007763) | 0.257822 / 0.275898 (-0.018076) | 0.281645 / 0.323480 (-0.041835) | 0.004129 / 0.007986 (-0.003857) | 0.002703 / 0.004328 (-0.001625) | 0.049695 / 0.004250 (0.045445) | 0.040452 / 0.037052 (0.003400) | 0.278701 / 0.258489 (0.020212) | 0.297726 / 0.293841 (0.003885) | 0.028829 / 0.128546 (-0.099717) | 0.010011 / 0.075646 (-0.065636) | 0.058569 / 0.419271 (-0.360703) | 0.032564 / 0.043533 (-0.010969) | 0.259944 / 0.255139 (0.004805) | 0.279954 / 0.283200 (-0.003245) | 0.017804 / 0.141683 (-0.123879) | 1.147748 / 1.452155 (-0.304406) | 1.188390 / 1.492716 (-0.304327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091252 / 0.018006 (0.073246) | 0.308462 / 0.000490 (0.307972) | 0.000217 / 0.000200 (0.000017) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022216 / 0.037411 (-0.015195) | 0.075547 / 0.014526 (0.061021) | 0.086085 / 0.176557 (-0.090471) | 0.128326 / 0.737135 (-0.608809) | 0.087253 / 0.296338 (-0.209085) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301886 / 0.215209 (0.086677) | 2.940181 / 2.077655 (0.862527) | 1.663247 / 1.504120 (0.159127) | 1.545711 / 1.541195 (0.004517) | 1.542904 / 1.468490 (0.074414) | 0.556951 / 4.584777 (-4.027826) | 0.941925 / 3.745712 (-2.803788) | 2.740733 / 5.269862 (-2.529128) | 1.722801 / 4.565676 (-2.842875) | 0.060156 / 0.424275 (-0.364120) | 0.005008 / 0.007607 (-0.002599) | 0.348988 / 0.226044 (0.122944) | 3.454972 / 2.268929 (1.186044) | 2.015828 / 55.444624 (-53.428796) | 1.737828 / 6.876477 (-5.138649) | 1.747451 / 2.142072 (-0.394622) | 0.626865 / 4.805227 (-4.178362) | 0.114565 / 6.500664 (-6.386099) | 0.040562 / 0.075469 (-0.034907) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997070 / 1.841788 (-0.844718) | 11.748577 / 8.074308 (3.674269) | 9.591721 / 10.191392 (-0.599671) | 0.131613 / 0.680424 (-0.548811) | 0.016560 / 0.534201 (-0.517641) | 0.288938 / 0.579283 (-0.290345) | 0.122196 / 0.434364 (-0.312168) | 0.380217 / 0.540337 (-0.160121) | 0.429886 / 1.386936 (-0.957050) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ae4314c34dae6a5339c11f7d1a2cbdfb76144d7 \"CML watermark\")\n" ]
2024-05-02T16:40:04
2024-05-02T16:59:42
2024-05-02T16:53:42
MEMBER
null
As a hotfix for CI, temporarily pin the `huggingface-hub` upper version. Fix #6860. Revert once the root cause is fixed; see: - https://github.com/huggingface/transformers/issues/30618
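Editor's note: the pin itself is a one-line dependency change; an illustrative `setup.py` excerpt (the lower bound shown is hypothetical, only the `<0.23.0` upper pin is what this PR adds):

```python
# setup.py (excerpt; lower bound is hypothetical)
install_requires = [
    # ...
    "huggingface-hub>=0.21.2,<0.23.0",  # temporary upper pin, see issue #6860
]
```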
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6861/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6861", "html_url": "https://github.com/huggingface/datasets/pull/6861", "diff_url": "https://github.com/huggingface/datasets/pull/6861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6861.patch", "merged_at": "2024-05-02T16:53:42" }
true
https://api.github.com/repos/huggingface/datasets/issues/6860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6860/comments
https://api.github.com/repos/huggingface/datasets/issues/6860/events
https://github.com/huggingface/datasets/issues/6860
2,275,537,137
I_kwDODunzps6HofDx
6,860
CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download"
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I think this needs to be fixed on transformers.\r\n\r\nCC: @Wauplin ", "See:\r\n- https://github.com/huggingface/transformers/issues/30618", "Opened https://github.com/huggingface/transformers/pull/30620" ]
2024-05-02T13:24:17
2024-05-02T16:53:45
2024-05-02T16:53:45
MEMBER
null
CI fails after the latest huggingface_hub 0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
```
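For context, the warning fires whenever `resume_download` is passed explicitly to huggingface_hub's download helpers; the actual fix landed on the transformers side (see the linked PR). A minimal sketch of the kind of change involved (model and file names illustrative):

```python
from huggingface_hub import hf_hub_download

# Before: the deprecated argument triggers the FutureWarning on hub >= 0.23.0
# path = hf_hub_download("bert-base-uncased", "config.json", resume_download=True)

# After: downloads always resume when possible, so the argument is simply dropped;
# force_download=True remains available when a fresh download is really wanted.
path = hf_hub_download("bert-base-uncased", "config.json")
```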
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6860/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6859/comments
https://api.github.com/repos/huggingface/datasets/issues/6859/events
https://github.com/huggingface/datasets/pull/6859
2,274,996,774
PR_kwDODunzps5uVIoZ
6,859
Support folder-based datasets with large metadata.jsonl
{ "login": "gbenson", "id": 580564, "node_id": "MDQ6VXNlcjU4MDU2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/580564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gbenson", "html_url": "https://github.com/gbenson", "followers_url": "https://api.github.com/users/gbenson/followers", "following_url": "https://api.github.com/users/gbenson/following{/other_user}", "gists_url": "https://api.github.com/users/gbenson/gists{/gist_id}", "starred_url": "https://api.github.com/users/gbenson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gbenson/subscriptions", "organizations_url": "https://api.github.com/users/gbenson/orgs", "repos_url": "https://api.github.com/users/gbenson/repos", "events_url": "https://api.github.com/users/gbenson/events{/privacy}", "received_events_url": "https://api.github.com/users/gbenson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-05-02T09:07:26
2024-05-02T09:07:26
null
NONE
null
I tried creating an `imagefolder` dataset with a 714 MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests.
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="data-for-upload")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/to/datasets/load.py", line 2609, in load_dataset
    builder_instance.download_and_prepare(
  ...
  File "/path/to/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 245, in _read_metadata
    return paj.read_json(f)
  File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
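The pyarrow JSON reader parses input in fixed-size blocks, and any single JSON object longer than one block raises this error. A minimal sketch of the kind of fix, mirroring the `_read_metadata` name from the traceback; the concrete block size is an assumption, not necessarily the value the PR picks:

```python
import pyarrow.json as paj

def _read_metadata(f):
    # A larger block size lets one very long metadata.jsonl line fit inside
    # a single parse block instead of straddling two of them.
    read_options = paj.ReadOptions(block_size=100 << 20)  # 100 MiB; illustrative value
    return paj.read_json(f, read_options=read_options)
```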
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6859/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6859", "html_url": "https://github.com/huggingface/datasets/pull/6859", "diff_url": "https://github.com/huggingface/datasets/pull/6859.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6859.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6858/comments
https://api.github.com/repos/huggingface/datasets/issues/6858/events
https://github.com/huggingface/datasets/issues/6858
2,274,917,185
I_kwDODunzps6HmHtB
6,858
Segmentation fault
{ "login": "scampion", "id": 554155, "node_id": "MDQ6VXNlcjU1NDE1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/554155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scampion", "html_url": "https://github.com/scampion", "followers_url": "https://api.github.com/users/scampion/followers", "following_url": "https://api.github.com/users/scampion/following{/other_user}", "gists_url": "https://api.github.com/users/scampion/gists{/gist_id}", "starred_url": "https://api.github.com/users/scampion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scampion/subscriptions", "organizations_url": "https://api.github.com/users/scampion/orgs", "repos_url": "https://api.github.com/users/scampion/repos", "events_url": "https://api.github.com/users/scampion/events{/privacy}", "received_events_url": "https://api.github.com/users/scampion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I downloaded the jsonl file and extract it manually. \r\nThe issue seems to be related to pyarrow.json \r\n\r\n\r\n\r\npython3 -q -X faulthandler -c \"from datasets import load_dataset; load_dataset('json', data_files='/Users/scampion/Downloads/1998-09.jsonl')\"\r\nGenerating train split: 0 examples [00:00, ? examples/s]Fatal Python error: Segmentation fault\r\n\r\nThread 0x00007000000c1000 (most recent call first):\r\n <no Python frame>\r\n\r\nThread 0x00007000024df000 (most recent call first):\r\n File \"/usr/local/Cellar/python@3.11/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 331 in wait\r\n File \"/usr/local/Cellar/python@3.11/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 629 in wait\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/tqdm/_monitor.py\", line 60 in run\r\n File \"/usr/local/Cellar/python@3.11/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1045 in _bootstrap_inner\r\n File \"/usr/local/Cellar/python@3.11/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1002 in _bootstrap\r\n\r\nThread 0x00007ff845c66640 (most recent call first):\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py\", line 122 in _generate_tables\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1995 in _prepare_split_single\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1882 in _prepare_split\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1122 in _download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1027 in download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/load.py\", line 2609 in load_dataset\r\n File \"<string>\", line 1 in <module>\r\n\r\nExtension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pyarrow._hdfsio, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, charset_normalizer.md, yaml._yaml, 
pyarrow._parquet, pyarrow._fs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash._xxhash, pyarrow._json (total: 72)\r\n[1] 56678 segmentation fault python3 -q -X faulthandler -c\r\n/usr/local/Cellar/python@3.11/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\n(venv_test)", "The error comes from data where one line contains \"null\"" ]
2024-05-02T08:28:49
2024-05-03T08:43:21
2024-05-03T08:42:36
NONE
null
### Describe the bug
Using various versions of datasets, I'm no longer able to load that dataset without a segmentation fault. Several other files are affected as well.
### Steps to reproduce the bug
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate
# Install the latest version
pip install datasets
# Load that dataset
python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')"
### Expected behavior
The data should load without crashing.
### Environment info
datasets==2.19.0
Python 3.11.7
Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64
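Per the follow-up comments, the crash traces to pyarrow's JSON reader hitting a line that is the bare literal `null`. A minimal workaround sketch, assuming a local copy of the offending shard (file names illustrative):

```python
from datasets import load_dataset

src = "1998-09.jsonl"           # local copy of the offending shard; illustrative
dst = "1998-09.cleaned.jsonl"

# Drop lines that are the bare JSON literal `null`, which were reported
# to trigger the segmentation fault in pyarrow's JSON reader.
with open(src, encoding="utf-8") as fin, open(dst, "w", encoding="utf-8") as fout:
    fout.writelines(line for line in fin if line.strip() != "null")

ds = load_dataset("json", data_files=dst)
```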
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6858/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6857/comments
https://api.github.com/repos/huggingface/datasets/issues/6857/events
https://github.com/huggingface/datasets/pull/6857
2,274,849,730
PR_kwDODunzps5uUooF
6,857
Fix line-endings in tests on Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6857). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005050 / 0.011353 (-0.006303) | 0.003400 / 0.011008 (-0.007609) | 0.063488 / 0.038508 (0.024980) | 0.029112 / 0.023109 (0.006002) | 0.245872 / 0.275898 (-0.030026) | 0.270682 / 0.323480 (-0.052798) | 0.003145 / 0.007986 (-0.004841) | 0.002671 / 0.004328 (-0.001658) | 0.048862 / 0.004250 (0.044612) | 0.044330 / 0.037052 (0.007278) | 0.269066 / 0.258489 (0.010577) | 0.294806 / 0.293841 (0.000965) | 0.027717 / 0.128546 (-0.100829) | 0.010189 / 0.075646 (-0.065458) | 0.206853 / 0.419271 (-0.212419) | 0.035655 / 0.043533 (-0.007877) | 0.254554 / 0.255139 (-0.000585) | 0.275104 / 0.283200 (-0.008095) | 0.018786 / 0.141683 (-0.122897) | 1.147165 / 1.452155 (-0.304989) | 1.202755 / 1.492716 (-0.289961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094693 / 0.018006 (0.076687) | 0.303049 / 0.000490 (0.302559) | 0.000217 / 0.000200 (0.000017) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018375 / 0.037411 (-0.019036) | 0.061080 / 0.014526 (0.046554) | 0.082140 / 0.176557 (-0.094416) | 0.119962 / 0.737135 (-0.617173) | 0.074596 / 0.296338 (-0.221743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278483 / 0.215209 (0.063274) | 2.757734 / 2.077655 (0.680079) | 1.431875 / 1.504120 (-0.072245) | 1.320315 / 1.541195 (-0.220879) | 1.319433 / 1.468490 (-0.149058) | 0.566134 / 4.584777 (-4.018643) | 2.407416 / 3.745712 (-1.338296) | 2.765087 / 5.269862 (-2.504775) | 1.727335 / 4.565676 (-2.838341) | 0.065267 / 0.424275 (-0.359008) | 0.005466 / 0.007607 (-0.002141) | 0.336667 / 0.226044 (0.110622) | 3.311721 / 2.268929 (1.042792) | 1.768960 / 55.444624 (-53.675664) | 1.510854 / 6.876477 (-5.365623) | 1.499345 / 2.142072 (-0.642728) | 0.649205 / 4.805227 (-4.156022) | 0.118920 / 6.500664 (-6.381744) | 0.041570 / 0.075469 (-0.033899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976127 / 1.841788 (-0.865660) | 11.646120 / 8.074308 (3.571812) | 9.710204 / 10.191392 (-0.481188) | 0.129081 / 0.680424 (-0.551342) | 0.013874 / 0.534201 (-0.520327) | 0.287044 / 0.579283 (-0.292239) | 0.268684 / 0.434364 (-0.165680) | 0.328465 / 0.540337 (-0.211872) | 0.420433 / 1.386936 (-0.966503) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005380 / 0.011353 (-0.005973) | 0.003582 / 0.011008 (-0.007427) | 0.049539 / 0.038508 (0.011031) | 0.032363 / 0.023109 (0.009253) | 0.277697 / 0.275898 (0.001799) | 0.303861 / 0.323480 (-0.019618) | 0.004226 / 0.007986 (-0.003759) | 0.002749 / 0.004328 (-0.001579) | 0.049404 / 0.004250 (0.045153) | 0.040602 / 0.037052 (0.003550) | 0.292995 / 0.258489 (0.034506) | 0.317958 / 0.293841 (0.024117) | 0.030052 / 0.128546 (-0.098494) | 0.010179 / 0.075646 (-0.065467) | 0.058600 / 0.419271 (-0.360672) | 0.033202 / 0.043533 (-0.010331) | 0.282474 / 0.255139 (0.027335) | 0.299330 / 0.283200 (0.016130) | 0.017612 / 0.141683 (-0.124071) | 1.160199 / 1.452155 (-0.291955) | 1.193248 / 1.492716 (-0.299468) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093450 / 0.018006 (0.075443) | 0.311391 / 0.000490 (0.310901) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022045 / 0.037411 (-0.015366) | 0.075238 / 0.014526 (0.060712) | 0.086648 / 0.176557 (-0.089908) | 0.128595 / 0.737135 (-0.608540) | 0.088785 / 0.296338 (-0.207553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283928 / 0.215209 (0.068719) | 2.780663 / 2.077655 (0.703008) | 1.517870 / 1.504120 (0.013751) | 1.402606 / 1.541195 (-0.138588) | 1.408382 / 1.468490 (-0.060108) | 0.579216 / 4.584777 (-4.005560) | 0.979349 / 3.745712 (-2.766363) | 2.847551 / 5.269862 (-2.422311) | 1.774713 / 4.565676 (-2.790963) | 0.064635 / 0.424275 (-0.359640) | 0.005038 / 0.007607 (-0.002569) | 0.341763 / 0.226044 (0.115719) | 3.351240 / 2.268929 (1.082311) | 1.871082 / 55.444624 (-53.573542) | 1.592683 / 6.876477 (-5.283794) | 1.619814 / 2.142072 (-0.522259) | 0.661628 / 4.805227 (-4.143599) | 0.118287 / 6.500664 (-6.382377) | 0.041289 / 0.075469 (-0.034180) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010075 / 1.841788 (-0.831712) | 11.949132 / 8.074308 (3.874824) | 10.004906 / 10.191392 (-0.186486) | 0.138622 / 0.680424 (-0.541802) | 0.015134 / 0.534201 (-0.519067) | 0.286300 / 0.579283 (-0.292984) | 0.125163 / 0.434364 (-0.309201) | 0.378641 / 0.540337 (-0.161696) | 0.422805 / 1.386936 (-0.964131) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#282379fbd58df2b5065b70330750688acb4eb461 \"CML watermark\")\n" ]
2024-05-02T07:49:15
2024-05-02T11:49:35
2024-05-02T11:43:00
MEMBER
null
EDIT: ~~Fix test_delete_from_hub on Windows by passing an explicit encoding.~~
Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string) instead of writing a local file and uploading it.
Note that local files created on Windows have "\r\n" line endings instead of "\n", and these are no longer transformed to "\n" by the Hub.
Fix #6856.
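A minimal sketch of the approach described above, assuming a plain `HfApi` client (repo id and README content are illustrative): encoding the string in memory fixes the uploaded bytes to "\n" line endings regardless of the OS running the test.

```python
from huggingface_hub import CommitOperationAdd, HfApi

readme = "---\nconfigs:\n- config_name: cats\n  data_files:\n  - split: train\n    path: cats/train/*\n---\n"

HfApi().create_commit(
    repo_id="user/dataset-with-configs",  # illustrative repo id
    repo_type="dataset",
    operations=[
        # Bytes built in memory always carry "\n", unlike a file written
        # on a Windows runner, which would carry "\r\n".
        CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=readme.encode("utf-8")),
    ],
    commit_message="Update README.md",
)
```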
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6857/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6857", "html_url": "https://github.com/huggingface/datasets/pull/6857", "diff_url": "https://github.com/huggingface/datasets/pull/6857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6857.patch", "merged_at": "2024-05-02T11:43:00" }
true
https://api.github.com/repos/huggingface/datasets/issues/6856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6856/comments
https://api.github.com/repos/huggingface/datasets/issues/6856/events
https://github.com/huggingface/datasets/issues/6856
2,274,828,933
I_kwDODunzps6HlyKF
6,856
CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "After investigation, I have found that when a local file is uploaded to the Hub, the new line character is no longer transformed to \"\\n\": on Windows machine now it is kept as \"\\r\\n\".\r\n\r\nAny idea why this changed?\r\nCC: @lhoestq " ]
2024-05-02T07:37:03
2024-05-02T11:43:01
2024-05-02T11:43:01
MEMBER
null
CI fails on Windows for test_delete_from_hub after the merge of:
- #6820

This is weird because the CI was green in the PR branch before merging to main.
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
  At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n  data_files:\r\n  - split: train\r\n    path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n  data_files:\n  - split: train\n    path: cats/train/*\n---\n')
  Full diff:
    [
        CommitOperationDelete(
            path_in_repo='dogs/train/0000.csv',
            is_folder=False,
        ),
        CommitOperationAdd(
            path_in_repo='README.md',
  -         path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n  data_files:\n  '
  ?                                 --------
  +         path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n  data_f'
  ?                              ++                ++                  ++
  -         b'  - split: train\n    path: cats/train/*\n---\n',
  ?            ^^^^^^                                 -
  +         b'iles:\r\n  - split: train\r\n    path: cats/train/*\r'
  ?           ++++++++++                ++                        ^
  +         b'\n---\r\n',
        ),
    ]
```
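A minimal sketch of the mechanism behind the mismatch, assuming the fixture README is written with a plain text-mode `open()` (content abbreviated): Windows text mode translates "\n" to "\r\n" on write unless the newline is pinned.

```python
# Platform-dependent: on Windows, text mode converts "\n" -> "\r\n" on write.
with open("README.md", "w") as f:
    f.write("---\nconfigs:\n- config_name: cats\n---\n")

# Platform-independent: newline="\n" keeps the written bytes identical everywhere.
with open("README.md", "w", newline="\n") as f:
    f.write("---\nconfigs:\n- config_name: cats\n---\n")
```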
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6856/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6855/comments
https://api.github.com/repos/huggingface/datasets/issues/6855/events
https://github.com/huggingface/datasets/pull/6855
2,274,777,812
PR_kwDODunzps5uUZNT
6,855
Fix dataset name for community Hub script-datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The CI errors were unrelated. I am merging main once they were fixed:\r\n- #6857", "The new CI tests failing are also unrelated to this PR.\r\n\r\nThey are caused the the release of huggingface_hub-0.23.0, which now raises a FutureWarning for resume_download. See:\r\n- #6860", "I have merged main once the CI was fixed:\r\n- #6861", "This PR is ready for review @huggingface/datasets.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005015 / 0.011353 (-0.006338) | 0.003576 / 0.011008 (-0.007432) | 0.063797 / 0.038508 (0.025289) | 0.030198 / 0.023109 (0.007089) | 0.237408 / 0.275898 (-0.038490) | 0.266534 / 0.323480 (-0.056946) | 0.003133 / 0.007986 (-0.004852) | 0.002639 / 0.004328 (-0.001689) | 0.049051 / 0.004250 (0.044801) | 0.044650 / 0.037052 (0.007597) | 0.253239 / 0.258489 (-0.005250) | 0.288301 / 0.293841 (-0.005540) | 0.027459 / 0.128546 (-0.101087) | 0.010457 / 0.075646 (-0.065189) | 0.207209 / 0.419271 (-0.212063) | 0.035537 / 0.043533 (-0.007996) | 0.240914 / 0.255139 (-0.014225) | 0.266817 / 0.283200 (-0.016383) | 0.019133 / 0.141683 (-0.122550) | 1.113268 / 1.452155 (-0.338887) | 1.183576 / 1.492716 (-0.309140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091218 / 0.018006 (0.073212) | 0.301690 / 0.000490 (0.301200) | 0.000234 / 0.000200 (0.000034) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018489 / 0.037411 (-0.018922) | 0.061379 / 0.014526 (0.046853) | 0.072854 / 0.176557 (-0.103703) | 0.120470 / 0.737135 (-0.616665) | 0.074206 / 0.296338 (-0.222133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281725 / 0.215209 (0.066516) | 2.805469 / 2.077655 (0.727814) | 1.478755 / 1.504120 (-0.025365) | 1.361718 / 1.541195 (-0.179477) | 1.381460 / 1.468490 (-0.087030) | 0.570758 / 4.584777 (-4.014019) | 2.434707 / 3.745712 (-1.311005) | 2.853322 / 5.269862 (-2.416539) | 1.785684 / 4.565676 (-2.779992) | 0.063551 / 0.424275 (-0.360724) | 0.005322 / 0.007607 (-0.002285) | 0.330938 / 0.226044 (0.104894) | 3.247414 / 2.268929 (0.978486) | 1.821401 / 55.444624 (-53.623223) | 1.554258 / 6.876477 (-5.322219) | 1.589263 / 2.142072 (-0.552809) | 0.651232 / 4.805227 (-4.153995) | 0.117903 / 6.500664 (-6.382761) | 0.041948 / 0.075469 (-0.033522) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000386 / 1.841788 (-0.841402) | 11.645406 / 8.074308 (3.571098) | 9.567803 / 10.191392 (-0.623589) | 0.142869 / 0.680424 (-0.537555) | 0.014250 / 0.534201 (-0.519951) | 0.287054 / 0.579283 (-0.292229) | 0.268849 / 0.434364 (-0.165515) | 0.323307 / 0.540337 (-0.217031) | 0.418965 / 1.386936 (-0.967971) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005216 / 0.011353 (-0.006137) | 0.003714 / 0.011008 (-0.007294) | 0.049544 / 0.038508 (0.011036) | 0.030897 / 0.023109 (0.007788) | 0.262478 / 0.275898 (-0.013420) | 0.289693 / 0.323480 (-0.033787) | 0.004226 / 0.007986 (-0.003760) | 0.002811 / 0.004328 (-0.001518) | 0.048256 / 0.004250 (0.044006) | 0.040974 / 0.037052 (0.003922) | 0.279431 / 0.258489 (0.020942) | 0.306538 / 0.293841 (0.012697) | 0.029493 / 0.128546 (-0.099054) | 0.010550 / 0.075646 (-0.065097) | 0.057826 / 0.419271 (-0.361445) | 0.033045 / 0.043533 
(-0.010488) | 0.264820 / 0.255139 (0.009681) | 0.282362 / 0.283200 (-0.000838) | 0.018387 / 0.141683 (-0.123296) | 1.167956 / 1.452155 (-0.284199) | 1.247261 / 1.492716 (-0.245455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091962 / 0.018006 (0.073956) | 0.300725 / 0.000490 (0.300236) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021835 / 0.037411 (-0.015576) | 0.076954 / 0.014526 (0.062428) | 0.087224 / 0.176557 (-0.089332) | 0.127529 / 0.737135 (-0.609606) | 0.089651 / 0.296338 (-0.206688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290878 / 0.215209 (0.075669) | 2.845647 / 2.077655 (0.767992) | 1.550515 / 1.504120 (0.046395) | 1.422251 / 1.541195 (-0.118944) | 1.425366 / 1.468490 (-0.043124) | 0.559228 / 4.584777 (-4.025549) | 0.970661 / 3.745712 (-2.775051) | 2.755494 / 5.269862 (-2.514367) | 1.724285 / 4.565676 (-2.841391) | 0.062981 / 0.424275 (-0.361294) | 0.006644 / 0.007607 (-0.000963) | 0.344315 / 0.226044 (0.118270) | 3.383452 / 2.268929 (1.114524) | 1.914809 / 55.444624 (-53.529815) | 1.626189 / 6.876477 (-5.250288) | 1.614631 / 2.142072 (-0.527441) | 0.636415 / 4.805227 (-4.168812) | 0.115318 / 6.500664 (-6.385346) | 0.040337 / 0.075469 (-0.035132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006257 / 1.841788 (-0.835531) | 12.152942 / 8.074308 (4.078634) | 9.744413 / 10.191392 (-0.446979) | 0.139431 / 0.680424 (-0.540993) | 0.015601 / 0.534201 (-0.518600) | 0.287069 / 0.579283 (-0.292214) | 0.125020 / 0.434364 (-0.309344) | 0.380366 / 0.540337 (-0.159971) | 0.423486 / 1.386936 (-0.963450) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762 \"CML watermark\")\n" ]
2024-05-02T07:05:44
2024-05-03T15:58:00
2024-05-03T15:51:57
MEMBER
null
Fix dataset name for community Hub script-datasets by passing an explicit `dataset_name` to `HubDatasetModuleFactoryWithScript`.
Fix #6854.
CC: @Wauplin
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6855/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6855/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6855", "html_url": "https://github.com/huggingface/datasets/pull/6855", "diff_url": "https://github.com/huggingface/datasets/pull/6855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6855.patch", "merged_at": "2024-05-03T15:51:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/6854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6854/comments
https://api.github.com/repos/huggingface/datasets/issues/6854/events
https://github.com/huggingface/datasets/issues/6854
2,274,767,686
I_kwDODunzps6HljNG
6,854
Wrong example of usage when config name is missing for community script-datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-05-02T06:59:39
2024-05-03T15:51:59
2024-05-03T15:51:58
MEMBER
null
As reported by @Wauplin, when loading a community dataset with a script, the usage example shown in the error message is wrong if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")

ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
	`load_dataset('fleurs', 'af_za')`
```
Note that the usage example in the error message suggests loading "fleurs" instead of "google/fleurs".
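A minimal sketch of the shape of the fix, assuming the error message is built from whatever name the module factory received; the helper below is illustrative, not the actual datasets internals. Passing the full repo id as `dataset_name` makes the suggested snippet copy-pasteable:

```python
def config_missing_error(dataset_name: str, config_names: list[str]) -> ValueError:
    # dataset_name must be the full Hub repo id (e.g. "google/fleurs"),
    # not the bare script name (e.g. "fleurs").
    return ValueError(
        "Config name is missing.\n"
        f"Please pick one among the available configs: {config_names}\n"
        f"Example of usage:\n\t`load_dataset('{dataset_name}', '{config_names[0]}')`"
    )
```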
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6854/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6854/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6853/comments
https://api.github.com/repos/huggingface/datasets/issues/6853/events
https://github.com/huggingface/datasets/issues/6853
2,272,570,000
I_kwDODunzps6HdKqQ
6,853
Support soft links for load_datasets imagefolder
{ "login": "billytcl", "id": 10386511, "node_id": "MDQ6VXNlcjEwMzg2NTEx", "avatar_url": "https://avatars.githubusercontent.com/u/10386511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/billytcl", "html_url": "https://github.com/billytcl", "followers_url": "https://api.github.com/users/billytcl/followers", "following_url": "https://api.github.com/users/billytcl/following{/other_user}", "gists_url": "https://api.github.com/users/billytcl/gists{/gist_id}", "starred_url": "https://api.github.com/users/billytcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/billytcl/subscriptions", "organizations_url": "https://api.github.com/users/billytcl/orgs", "repos_url": "https://api.github.com/users/billytcl/repos", "events_url": "https://api.github.com/users/billytcl/events{/privacy}", "received_events_url": "https://api.github.com/users/billytcl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-04-30T22:14:29
2024-04-30T22:14:29
null
NONE
null
### Feature request
`load_dataset` from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated.
### Motivation
Images come from a complex variety of sources, and we'd like to soft-link directly from the originating folders instead of copying. Copying the files risks image-versioning issues and doubles the required disk space.
### Your contribution
N/A
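A minimal workaround sketch under the current behavior (folder names illustrative): materialize the curated view with hard links, which follow the symlinks to the real files without copying any bytes. Note that hard links require source and destination to live on the same filesystem.

```python
import os
from pathlib import Path

from datasets import load_dataset

src = Path("curated_links")      # folder of symlinks; illustrative
dst = Path("curated_resolved")   # folder that load_dataset will actually see
dst.mkdir(exist_ok=True)

for link in src.iterdir():
    target = link.resolve()               # follow the symlink to the real file
    os.link(target, dst / link.name)      # hard link: no copy, no extra disk space

ds = load_dataset("imagefolder", data_dir=str(dst))
```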
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6853/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6852/comments
https://api.github.com/repos/huggingface/datasets/issues/6852/events
https://github.com/huggingface/datasets/issues/6852
2,272,465,011
I_kwDODunzps6HcxBz
6,852
Write token isn't working while pushing to datasets
{ "login": "zaibutcooler", "id": 130903099, "node_id": "U_kgDOB81sOw", "avatar_url": "https://avatars.githubusercontent.com/u/130903099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaibutcooler", "html_url": "https://github.com/zaibutcooler", "followers_url": "https://api.github.com/users/zaibutcooler/followers", "following_url": "https://api.github.com/users/zaibutcooler/following{/other_user}", "gists_url": "https://api.github.com/users/zaibutcooler/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaibutcooler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaibutcooler/subscriptions", "organizations_url": "https://api.github.com/users/zaibutcooler/orgs", "repos_url": "https://api.github.com/users/zaibutcooler/repos", "events_url": "https://api.github.com/users/zaibutcooler/events{/privacy}", "received_events_url": "https://api.github.com/users/zaibutcooler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-04-30T21:18:20
2024-05-02T00:55:46
2024-05-02T00:55:46
NONE
null
### Describe the bug
<img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc">
As you can see, I logged in to my account and the write token is valid, but I can't upload with my main account and I get that error. It worked on my test account on the first try. (I refreshed the token and tried a new token, but it still doesn't work.)
### Steps to reproduce the bug
1. I loaded a dataset.
2. I logged in using both the CLI and huggingface_hub.
3. I pushed to my own dataset (it went well without any issues on my test account).
### Expected behavior
It should have gone smoothly; this is not even my first time uploading to Hugging Face datasets.
### Environment info
Colab, datasets (tried multiple versions)
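A minimal debugging sketch for this kind of failure, assuming the token string itself is at hand (placeholder shown): check which identity and role the token resolves to, then pass it explicitly rather than relying on cached logins.

```python
from huggingface_hub import HfApi

token = "hf_..."  # placeholder for the write token being tested
api = HfApi(token=token)

info = api.whoami()   # raises if the token is invalid or expired
print(info["name"])   # the account the token actually belongs to
# Role of the token, e.g. "write" (field layout per current Hub responses):
print(info.get("auth", {}).get("accessToken", {}).get("role"))

# Then push with the token passed explicitly:
# ds.push_to_hub("username/my-dataset", token=token)
```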
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6852/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6851/comments
https://api.github.com/repos/huggingface/datasets/issues/6851/events
https://github.com/huggingface/datasets/issues/6851
2,270,965,503
I_kwDODunzps6HXC7_
6,851
load_dataset('emotion') UnicodeDecodeError
{ "login": "L-Block-C", "id": 32314558, "node_id": "MDQ6VXNlcjMyMzE0NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/32314558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/L-Block-C", "html_url": "https://github.com/L-Block-C", "followers_url": "https://api.github.com/users/L-Block-C/followers", "following_url": "https://api.github.com/users/L-Block-C/following{/other_user}", "gists_url": "https://api.github.com/users/L-Block-C/gists{/gist_id}", "starred_url": "https://api.github.com/users/L-Block-C/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/L-Block-C/subscriptions", "organizations_url": "https://api.github.com/users/L-Block-C/orgs", "repos_url": "https://api.github.com/users/L-Block-C/repos", "events_url": "https://api.github.com/users/L-Block-C/events{/privacy}", "received_events_url": "https://api.github.com/users/L-Block-C/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-04-30T09:25:01
2024-04-30T09:25:01
null
NONE
null
### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
The dataset should load successfully.
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.19.0
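Not a fix, but a diagnostic worth noting: byte `0x8b` at position 1 matches the second byte of the gzip magic number (`0x1f 0x8b`), which typically means a gzip-compressed download is being decoded as plain UTF-8 text. A minimal check sketch (file path illustrative):

```python
import gzip

path = "train.txt"  # illustrative: the cached file that fails to decode

with open(path, "rb") as f:
    magic = f.read(2)

if magic == b"\x1f\x8b":
    # Gzip-compressed content: decompress before treating it as UTF-8 text.
    with gzip.open(path, "rt", encoding="utf-8") as g:
        print(g.readline())
```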
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6851/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6850/comments
https://api.github.com/repos/huggingface/datasets/issues/6850/events
https://github.com/huggingface/datasets/issues/6850
2,269,500,624
I_kwDODunzps6HRdTQ
6,850
Problem loading voxpopuli dataset
{ "login": "Namangarg110", "id": 40496687, "node_id": "MDQ6VXNlcjQwNDk2Njg3", "avatar_url": "https://avatars.githubusercontent.com/u/40496687?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Namangarg110", "html_url": "https://github.com/Namangarg110", "followers_url": "https://api.github.com/users/Namangarg110/followers", "following_url": "https://api.github.com/users/Namangarg110/following{/other_user}", "gists_url": "https://api.github.com/users/Namangarg110/gists{/gist_id}", "starred_url": "https://api.github.com/users/Namangarg110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Namangarg110/subscriptions", "organizations_url": "https://api.github.com/users/Namangarg110/orgs", "repos_url": "https://api.github.com/users/Namangarg110/repos", "events_url": "https://api.github.com/users/Namangarg110/events{/privacy}", "received_events_url": "https://api.github.com/users/Namangarg110/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Version 2.18 works without problem.", "@Namangarg110 @mohsen-goodarzi The bug appears because the number of urls is less than 16 and the algorithm is meant to work on the previously created mode for a single url as stated on line 314: https://github.com/huggingface/datasets/blob/1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762/src/datasets/download/download_manager.py#L314\r\n\r\nIn addition, previously `map_nested` function was supported without batching and it is meant to be the default performance. \r\n\r\nOne of the shortest walk-arounds would be changing the part of the manager with the current setting:\r\n```\r\n if len(url_or_urls) >= 16:\r\n download_func = partial(self._download_batched, download_config=download_config)\r\n else:\r\n download_func = partial(self._download_single, download_config=download_config)\r\n\r\n start_time = datetime.now()\r\n with stack_multiprocessing_download_progress_bars():\r\n downloaded_path_or_paths = map_nested(\r\n download_func,\r\n url_or_urls,\r\n map_tuple=True,\r\n num_proc=download_config.num_proc,\r\n desc=\"Downloading data files\",\r\n batched=True if len(url_or_urls) >= 16 else False,\r\n batch_size=-1,\r\n )\r\n```\r\n\r\nI would suggest to consider other datasets for similar issues and make a pull-request. ", "Thanks for reporting @Namangarg110 and thanks for the investigation @MilanaShhanukova.\r\n\r\nApparently, there is an issue with the download functionality.\r\nI am proposing a fix." ]
2024-04-29T16:46:51
2024-05-06T09:25:54
2024-05-06T09:25:54
NONE
null
### Describe the bug ``` Exception has occurred: FileNotFoundError Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'} ``` There is an error in the URL-creation logic. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/en/asr_train.tsv Basically there should be links directly under ```metadata["train"]```, not under ```metadata["train"][self.config.languages[0]]```; the same applies to the audio URLs. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("facebook/voxpopuli","en") ``` ### Expected behavior Dataset should be loaded successfully. ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.12.2
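A minimal sketch reconstructing the failure from the error message above (values are illustrative, not the builder's actual code): the whole per-language dict, rather than a single path, ends up interpolated into the URL.

```python
base = "https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/"
metadata = {"train": {"en": "data/en/asr_train.tsv"}}

buggy_url = base + str(metadata["train"])   # .../{'en': 'data/en/asr_train.tsv'}
fixed_url = base + metadata["train"]["en"]  # .../data/en/asr_train.tsv
print(buggy_url)
print(fixed_url)
```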
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6850/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6850/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6849/comments
https://api.github.com/repos/huggingface/datasets/issues/6849/events
https://github.com/huggingface/datasets/pull/6849
2,268,718,355
PR_kwDODunzps5t_wnu
6,849
fix webdataset filename split
{ "login": "Bowser1704", "id": 43539191, "node_id": "MDQ6VXNlcjQzNTM5MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/43539191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bowser1704", "html_url": "https://github.com/Bowser1704", "followers_url": "https://api.github.com/users/Bowser1704/followers", "following_url": "https://api.github.com/users/Bowser1704/following{/other_user}", "gists_url": "https://api.github.com/users/Bowser1704/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bowser1704/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bowser1704/subscriptions", "organizations_url": "https://api.github.com/users/Bowser1704/orgs", "repos_url": "https://api.github.com/users/Bowser1704/repos", "events_url": "https://api.github.com/users/Bowser1704/events{/privacy}", "received_events_url": "https://api.github.com/users/Bowser1704/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-04-29T10:57:18
2024-04-29T11:14:41
null
NONE
null
Use `os.path.splitext` to parse `field_name`, fixing filenames that contain dots, e.g.: ``` a.b.jpeg a.b.txt ```
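A sketch of the splitting behavior this PR targets (illustrative, not the exact webdataset loader internals): `os.path.splitext` keeps everything before the final dot as the sample key and treats only the last extension as the field name.

```python
import os

for filename in ["a.b.jpeg", "a.b.txt"]:
    key, ext = os.path.splitext(filename)  # splits at the *last* dot
    field_name = ext.lstrip(".")
    print(key, field_name)  # -> "a.b jpeg" and "a.b txt"
```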
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6849/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6849", "html_url": "https://github.com/huggingface/datasets/pull/6849", "diff_url": "https://github.com/huggingface/datasets/pull/6849.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6849.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6848/comments
https://api.github.com/repos/huggingface/datasets/issues/6848/events
https://github.com/huggingface/datasets/issues/6848
2,268,622,609
I_kwDODunzps6HOG8R
6,848
Can't Download Common Voice 17.0 hy-AM
{ "login": "mheryerznkanyan", "id": 31586104, "node_id": "MDQ6VXNlcjMxNTg2MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/31586104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mheryerznkanyan", "html_url": "https://github.com/mheryerznkanyan", "followers_url": "https://api.github.com/users/mheryerznkanyan/followers", "following_url": "https://api.github.com/users/mheryerznkanyan/following{/other_user}", "gists_url": "https://api.github.com/users/mheryerznkanyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mheryerznkanyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mheryerznkanyan/subscriptions", "organizations_url": "https://api.github.com/users/mheryerznkanyan/orgs", "repos_url": "https://api.github.com/users/mheryerznkanyan/repos", "events_url": "https://api.github.com/users/mheryerznkanyan/events{/privacy}", "received_events_url": "https://api.github.com/users/mheryerznkanyan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Same issue here." ]
2024-04-29T10:06:02
2024-05-13T06:09:30
null
NONE
null
### Describe the bug I want to download Common Voice 17.0 hy-AM but it returns an error. ``` The version_base parameter is not specified. Please specify a compatability version level, or None. Will assume defaults for version 1.1 @hydra.main(config_name='hfds_config', config_path=None) /usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default. See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information. ret = run_job( /usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0 You can avoid this message in future by passing the argument `trust_remote_code=True`. Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`. warnings.warn( Reading metadata...: 6180it [00:00, 133224.37it/s]les/s] Generating train split: 0 examples [00:00, ? examples/s] HuggingFace datasets failed due to some reason (stack trace below). For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`). Once logged in, you need to set `use_auth_token=True` when calling this script. Traceback error for reference : Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single example = self.info.features.encode_example(record) if self.info.features is not None else record File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example return encode_nested_example(self, example) File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example { File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp> { File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict yield key, tuple(d[key] for d in dicts) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: 'sentence_id' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main dataset = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e 
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug ``` from datasets import load_dataset cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM") ``` ### Expected behavior It works fine with common_voice_16_1 ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35 - Python version: 3.11.6 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
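A hedged workaround sketch, grounded only in the report above (v16.1 loads correctly while the v17.0 script raises `KeyError: 'sentence_id'`): pin the previous release until the hy-AM metadata provides the missing field.

```python
from datasets import load_dataset

# The v17 script expects a `sentence_id` column that the hy-AM TSV
# apparently lacks (an inference from the KeyError above), so fall back
# to the release the reporter confirms works.
cv = load_dataset("mozilla-foundation/common_voice_16_1", "hy-AM",
                  trust_remote_code=True)
```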
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6848/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6847/comments
https://api.github.com/repos/huggingface/datasets/issues/6847/events
https://github.com/huggingface/datasets/issues/6847
2,268,589,177
I_kwDODunzps6HN-x5
6,847
[Streaming] Only load requested splits without resolving files for the other splits
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832", "I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/80919eca-eb6c-407d-8070-52642fdcee54\">\r\n<img width=\"914\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/5219c201-e22e-4536-acc3-a922677785ff\">\r\n\r\n\r\nIt seems to be downloading, loading, and generating splits using the entire dataset." ]
2024-04-29T09:49:32
2024-05-07T04:43:59
null
MEMBER
null
e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split. This is due to `load_dataset()` resolving the files of all the splits even if only one is needed. In `dataset-viewer` the splits are loaded in different jobs so it results in 300 jobs that resolve 300 splits -> 90k calls to `/paths-info`
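A minimal sketch of the call pattern that exposes the issue (the `streaming` flag is an assumption about typical viewer-like usage):

```python
from datasets import load_dataset

# Only "train" is requested, but load_dataset currently resolves the
# files of all ~300 splits before yielding anything.
ds = load_dataset("thangvip/cosmopedia_vi_math", split="train", streaming=True)
```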
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6847/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6847/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6846/comments
https://api.github.com/repos/huggingface/datasets/issues/6846/events
https://github.com/huggingface/datasets/issues/6846
2,267,352,120
I_kwDODunzps6HJQw4
6,846
Unimaginably slow iteration
{ "login": "rangehow", "id": 88258534, "node_id": "MDQ6VXNlcjg4MjU4NTM0", "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rangehow", "html_url": "https://github.com/rangehow", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "organizations_url": "https://api.github.com/users/rangehow/orgs", "repos_url": "https://api.github.com/users/rangehow/repos", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "received_events_url": "https://api.github.com/users/rangehow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "In every iteration you load the full \"random_input\" column in memory, only then to access it's i-th element.\r\n\r\nYou can try using this instead\r\n\r\na,b=dataset[i]['random_input'],dataset[i]['random_output']" ]
2024-04-28T05:24:14
2024-05-06T08:30:03
2024-05-06T08:30:03
NONE
null
### Describe the bug Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset. Is there something wrong with my iteration? ### Steps to reproduce the bug ```python import datasets import time import random num_rows = 52000 num_cols = 500 random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)] random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)] s=time.time() d={'random_input':random_input,'random_output':random_output} dataset=datasets.Dataset.from_dict(d) print('from dict',time.time()-s) print(dataset) for i in range(len(dataset)): aa=time.time() a,b=dataset['random_input'][i],dataset['random_output'][i] print(time.time()-aa) ``` Corresponding output: ```bash from dict 9.215498685836792 Dataset({ features: ['random_input', 'random_output'], num_rows: 52000 }) 19.129778146743774 19.329464197158813 19.27668261528015 19.28557538986206 19.247620582580566 19.624247074127197 19.28673791885376 19.301053047180176 19.290496110916138 19.291821718215942 19.357765197753906 ``` ### Expected behavior Under normal circumstances, iteration should be very fast, since it involves nothing beyond retrieving items. ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
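A sketch of the row-first access pattern suggested in the comment above: indexing the row once avoids materializing an entire column on every iteration (sizes are shrunk here for brevity).

```python
import datasets

ds = datasets.Dataset.from_dict({
    "random_input":  [[1] * 500] * 1000,
    "random_output": [[2] * 500] * 1000,
})
for i in range(len(ds)):
    row = ds[i]  # loads a single row, not the whole column
    a, b = row["random_input"], row["random_output"]
```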
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6846/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6845/comments
https://api.github.com/repos/huggingface/datasets/issues/6845/events
https://github.com/huggingface/datasets/issues/6845
2,265,876,551
I_kwDODunzps6HDohH
6,845
load_dataset doesn't support list column
{ "login": "arthasking123", "id": 16257131, "node_id": "MDQ6VXNlcjE2MjU3MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthasking123", "html_url": "https://github.com/arthasking123", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "repos_url": "https://api.github.com/users/arthasking123/repos", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded as ```list<item: null>```, however in some other chunk it was ```list<item: string>```. This triggered a TypeError running the function ```table_cast()```.\r\n\r\nI temporarily fixed this by re-dumping the file into a regular JSON format instead of lines of JSON dict. I didn't dig deeper for the lack of knowledge and programming ability but I do hope some developer of this repo will find and fix it." ]
2024-04-26T14:11:44
2024-05-15T12:06:59
null
NONE
null
### Describe the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") got exception: Generating train split: 1834 examples [00:00, 5227.98 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single writer.write_table(table) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table pa_table = table_cast(pa_table, self._schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast return cast_table_to_schema(table, schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature casted_array_values = _c(array.values, feature[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper return func(array, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string> to {'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)} The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/llm/train-2.py", line 150, in <module> dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", 
line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ### Steps to reproduce the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") ### Expected behavior no exception ### Environment info python 3.11 datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6845/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6844/comments
https://api.github.com/repos/huggingface/datasets/issues/6844/events
https://github.com/huggingface/datasets/pull/6844
2,265,870,546
PR_kwDODunzps5t2PRA
6,844
Retry on HF Hub error when streaming
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@Wauplin This PR is indeed not needed as explained in https://github.com/huggingface/datasets/issues/6843#issuecomment-2079630389. \r\n\r\nSo, I'm closing it." ]
2024-04-26T14:09:04
2024-04-26T15:37:42
2024-04-26T15:37:42
COLLABORATOR
null
Retry on `huggingface_hub`'s `HfHubHTTPError` in streaming mode. Fix #6843
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6844/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6844", "html_url": "https://github.com/huggingface/datasets/pull/6844", "diff_url": "https://github.com/huggingface/datasets/pull/6844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6844.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6843/comments
https://api.github.com/repos/huggingface/datasets/issues/6843/events
https://github.com/huggingface/datasets/issues/6843
2,265,432,897
I_kwDODunzps6HB8NB
6,843
IterableDataset raises exception instead of retrying
{ "login": "bauwenst", "id": 145220868, "node_id": "U_kgDOCKflBA", "avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bauwenst", "html_url": "https://github.com/bauwenst", "followers_url": "https://api.github.com/users/bauwenst/followers", "following_url": "https://api.github.com/users/bauwenst/following{/other_user}", "gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}", "starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions", "organizations_url": "https://api.github.com/users/bauwenst/orgs", "repos_url": "https://api.github.com/users/bauwenst/repos", "events_url": "https://api.github.com/users/bauwenst/events{/privacy}", "received_events_url": "https://api.github.com/users/bauwenst/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for reporting! I've opened a PR with a fix.", "Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:", "Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.", "@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days.", "I also have my runs crash a surprising amount due to the dataloader crashing because of the hub, some way to address this would be nice." ]
2024-04-26T10:00:43
2024-04-30T13:14:13
null
NONE
null
### Describe the bug In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here: https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19 If GitHub code snippets still aren't working, here's a copy: ```python def read_with_retries(*args, **kwargs): disconnect_err = None for retry in range(1, max_retries + 1): try: out = read(*args, **kwargs) break except (ClientError, TimeoutError) as err: disconnect_err = err logger.warning( f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]" ) time.sleep(config.STREAMING_READ_RETRY_INTERVAL) else: raise ConnectionError("Server Disconnected") from disconnect_err return out ``` With the latest outage, the end of my stack trace looked like this: ``` ... File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries out = read(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read return self._buffer.read(size) ^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto data = self.read(len(byte_view)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read buf = self._fp.read(io.DEFAULT_BUFFER_SIZE) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read return self.file.read(size) ^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read out = self.cache._fetch(self.loc, self.loc + length) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch self.cache = self.fetcher(start, end) # new block replaces old ^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range hf_raise_for_status(r) File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status raise HfHubHTTPError(str(e), response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz ``` Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately. ### Steps to reproduce the bug Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace. ### Expected behavior All HTTP errors while iterating a streamable dataset should cause retries. 
### Environment info Output from `datasets-cli env`: - `datasets` version: 2.18.0 - Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.7 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6843/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6842/comments
https://api.github.com/repos/huggingface/datasets/issues/6842/events
https://github.com/huggingface/datasets/issues/6842
2,264,692,159
I_kwDODunzps6G_HW_
6,842
Datasets with files with colon : in filenames cannot be used on Windows
{ "login": "jacobjennings", "id": 1038927, "node_id": "MDQ6VXNlcjEwMzg5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1038927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jacobjennings", "html_url": "https://github.com/jacobjennings", "followers_url": "https://api.github.com/users/jacobjennings/followers", "following_url": "https://api.github.com/users/jacobjennings/following{/other_user}", "gists_url": "https://api.github.com/users/jacobjennings/gists{/gist_id}", "starred_url": "https://api.github.com/users/jacobjennings/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobjennings/subscriptions", "organizations_url": "https://api.github.com/users/jacobjennings/orgs", "repos_url": "https://api.github.com/users/jacobjennings/repos", "events_url": "https://api.github.com/users/jacobjennings/events{/privacy}", "received_events_url": "https://api.github.com/users/jacobjennings/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-04-26T00:14:16
2024-04-26T00:14:16
null
NONE
null
### Describe the bug Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons (":") in filenames. These should be converted into alternative strings, as sketched below. ### Steps to reproduce the bug 1. Attempt to run load_dataset on MLCommons/peoples_speech ### Expected behavior Does not crash during extraction ### Environment info Windows 11, NTFS filesystem, Python 3.12
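A sketch of the substitution the report suggests, as an illustrative helper rather than the `datasets` implementation:

```python
import re

def sanitize_for_windows(filename: str) -> str:
    # NTFS forbids ':' (among other characters) in file names, so replace
    # it with a safe placeholder before extracting archive members.
    return re.sub(r"[:]", "_", filename)

print(sanitize_for_windows("audio_2021-01-01T00:00:00.flac"))
```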
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6842/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6841/comments
https://api.github.com/repos/huggingface/datasets/issues/6841/events
https://github.com/huggingface/datasets/issues/6841
2,264,687,683
I_kwDODunzps6G_GRD
6,841
Unable to load wiki_auto_asset_turk from GEM
{ "login": "abhinavsethy", "id": 23074600, "node_id": "MDQ6VXNlcjIzMDc0NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/23074600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhinavsethy", "html_url": "https://github.com/abhinavsethy", "followers_url": "https://api.github.com/users/abhinavsethy/followers", "following_url": "https://api.github.com/users/abhinavsethy/following{/other_user}", "gists_url": "https://api.github.com/users/abhinavsethy/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhinavsethy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhinavsethy/subscriptions", "organizations_url": "https://api.github.com/users/abhinavsethy/orgs", "repos_url": "https://api.github.com/users/abhinavsethy/repos", "events_url": "https://api.github.com/users/abhinavsethy/events{/privacy}", "received_events_url": "https://api.github.com/users/abhinavsethy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`", "Thanks Mario. Still getting the same issue though with the suggested fix\r\n\r\n#cat gem_sari.py\r\nimport datasets\r\nprint (datasets.__version__)\r\ndataset =datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")\r\n\r\nEnd up with \r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1767, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1565, in _prepare_split\r\n split_info = self.info.splits[split_generator.name]\r\n ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py\", line 532, in __getitem__\r\n instructions = make_file_instructions(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py\", line 121, in make_file_instructions\r\n info.name: filenames_for_dataset_split(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py\", line 72, in filenames_for_dataset_split\r\n prefix = os.path.join(path, prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<frozen posixpath>\", line 76, in join\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType", "Hmm, that's weird. Maybe try deleting the cache with `!rm -rf ~/.cache/huggingface/datasets` and then re-download.", "Tried that a couple of time. It does download the data fresh but end up with same error. Is there a way to see if its using the right version ?", "You can check the version with `python -c \"import datasets; print(datasets.__version__)\"`", "the datasets version is 2.18. \r\n\r\nI wanted to see if the command datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\") is using the right revision (refs/pr/5). \r\n\r\n\r\n\r\n\r\n\r\n " ]
2024-04-26T00:08:47
2024-04-26T17:22:58
2024-04-26T16:12:29
NONE
null
### Describe the bug I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call >>import datasets >>print (datasets.__version__) >>dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") System output: Generating train split: 100%|█| 483801/483801 [00:03<00:00, 127164.26 examples/s Generating validation split: 100%|█| 20000/20000 [00:00<00:00, 116052.94 example Generating test_asset split: 100%|██| 359/359 [00:00<00:00, 76155.93 examples/s] Generating test_turk split: 100%|███| 359/359 [00:00<00:00, 87691.76 examples/s] Traceback (most recent call last): File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module> dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset builder_instance.download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split split_info = self.info.splits[split_generator.name] ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__ instructions = make_file_instructions( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions info.name: filenames_for_dataset_split( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split prefix = os.path.join(path, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen posixpath>", line 76, in join TypeError: expected str, bytes or os.PathLike object, not NoneType ### Steps to reproduce the bug import datasets print (datasets.__version__) dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") ### Expected behavior Should be able to load the dataset without any issues ### Environment info datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also) Python 3.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6841/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6840/comments
https://api.github.com/repos/huggingface/datasets/issues/6840/events
https://github.com/huggingface/datasets/issues/6840
2,264,604,766
I_kwDODunzps6G-yBe
6,840
Delete uploaded files from the UI
{ "login": "saicharan2804", "id": 62512681, "node_id": "MDQ6VXNlcjYyNTEyNjgx", "avatar_url": "https://avatars.githubusercontent.com/u/62512681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saicharan2804", "html_url": "https://github.com/saicharan2804", "followers_url": "https://api.github.com/users/saicharan2804/followers", "following_url": "https://api.github.com/users/saicharan2804/following{/other_user}", "gists_url": "https://api.github.com/users/saicharan2804/gists{/gist_id}", "starred_url": "https://api.github.com/users/saicharan2804/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saicharan2804/subscriptions", "organizations_url": "https://api.github.com/users/saicharan2804/orgs", "repos_url": "https://api.github.com/users/saicharan2804/repos", "events_url": "https://api.github.com/users/saicharan2804/events{/privacy}", "received_events_url": "https://api.github.com/users/saicharan2804/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-04-25T22:33:57
2024-04-25T22:33:57
null
NONE
null
### Feature request Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI. ### Motivation Would be a useful addition ### Your contribution Would love to help out with some guidance
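Until the UI supports this, a programmatic workaround sketch using the `huggingface_hub` client (the repo and path names below are placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in via `huggingface-cli login`
api.delete_file(
    path_in_repo="data/old_file.csv",  # placeholder path
    repo_id="username/my-dataset",     # placeholder repo
    repo_type="dataset",
    commit_message="Remove obsolete file",
)
```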
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6840/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6839/comments
https://api.github.com/repos/huggingface/datasets/issues/6839/events
https://github.com/huggingface/datasets/pull/6839
2,263,761,062
PR_kwDODunzps5tvC1c
6,839
Remove token arg from CLI examples
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6839). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005311 / 0.011353 (-0.006042) | 0.003691 / 0.011008 (-0.007317) | 0.063714 / 0.038508 (0.025206) | 0.030875 / 0.023109 (0.007766) | 0.251210 / 0.275898 (-0.024688) | 0.280539 / 0.323480 (-0.042941) | 0.004262 / 0.007986 (-0.003724) | 0.002723 / 0.004328 (-0.001606) | 0.049487 / 0.004250 (0.045237) | 0.045655 / 0.037052 (0.008603) | 0.264399 / 0.258489 (0.005910) | 0.306613 / 0.293841 (0.012772) | 0.028513 / 0.128546 (-0.100033) | 0.010726 / 0.075646 (-0.064921) | 0.210601 / 0.419271 (-0.208670) | 0.036918 / 0.043533 (-0.006614) | 0.257872 / 0.255139 (0.002733) | 0.278951 / 0.283200 (-0.004249) | 0.017900 / 0.141683 (-0.123783) | 1.096749 / 1.452155 (-0.355406) | 1.152603 / 1.492716 (-0.340113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095193 / 0.018006 (0.077187) | 0.303919 / 0.000490 (0.303429) | 0.000226 / 0.000200 (0.000026) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018558 / 0.037411 (-0.018853) | 0.061106 / 0.014526 (0.046580) | 0.076233 / 0.176557 (-0.100323) | 0.122402 / 0.737135 (-0.614734) | 0.075579 / 0.296338 (-0.220760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283586 / 0.215209 (0.068377) | 2.766179 / 2.077655 (0.688524) | 1.481069 / 1.504120 (-0.023051) | 1.355004 / 1.541195 (-0.186191) | 1.392940 / 1.468490 (-0.075550) | 0.578878 / 4.584777 (-4.005899) | 2.432890 / 3.745712 (-1.312822) | 2.837912 / 5.269862 (-2.431949) | 1.762803 / 4.565676 (-2.802873) | 0.063339 / 0.424275 (-0.360937) | 0.005392 / 0.007607 (-0.002215) | 0.340271 / 0.226044 (0.114227) | 3.388371 / 2.268929 (1.119443) | 1.862622 / 55.444624 (-53.582002) | 1.543209 / 6.876477 (-5.333268) | 1.569858 / 2.142072 (-0.572215) | 0.651487 / 4.805227 (-4.153740) | 0.119048 / 6.500664 (-6.381616) | 0.042309 / 0.075469 (-0.033160) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991161 / 1.841788 (-0.850627) | 11.778857 / 8.074308 (3.704549) | 9.586019 / 10.191392 (-0.605373) | 0.148093 / 0.680424 (-0.532331) | 0.014301 / 0.534201 (-0.519900) | 0.287983 / 0.579283 (-0.291301) | 0.266070 / 0.434364 (-0.168293) | 0.328261 / 0.540337 (-0.212076) | 0.417908 / 1.386936 (-0.969028) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003740 / 0.011008 (-0.007268) | 0.049622 / 0.038508 (0.011114) | 0.030040 / 0.023109 (0.006931) | 0.262224 / 0.275898 (-0.013674) | 0.312216 / 0.323480 (-0.011264) | 0.004213 / 0.007986 (-0.003773) | 0.002737 / 0.004328 (-0.001592) | 0.049159 / 0.004250 (0.044908) | 0.041060 / 0.037052 (0.004008) | 0.275826 / 0.258489 (0.017337) | 0.301879 / 0.293841 (0.008038) | 0.029364 / 0.128546 (-0.099182) | 0.010453 / 0.075646 (-0.065193) | 0.058095 / 0.419271 (-0.361176) | 0.032898 / 0.043533 (-0.010635) | 0.263876 / 0.255139 (0.008737) | 0.281686 / 0.283200 (-0.001514) | 0.018711 / 0.141683 (-0.122971) | 1.126056 / 1.452155 (-0.326098) | 1.185125 / 1.492716 (-0.307591) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094153 / 0.018006 (0.076147) | 0.300719 / 0.000490 (0.300229) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022610 / 0.037411 (-0.014801) | 0.075502 / 0.014526 (0.060977) | 0.088858 / 0.176557 (-0.087699) | 0.129421 / 0.737135 (-0.607714) | 0.089331 / 0.296338 (-0.207007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291595 / 0.215209 (0.076386) | 2.864377 / 2.077655 (0.786722) | 1.543387 / 1.504120 (0.039267) | 1.404273 / 1.541195 (-0.136922) | 1.421964 / 1.468490 (-0.046526) | 0.579275 / 4.584777 (-4.005502) | 0.979212 / 3.745712 (-2.766500) | 2.822043 / 5.269862 (-2.447818) | 1.745015 / 4.565676 (-2.820661) | 0.064626 / 0.424275 (-0.359649) | 0.005006 / 0.007607 (-0.002601) | 0.345509 / 0.226044 (0.119464) | 3.410369 / 2.268929 (1.141440) | 1.875930 / 55.444624 (-53.568694) | 1.600841 / 6.876477 (-5.275636) | 1.611818 / 2.142072 (-0.530254) | 0.662277 / 4.805227 (-4.142950) | 0.117861 / 6.500664 (-6.382803) | 0.041061 / 0.075469 (-0.034408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007834 / 1.841788 (-0.833954) | 12.345653 / 8.074308 (4.271345) | 9.775237 / 10.191392 (-0.416155) | 0.135166 / 0.680424 (-0.545258) | 0.016799 / 0.534201 (-0.517402) | 0.289235 / 0.579283 (-0.290048) | 0.126196 / 0.434364 (-0.308168) | 0.382905 / 0.540337 (-0.157432) | 0.435248 / 1.386936 (-0.951688) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22bf5388748611a9255d8e17218d36d2f799f182 \"CML watermark\")\n" ]
2024-04-25T14:36:58
2024-04-26T17:03:51
2024-04-26T16:57:40
MEMBER
null
Remove token arg from CLI examples. Fix #6838. CC: @Wauplin
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6839/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6839/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6839", "html_url": "https://github.com/huggingface/datasets/pull/6839", "diff_url": "https://github.com/huggingface/datasets/pull/6839.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6839.patch", "merged_at": "2024-04-26T16:57:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/6838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6838/comments
https://api.github.com/repos/huggingface/datasets/issues/6838/events
https://github.com/huggingface/datasets/issues/6838
2,263,674,843
I_kwDODunzps6G7O_b
6,838
Remove token arg from CLI examples
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-04-25T14:00:38
2024-04-26T16:57:41
2024-04-26T16:57:41
MEMBER
null
As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603 > I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login)
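For illustration, a minimal sketch of the recommended alternatives (both `huggingface_hub.login` and the `HF_TOKEN` environment variable are standard `huggingface_hub` mechanisms; the token value is a placeholder):

```python
from huggingface_hub import login

# Equivalent to running `huggingface-cli login` once; the stored token is then
# picked up automatically, so CLI examples need no explicit --token argument.
login(token="hf_xxx")  # placeholder, never hardcode a real token

# Alternatively, export the token in the environment before invoking the CLI:
#   export HF_TOKEN=hf_xxx
```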
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6838/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6838/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6837/comments
https://api.github.com/repos/huggingface/datasets/issues/6837/events
https://github.com/huggingface/datasets/issues/6837
2,263,273,983
I_kwDODunzps6G5tH_
6,837
Cannot use cached dataset without Internet connection (or when servers are down)
{ "login": "DionisMuzenitov", "id": 112088378, "node_id": "U_kgDOBq5VOg", "avatar_url": "https://avatars.githubusercontent.com/u/112088378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DionisMuzenitov", "html_url": "https://github.com/DionisMuzenitov", "followers_url": "https://api.github.com/users/DionisMuzenitov/followers", "following_url": "https://api.github.com/users/DionisMuzenitov/following{/other_user}", "gists_url": "https://api.github.com/users/DionisMuzenitov/gists{/gist_id}", "starred_url": "https://api.github.com/users/DionisMuzenitov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DionisMuzenitov/subscriptions", "organizations_url": "https://api.github.com/users/DionisMuzenitov/orgs", "repos_url": "https://api.github.com/users/DionisMuzenitov/repos", "events_url": "https://api.github.com/users/DionisMuzenitov/events{/privacy}", "received_events_url": "https://api.github.com/users/DionisMuzenitov/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n \"hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00000-of-01024.json.gz\"\r\n ],\r\n [(\"allenai/c4\", \"1588ec454efa1a09f29cd18ddd04fe05fc8653a2\")],\r\n)\r\ndata_files = DataFilesDict({\"train\": data_files_list})\r\nc4_dataset = datasets.load_dataset(\r\n path=\"allenai/c4\",\r\n data_files=data_files,\r\n split=\"train\",\r\n cache_dir=\"/datesets/cache\",\r\n download_mode=\"reuse_cache_if_exists\",\r\n token=False,\r\n)\r\n```\r\nSecond solution also shows where to find the bug. I suggest that the hashing functions should always use only original parameter `data_files`, and not the one they get after connecting to the server and creating `DataFilesDict`", "Hi! You need to set the `HF_DATASETS_OFFLINE` env variable to `1` to load cached datasets offline, as explained in the docs [here](https://huggingface.co/docs/datasets/v2.19.0/en/loading#offline).", "Just tested. It doesn't work, because of the exact problem I described above: hash of dataset config is different.\r\nThe only error difference is the reason why it cannot connect to HuggingFace (now it's 'offline mode is enabled')\r\n![image](https://github.com/huggingface/datasets/assets/112088378/1a7e1720-d711-46e3-9c90-53d52c441e68)\r\n" ]
2024-04-25T10:48:20
2024-04-26T14:27:15
null
NONE
null
### Describe the bug I want to be able to use a cached dataset from HuggingFace even when I have no Internet connection (or when the HuggingFace servers are down, or my company has network issues). The reason I can't use it: the `data_files` argument of `datasets.load_dataset()` gets updated from the server before the caching hash is calculated. As a result, when I run the same code with and without Internet I get different dataset configuration directory names. ### Steps to reproduce the bug ``` import datasets c4_dataset = datasets.load_dataset( path="allenai/c4", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train", cache_dir="/datesets/cache", download_mode="reuse_cache_if_exists", token=False, ) ``` 1. Run this code with the Internet. 2. Run the same code without the Internet. ### Expected behavior When running without an Internet connection, the loader should be able to load the dataset from the cache. ### Environment info - `datasets` version: 2.19.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.13 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.12.2
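For context, the documented offline workflow looks like the sketch below (using the `HF_DATASETS_OFFLINE` variable pointed to in the comments); per the follow-up discussion, it still fails here because the config hash computed offline differs from the one computed online:

```python
import os

# Must be set before `datasets` is imported so the offline flag takes effect.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

# With a warm cache this should reuse the local copy; in this report it
# instead fails because the cached config directory name does not match.
c4_dataset = datasets.load_dataset(
    path="allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    cache_dir="/datesets/cache",
)
```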
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6837/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6836/comments
https://api.github.com/repos/huggingface/datasets/issues/6836/events
https://github.com/huggingface/datasets/issues/6836
2,262,249,919
I_kwDODunzps6G1zG_
6,836
ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0
{ "login": "ebsmothers", "id": 24319399, "node_id": "MDQ6VXNlcjI0MzE5Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/24319399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ebsmothers", "html_url": "https://github.com/ebsmothers", "followers_url": "https://api.github.com/users/ebsmothers/followers", "following_url": "https://api.github.com/users/ebsmothers/following{/other_user}", "gists_url": "https://api.github.com/users/ebsmothers/gists{/gist_id}", "starred_url": "https://api.github.com/users/ebsmothers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ebsmothers/subscriptions", "organizations_url": "https://api.github.com/users/ebsmothers/orgs", "repos_url": "https://api.github.com/users/ebsmothers/repos", "events_url": "https://api.github.com/users/ebsmothers/events{/privacy}", "received_events_url": "https://api.github.com/users/ebsmothers/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Get same error on same datasets too.", "+1", "same error" ]
2024-04-24T21:52:35
2024-05-14T04:08:19
null
NONE
null
### Describe the bug Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us. Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details are given in the repro below. ### Steps to reproduce the bug On 2.18.0, things work fine: ``` # First clear the locally cached dataset rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired pip install "datasets==2.18.0" python3 >>> from datasets import load_dataset >>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl') ``` On 2.19.0, they do not: ``` # First clear the locally cached dataset rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired pip install "datasets==2.19.0" python3 >>> from datasets import load_dataset >>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl') ``` The stack trace from the 2.19.0 version of `load_dataset` can be seen [here](https://gist.github.com/ebsmothers/f9b1f1949bee7030a8d7bb8a491550d2). Notably (and maybe unsurprisingly), if I do not delete the cache first, I am able to load the dataset successfully. Based on this, I suspect the cause is somewhere in the download logic. ### Expected behavior Download the dataset successfully :) ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34 - Python version: 3.11.9 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
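A possible mitigation to try while this is investigated (a sketch, not a confirmed fix; the assumption is that `ExpectedMoreSplits` is raised during split verification, which `verification_mode="no_checks"` skips):

```python
from datasets import load_dataset

dataset = load_dataset(
    "lvwerra/stack-exchange-paired",
    split="train",
    data_dir="data/rl",
    verification_mode="no_checks",  # skip split/size verification entirely
)
```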
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6836/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6835/comments
https://api.github.com/repos/huggingface/datasets/issues/6835/events
https://github.com/huggingface/datasets/pull/6835
2,261,079,263
PR_kwDODunzps5tl2fc
6,835
LargeListType support #6834
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6835). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Fixed the conversion from `pyarrow` to `python` `Sequence` features. \r\n\r\nThere is still an issue that if `features` are passed the `Sequence` always forces conversion to `ListArray`.\r\nThis probably causes issues if the `LargeListArray` is actually needed.\r\n\r\nThere doesn't seem to be a great solution since this list is created solely on the `schema` for `Sequence`.\r\nOne solution would be to always use `LargeListArray` instead.\r\n" ]
2024-04-24T11:34:24
2024-04-30T13:16:14
null
CONTRIBUTOR
null
Fixes #6834
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6835/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6835", "html_url": "https://github.com/huggingface/datasets/pull/6835", "diff_url": "https://github.com/huggingface/datasets/pull/6835.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6835.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6834/comments
https://api.github.com/repos/huggingface/datasets/issues/6834/events
https://github.com/huggingface/datasets/issues/6834
2,261,078,104
I_kwDODunzps6GxVBY
6,834
largelisttype not supported (.from_polars())
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-04-24T11:33:43
2024-04-24T12:06:37
null
CONTRIBUTOR
null
### Describe the bug The following code fails because LargeListType is not supported. This is especially a problem for .from_polars since polars uses LargeListType. ### Steps to reproduce the bug ```python import datasets import polars as pl df = pl.DataFrame({"list": [[]]}) datasets.Dataset.from_polars(df) ``` ### Expected behavior Convert LargeListType to list. ### Environment info - `datasets` version: 2.19.1.dev0 - Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38 - Python version: 3.12.2 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.3.1
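Until large lists are handled natively, one possible user-side workaround is sketched below: downcast `large_list` columns to plain `list` columns in Arrow before building the `Dataset`. Wrapping the table in `InMemoryTable` mirrors what `from_polars` does internally, but that is an assumption about internals rather than public API, and only top-level large lists are handled here:

```python
import polars as pl
import pyarrow as pa

import datasets
from datasets.table import InMemoryTable

df = pl.DataFrame({"list": [[1, 2], [3]]})  # polars emits large_list columns
table = df.to_arrow()

# Downcast every top-level large_list field to a plain list field.
schema = pa.schema(
    [
        pa.field(f.name, pa.list_(f.type.value_type))
        if pa.types.is_large_list(f.type)
        else f
        for f in table.schema
    ]
)
ds = datasets.Dataset(InMemoryTable(table.cast(schema)))
```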
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6834/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6833/comments
https://api.github.com/repos/huggingface/datasets/issues/6833/events
https://github.com/huggingface/datasets/issues/6833
2,259,731,274
I_kwDODunzps6GsMNK
6,833
Super slow iteration with trivial custom transform
{ "login": "xslittlegrass", "id": 2780075, "node_id": "MDQ6VXNlcjI3ODAwNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2780075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xslittlegrass", "html_url": "https://github.com/xslittlegrass", "followers_url": "https://api.github.com/users/xslittlegrass/followers", "following_url": "https://api.github.com/users/xslittlegrass/following{/other_user}", "gists_url": "https://api.github.com/users/xslittlegrass/gists{/gist_id}", "starred_url": "https://api.github.com/users/xslittlegrass/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xslittlegrass/subscriptions", "organizations_url": "https://api.github.com/users/xslittlegrass/orgs", "repos_url": "https://api.github.com/users/xslittlegrass/repos", "events_url": "https://api.github.com/users/xslittlegrass/events{/privacy}", "received_events_url": "https://api.github.com/users/xslittlegrass/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Similar issue in text process \r\n\r\n```python\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(model_dir[args.model])\r\ntrain_dataset=datasets.load_from_disk(dataset_dir[args.dataset],keep_in_memory=True)['train']\r\ntrain_dataset=train_dataset.map(partial(dname2func[args.dataset],tokenizer=tokenizer),batched=True,num_proc =50,remove_columns=train_dataset.features.keys(),desc='tokenize',keep_in_memory=True)\r\n\r\n```\r\nAfter this train_dataset will be like\r\n```python\r\nDataset({\r\n features: ['input_ids', 'labels'],\r\n num_rows: 51760\r\n})\r\n```\r\nIn which input_ids and labels are both List[int]\r\nHowever, per iter on dataset cost 7.412479639053345s ……?\r\n```python\r\nfor j in tqdm(range(len(train_dataset)),desc='first stage'):\r\n input_id,label=train_dataset['input_ids'][j],train_dataset['labels'][j]\r\n\r\n``` ", "The transform currently replaces the numpy formatting.\r\n\r\nSo you're back to copying data to long python lists which is super slow.\r\n\r\nIt would be cool for the transform to not remove the formatting in this case, but this requires a few changes in the lib" ]
2024-04-23T20:40:59
2024-05-04T11:24:37
null
NONE
null
### Describe the bug Dataset is 10X slower when applying trivial transforms: ``` import time import numpy as np from datasets import Dataset, Features, Array2D a = np.zeros((800, 800)) a = np.stack([a] * 1000) features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")}) ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy') def transform(batch): return batch ds2 = ds1.with_transform(transform) %time sum(1 for _ in ds1) %time sum(1 for _ in ds2) ``` ``` CPU times: user 472 ms, sys: 319 ms, total: 791 ms Wall time: 794 ms CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s Wall time: 9.78 s ``` In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial. Related issue: https://github.com/huggingface/datasets/issues/5841 ### Steps to reproduce the bug Use code in the description to reproduce. ### Expected behavior Trivial custom transform in the example should not slowdown the dataset iteration. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - `huggingface_hub` version: 0.20.2 - PyArrow version: 15.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.12.2
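As the follow-up comment explains, `with_transform` replaces the numpy formatting, so rows are decoded through slow Python objects before the transform even runs. One possible workaround, sketched below as a plain wrapper class (not a `datasets` feature), keeps the fast numpy format and applies the transform after extraction:

```python
class TransformedDataset:
    """Apply `fn` on top of an already numpy-formatted dataset."""

    def __init__(self, ds, fn):
        self.ds = ds  # expected to be dataset.with_format("numpy")
        self.fn = fn

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        # Extraction stays on the fast numpy path; fn sees numpy arrays.
        return self.fn(self.ds[idx])


ds3 = TransformedDataset(ds1, transform)
for i in range(len(ds3)):
    batch = ds3[i]
```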
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6833/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6833/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6832/comments
https://api.github.com/repos/huggingface/datasets/issues/6832/events
https://github.com/huggingface/datasets/pull/6832
2,258,761,447
PR_kwDODunzps5teFoJ
6,832
Support downloading specific splits in `load_dataset`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6832). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-04-23T12:32:27
2024-04-30T08:55:28
null
COLLABORATOR
null
This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` needs to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, the builder has to define an `_available_splits` method that lists all the possible `splits` values. Close https://github.com/huggingface/datasets/issues/4101, close https://github.com/huggingface/datasets/issues/2538 (I'm probably missing some). Should also make it possible to address https://github.com/huggingface/datasets/issues/6793
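For illustration, a sketch of what a builder implementing this interface might look like (the `splits` argument and `_available_splits` names follow the PR description; exact signatures are assumptions until the PR is merged):

```python
import datasets


class MyBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _available_splits(self):
        # Every split value this builder can produce.
        return ["train", "validation", "test"]

    def _split_generators(self, dl_manager, splits):
        # Only materialize the requested splits; fall back to all of them
        # when the caller did not restrict the selection.
        requested = splits or self._available_splits()
        return [
            datasets.SplitGenerator(name=split, gen_kwargs={"split": split})
            for split in requested
        ]

    def _generate_examples(self, split):
        yield 0, {"text": f"dummy example for the {split} split"}
```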
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6832/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6832/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6832", "html_url": "https://github.com/huggingface/datasets/pull/6832", "diff_url": "https://github.com/huggingface/datasets/pull/6832.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6832.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6831/comments
https://api.github.com/repos/huggingface/datasets/issues/6831/events
https://github.com/huggingface/datasets/pull/6831
2,258,537,405
PR_kwDODunzps5tdTy_
6,831
Add docs about the CLI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Concretely, the docs about convert_to_parquet are here: https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831/en/cli#convert-to-parquet", "There is an issue with the example snippet when copy/pasting it: the leading shell dollar sign is also copied. I guess they will not like to fix it in the backend: currently they only support Python code snippets (with leading `>>>` or `...`), as they appear in the IPython interactive console.\r\n\r\nWhat do you suggest, @severo?" ]
2024-04-23T10:41:03
2024-04-26T16:51:09
2024-04-25T10:44:10
MEMBER
null
Add docs about the CLI. Close #6830. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6831/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6831", "html_url": "https://github.com/huggingface/datasets/pull/6831", "diff_url": "https://github.com/huggingface/datasets/pull/6831.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6831.patch", "merged_at": "2024-04-25T10:44:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/6830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6830/comments
https://api.github.com/repos/huggingface/datasets/issues/6830/events
https://github.com/huggingface/datasets/issues/6830
2,258,433,178
I_kwDODunzps6GnPSa
6,830
Add a doc page for the convert_to_parquet CLI
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-04-23T09:49:04
2024-04-25T10:44:11
2024-04-25T10:44:11
CONTRIBUTOR
null
Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6830/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6830/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6829/comments
https://api.github.com/repos/huggingface/datasets/issues/6829/events
https://github.com/huggingface/datasets/issues/6829
2,258,424,577
I_kwDODunzps6GnNMB
6,829
Load and save from/to disk no longer accept pathlib.Path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
2024-04-23T09:44:45
2024-04-23T09:44:46
null
MEMBER
null
Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296: > This change is breaking in > https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515 > when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str`, so it is not a problem. This change was introduced in: - #6704
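Until this is fixed, a trivial user-side workaround (a sketch) is to stringify paths before calling the affected methods:

```python
from pathlib import Path

from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"a": [1, 2, 3]})
out_dir = Path("/tmp/my_dataset")

# str() sidesteps the url_to_fs(Path) failure described above.
ds.save_to_disk(str(out_dir))
reloaded = load_from_disk(str(out_dir))
```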
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6829/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6828/comments
https://api.github.com/repos/huggingface/datasets/issues/6828/events
https://github.com/huggingface/datasets/pull/6828
2,258,420,421
PR_kwDODunzps5tc55y
6,828
Support PathLike input in save_to_disk / load_from_disk
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-04-23T09:42:38
2024-04-23T11:05:52
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6828/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6828", "html_url": "https://github.com/huggingface/datasets/pull/6828", "diff_url": "https://github.com/huggingface/datasets/pull/6828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6828.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6827/comments
https://api.github.com/repos/huggingface/datasets/issues/6827/events
https://github.com/huggingface/datasets/issues/6827
2,254,011,833
I_kwDODunzps6GWX25
6,827
Loading a remote dataset fails in the last release (v2.19.0)
{ "login": "zrthxn", "id": 35369637, "node_id": "MDQ6VXNlcjM1MzY5NjM3", "avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zrthxn", "html_url": "https://github.com/zrthxn", "followers_url": "https://api.github.com/users/zrthxn/followers", "following_url": "https://api.github.com/users/zrthxn/following{/other_user}", "gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}", "starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions", "organizations_url": "https://api.github.com/users/zrthxn/orgs", "repos_url": "https://api.github.com/users/zrthxn/repos", "events_url": "https://api.github.com/users/zrthxn/events{/privacy}", "received_events_url": "https://api.github.com/users/zrthxn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-04-19T21:11:58
2024-04-19T21:13:42
null
NONE
null
While loading a dataset with multiple splits, I get an error saying `Couldn't find file at <URL>`. I am loading the dataset like so, nothing out of the ordinary. This dataset needs a token to access it. ``` token="hf_myhftoken-sdhbdsjgkhbd" load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token) ``` I get the following error ![Screenshot 2024-04-19 at 11 03 07 PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc) You can see that the URL it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue. I did not have this issue with the previous version of `datasets`. Everything was fine for me yesterday; after the release 12 hours ago, this seems to have broken. Also, the dataset in question runs custom code, and I checked that there have been no commits to the dataset on Hugging Face in 6 months. ### Steps to reproduce the bug Since this happened with one particular dataset for me, I am listing steps to use that dataset. 1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access. 2. Create a token on your huggingface account with read access. 3. Run the following line, substituting `<your_token_here>` with your token. ``` load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>") ``` ### Expected behavior Be able to load the dataset in question. ### Environment info datasets == 2.19.0 python == 3.10 kernel == Linux 6.1.58+
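Given that the report says 2.18.0 still worked, a temporary mitigation (assuming the regression is confined to 2.19.0's URL handling, as described above) is to pin the previous release:

```python
# Run in a shell first: pip install "datasets==2.18.0"
import datasets

assert datasets.__version__.startswith("2.18"), "regression appears in 2.19.0"
ds = datasets.load_dataset(
    "speechcolab/gigaspeech",
    "test",
    cache_dir="gigaspeech/test",
    token="<your_token_here>",  # placeholder, as in the report
)
```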
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6827/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6826/comments
https://api.github.com/repos/huggingface/datasets/issues/6826/events
https://github.com/huggingface/datasets/pull/6826
2,252,445,242
PR_kwDODunzps5tJMZh
6,826
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6826). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004893 / 0.011353 (-0.006460) | 0.003238 / 0.011008 (-0.007771) | 0.063143 / 0.038508 (0.024635) | 0.029770 / 0.023109 (0.006661) | 0.229052 / 0.275898 (-0.046846) | 0.254534 / 0.323480 (-0.068945) | 0.003083 / 0.007986 (-0.004903) | 0.002615 / 0.004328 (-0.001714) | 0.049684 / 0.004250 (0.045434) | 0.043745 / 0.037052 (0.006693) | 0.248985 / 0.258489 (-0.009504) | 0.275957 / 0.293841 (-0.017884) | 0.027323 / 0.128546 (-0.101223) | 0.010372 / 0.075646 (-0.065275) | 0.206494 / 0.419271 (-0.212778) | 0.035230 / 0.043533 (-0.008303) | 0.234235 / 0.255139 (-0.020904) | 0.252395 / 0.283200 (-0.030805) | 0.019442 / 0.141683 (-0.122240) | 1.130677 / 1.452155 (-0.321478) | 1.161721 / 1.492716 (-0.330996) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091659 / 0.018006 (0.073653) | 0.301323 / 0.000490 (0.300833) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018360 / 0.037411 (-0.019051) | 0.061101 / 0.014526 (0.046575) | 0.072383 / 0.176557 (-0.104174) | 0.117656 / 0.737135 (-0.619479) | 0.073903 / 0.296338 (-0.222436) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272768 / 0.215209 (0.057558) | 2.655714 / 2.077655 (0.578059) | 1.446254 / 1.504120 (-0.057866) | 1.330543 / 1.541195 (-0.210652) | 1.352527 / 1.468490 (-0.115964) | 0.561428 / 4.584777 (-4.023349) | 2.368182 / 3.745712 (-1.377530) | 2.746508 / 5.269862 (-2.523353) | 1.713972 / 4.565676 (-2.851705) | 0.062046 / 0.424275 (-0.362229) | 0.005427 / 0.007607 (-0.002180) | 0.321652 / 0.226044 (0.095607) | 3.181812 / 2.268929 (0.912883) | 1.766778 / 55.444624 (-53.677846) | 1.492502 / 6.876477 (-5.383975) | 1.534658 / 2.142072 (-0.607415) | 0.640372 / 4.805227 (-4.164856) | 0.118180 / 6.500664 (-6.382484) | 0.042698 / 0.075469 (-0.032771) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993262 / 1.841788 (-0.848525) | 11.512827 / 8.074308 (3.438518) | 9.602140 / 10.191392 (-0.589252) | 0.144723 / 0.680424 (-0.535701) | 0.014122 / 0.534201 (-0.520079) | 0.302211 / 0.579283 (-0.277072) | 0.268026 / 0.434364 (-0.166338) | 0.326524 / 0.540337 (-0.213813) | 0.423781 / 1.386936 (-0.963155) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.003535 / 0.011008 (-0.007473) | 0.050139 / 0.038508 (0.011631) | 0.031813 / 0.023109 (0.008704) | 0.269501 / 0.275898 (-0.006397) | 0.294355 / 0.323480 (-0.029125) | 0.004128 / 0.007986 (-0.003858) | 0.002684 / 0.004328 (-0.001644) | 0.049295 / 0.004250 (0.045045) | 0.040129 / 0.037052 (0.003077) | 0.282406 / 0.258489 (0.023917) | 0.309822 / 0.293841 (0.015981) | 0.028506 / 0.128546 (-0.100040) | 0.010434 / 0.075646 (-0.065213) | 0.057890 / 0.419271 (-0.361382) | 0.032487 / 0.043533 (-0.011046) | 0.270631 / 0.255139 (0.015492) | 0.288734 / 0.283200 (0.005534) | 0.018710 / 0.141683 (-0.122973) | 1.151571 / 1.452155 (-0.300583) | 1.195222 / 1.492716 (-0.297494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.090939 / 0.018006 (0.072932) | 0.300278 / 0.000490 (0.299788) | 0.000202 / 0.000200 (0.000002) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022036 / 0.037411 (-0.015376) | 0.075131 / 0.014526 (0.060605) | 0.087775 / 0.176557 (-0.088782) | 0.125719 / 0.737135 (-0.611416) | 0.088491 / 0.296338 (-0.207848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300363 / 0.215209 (0.085154) | 2.931852 / 2.077655 (0.854197) | 1.633688 / 1.504120 (0.129568) | 1.512641 / 1.541195 (-0.028554) | 1.527703 / 1.468490 (0.059213) | 0.572781 / 4.584777 (-4.011996) | 2.445950 / 3.745712 (-1.299762) | 2.883667 / 5.269862 (-2.386195) | 1.761396 / 4.565676 (-2.804280) | 0.064422 / 0.424275 (-0.359853) | 0.005332 / 0.007607 (-0.002275) | 0.346730 / 0.226044 (0.120686) | 3.443815 / 2.268929 (1.174886) | 1.988677 / 55.444624 (-53.455948) | 1.707688 / 6.876477 (-5.168789) | 1.694216 / 2.142072 (-0.447856) | 0.634834 / 4.805227 (-4.170393) | 0.115044 / 6.500664 (-6.385620) | 0.040853 / 0.075469 (-0.034616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009382 / 1.841788 (-0.832405) | 12.327511 / 8.074308 (4.253203) | 10.123296 / 10.191392 (-0.068097) | 0.130770 / 0.680424 (-0.549654) | 0.015548 / 0.534201 (-0.518653) | 0.286650 / 0.579283 (-0.292633) | 0.270267 / 0.434364 (-0.164097) | 0.333485 / 0.540337 (-0.206852) | 0.428288 / 1.386936 (-0.958648) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f96e74d5c633cd5435dd526adb4a74631eb05c43 \"CML watermark\")\n" ]
2024-04-19T08:51:42
2024-04-19T09:05:25
2024-04-19T08:52:14
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6826/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6826", "html_url": "https://github.com/huggingface/datasets/pull/6826", "diff_url": "https://github.com/huggingface/datasets/pull/6826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6826.patch", "merged_at": "2024-04-19T08:52:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/6825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6825/comments
https://api.github.com/repos/huggingface/datasets/issues/6825/events
https://github.com/huggingface/datasets/pull/6825
2,252,404,599
PR_kwDODunzps5tJEMw
6,825
Release: 2.19.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6825). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004945 / 0.011353 (-0.006407) | 0.003290 / 0.011008 (-0.007718) | 0.062404 / 0.038508 (0.023896) | 0.040056 / 0.023109 (0.016946) | 0.246574 / 0.275898 (-0.029324) | 0.275074 / 0.323480 (-0.048406) | 0.004118 / 0.007986 (-0.003867) | 0.002604 / 0.004328 (-0.001724) | 0.048618 / 0.004250 (0.044367) | 0.044088 / 0.037052 (0.007035) | 0.263059 / 0.258489 (0.004570) | 0.294602 / 0.293841 (0.000761) | 0.027425 / 0.128546 (-0.101121) | 0.010263 / 0.075646 (-0.065383) | 0.205925 / 0.419271 (-0.213346) | 0.048917 / 0.043533 (0.005384) | 0.264227 / 0.255139 (0.009088) | 0.273339 / 0.283200 (-0.009860) | 0.017783 / 0.141683 (-0.123900) | 1.137526 / 1.452155 (-0.314629) | 1.179551 / 1.492716 (-0.313165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096809 / 0.018006 (0.078802) | 0.303854 / 0.000490 (0.303364) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017756 / 0.037411 (-0.019655) | 0.061005 / 0.014526 (0.046479) | 0.072986 / 0.176557 (-0.103571) | 0.119851 / 0.737135 (-0.617284) | 0.074733 / 0.296338 (-0.221605) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278270 / 0.215209 (0.063061) | 2.737874 / 2.077655 (0.660219) | 1.460658 / 1.504120 (-0.043462) | 1.337695 / 1.541195 (-0.203499) | 1.364376 / 1.468490 (-0.104114) | 0.565622 / 4.584777 (-4.019155) | 2.365167 / 3.745712 (-1.380546) | 2.694544 / 5.269862 (-2.575317) | 1.699689 / 4.565676 (-2.865987) | 0.062564 / 0.424275 (-0.361712) | 0.005296 / 0.007607 (-0.002311) | 0.340122 / 0.226044 (0.114077) | 3.382133 / 2.268929 (1.113204) | 1.816907 / 55.444624 (-53.627718) | 1.530825 / 6.876477 (-5.345652) | 1.533266 / 2.142072 (-0.608807) | 0.638215 / 4.805227 (-4.167012) | 0.116227 / 6.500664 (-6.384437) | 0.041548 / 0.075469 (-0.033921) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971031 / 1.841788 (-0.870757) | 11.117905 / 8.074308 (3.043597) | 9.358159 / 10.191392 (-0.833233) | 0.127954 / 0.680424 (-0.552470) | 0.013634 / 0.534201 (-0.520567) | 0.285399 / 0.579283 (-0.293885) | 0.267980 / 0.434364 (-0.166383) | 0.320219 / 0.540337 (-0.220119) | 0.416035 / 1.386936 (-0.970901) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005177 / 0.011353 (-0.006176) | 0.003078 / 0.011008 (-0.007930) | 0.049650 / 0.038508 (0.011142) | 0.030897 / 0.023109 (0.007787) | 0.271186 / 0.275898 (-0.004712) | 0.296050 / 0.323480 (-0.027430) | 0.004204 / 0.007986 (-0.003781) | 0.002755 / 0.004328 (-0.001574) | 0.049550 / 0.004250 (0.045300) | 0.039801 / 0.037052 (0.002749) | 0.283243 / 0.258489 (0.024753) | 0.310932 / 0.293841 (0.017091) | 0.029136 / 0.128546 (-0.099410) | 0.010278 / 0.075646 (-0.065368) | 0.059300 / 0.419271 (-0.359971) | 0.032965 / 0.043533 (-0.010568) | 0.272646 / 0.255139 (0.017507) | 0.293697 / 0.283200 (0.010497) | 0.018330 / 0.141683 (-0.123353) | 1.144251 / 1.452155 (-0.307904) | 1.209660 / 1.492716 (-0.283056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091020 / 0.018006 (0.073014) | 0.298294 / 0.000490 (0.297804) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021879 / 0.037411 (-0.015532) | 0.074728 / 0.014526 (0.060202) | 0.085499 / 0.176557 (-0.091057) | 0.125743 / 0.737135 (-0.611392) | 0.086130 / 0.296338 (-0.210208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292311 / 0.215209 (0.077102) | 2.861240 / 2.077655 (0.783585) | 1.590426 / 1.504120 (0.086306) | 1.472288 / 1.541195 (-0.068907) | 1.472901 / 1.468490 (0.004411) | 0.574924 / 4.584777 (-4.009853) | 2.450817 / 3.745712 (-1.294895) | 2.781903 / 5.269862 (-2.487959) | 1.747110 / 4.565676 (-2.818566) | 0.064680 / 0.424275 (-0.359595) | 0.005376 / 0.007607 (-0.002231) | 0.356846 / 0.226044 (0.130802) | 3.457851 / 2.268929 (1.188922) | 1.952678 / 55.444624 (-53.491946) | 1.670824 / 6.876477 (-5.205653) | 1.655872 / 2.142072 (-0.486200) | 0.655874 / 4.805227 (-4.149353) | 0.117098 / 6.500664 (-6.383566) | 0.040230 / 0.075469 (-0.035239) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007423 / 1.841788 (-0.834365) | 11.818228 / 8.074308 (3.743920) | 10.153699 / 10.191392 (-0.037693) | 0.132073 / 0.680424 (-0.548351) | 0.015101 / 0.534201 (-0.519100) | 0.286555 / 0.579283 (-0.292728) | 0.281953 / 0.434364 (-0.152411) | 0.323647 / 0.540337 (-0.216691) | 0.418698 / 1.386936 (-0.968238) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0d3c7462bc67407c42d3ad102b7f9d5914219d9d \"CML watermark\")\n" ]
2024-04-19T08:29:02
2024-05-04T12:23:26
2024-04-19T08:44:57
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6825/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6825/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6825", "html_url": "https://github.com/huggingface/datasets/pull/6825", "diff_url": "https://github.com/huggingface/datasets/pull/6825.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6825.patch", "merged_at": "2024-04-19T08:44:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/6824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6824/comments
https://api.github.com/repos/huggingface/datasets/issues/6824/events
https://github.com/huggingface/datasets/issues/6824
2,251,076,197
I_kwDODunzps6GLLJl
6,824
Winogrande does not seem to be compatible with datasets version of 1.18.0
{ "login": "spliew", "id": 7878204, "node_id": "MDQ6VXNlcjc4NzgyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7878204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spliew", "html_url": "https://github.com/spliew", "followers_url": "https://api.github.com/users/spliew/followers", "following_url": "https://api.github.com/users/spliew/following{/other_user}", "gists_url": "https://api.github.com/users/spliew/gists{/gist_id}", "starred_url": "https://api.github.com/users/spliew/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spliew/subscriptions", "organizations_url": "https://api.github.com/users/spliew/orgs", "repos_url": "https://api.github.com/users/spliew/repos", "events_url": "https://api.github.com/users/spliew/events{/privacy}", "received_events_url": "https://api.github.com/users/spliew/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```", "Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!" ]
2024-04-18T16:11:04
2024-04-19T09:53:15
2024-04-19T09:52:33
NONE
null
### Describe the bug I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`. I do not have such an issue in the 1.17.0 version. ```Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2556, in load_dataset builder_instance = load_dataset_builder( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2265, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 371, in __init__ self.config, self.config_id = self._create_builder_config( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 620, in _create_builder_config builder_config._resolve_data_files( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 211, in _resolve_data_files self.data_files = self.data_files.resolve(base_path, download_config) File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 799, in resolve out[key] = data_files_patterns_list.resolve(base_path, download_config) File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 752, in resolve resolve_pattern( File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 393, in resolve_pattern raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find 'hf://datasets/winogrande@ebf71e3c7b5880d019ecf6099c0b09311b1084f5/winogrande_xl/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']``` ### Steps to reproduce the bug from datasets import load_dataset datasets = load_dataset('winogrande','winogrande_xl') ### Expected behavior ```Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.06M/2.06M [00:00<00:00, 5.16MB/s] Downloading data: 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118k/118k [00:00<00:00, 360kB/s] Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 85.9k/85.9k [00:00<00:00, 242kB/s] Generating train split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 40398/40398 [00:00<00:00, 845491.12 examples/s] Generating test split: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1767/1767 [00:00<00:00, 362501.11 examples/s] Generating validation split: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 1267/1267 [00:00<00:00, 318768.11 examples/s]``` ### Environment info datasets version: 1.18.0
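A minimal sketch of the resolution, based on the comment thread (the author confirms they meant `datasets` 2.18, and the error disappears after upgrading `fsspec` and `huggingface_hub`):

```python
# Fix suggested by the maintainer in the comments:
#   pip install -U fsspec huggingface_hub
from datasets import load_dataset

# After upgrading, the parquet files on the Hub resolve as expected.
datasets = load_dataset("winogrande", "winogrande_xl")
print(datasets)
```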
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6824/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6823/comments
https://api.github.com/repos/huggingface/datasets/issues/6823/events
https://github.com/huggingface/datasets/issues/6823
2,250,775,569
I_kwDODunzps6GKBwR
6,823
Loading problems of Datasets with a single shard
{ "login": "andjoer", "id": 60151338, "node_id": "MDQ6VXNlcjYwMTUxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/60151338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andjoer", "html_url": "https://github.com/andjoer", "followers_url": "https://api.github.com/users/andjoer/followers", "following_url": "https://api.github.com/users/andjoer/following{/other_user}", "gists_url": "https://api.github.com/users/andjoer/gists{/gist_id}", "starred_url": "https://api.github.com/users/andjoer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andjoer/subscriptions", "organizations_url": "https://api.github.com/users/andjoer/orgs", "repos_url": "https://api.github.com/users/andjoer/repos", "events_url": "https://api.github.com/users/andjoer/events{/privacy}", "received_events_url": "https://api.github.com/users/andjoer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-04-18T13:59:00
2024-04-18T17:51:08
null
NONE
null
### Describe the bug When a dataset is saved on disk with a single shard, it is not loaded the same way as when it is saved in multiple shards. I installed the latest version of datasets via pip. ### Steps to reproduce the bug The code below reproduces the behavior. All works well when the range of the loop is 10000, but it fails when it is 1000. ``` from PIL import Image import numpy as np from datasets import Dataset, DatasetDict, load_dataset def load_image(): # Generate random noise image noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8) return Image.fromarray(noise) def create_dataset(): input_images = [] output_images = [] text_prompts = [] for _ in range(10000): # this is the problematic parameter input_images.append(load_image()) output_images.append(load_image()) text_prompts.append('test prompt') data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts} dataset = Dataset.from_dict(data) return DatasetDict({'train': dataset}) dataset = create_dataset() print('dataset before saving') print(dataset) print(dataset['train'].column_names) dataset.save_to_disk('test_ds') print('dataset after loading') dataset_loaded = load_dataset('test_ds') print(dataset_loaded) print(dataset_loaded['train'].column_names) ``` The output for 1000 iterations is: ``` dataset before saving DatasetDict({ train: Dataset({ features: ['input_image', 'output_image', 'text_prompt'], num_rows: 1000 }) }) ['input_image', 'output_image', 'text_prompt'] Saving the dataset (1/1 shards): 100%|█| 1000/1000 [00:00<00:00, 5156.00 example dataset after loading Generating train split: 1 examples [00:00, 230.52 examples/s] DatasetDict({ train: Dataset({ features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'], num_rows: 1 }) }) ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'] ``` For 10000 iterations (8 shards) it is correct: ``` dataset before saving DatasetDict({ train: Dataset({ features: ['input_image', 'output_image', 'text_prompt'], num_rows: 10000 }) }) ['input_image', 'output_image', 'text_prompt'] Saving the dataset (8/8 shards): 100%|█| 10000/10000 [00:01<00:00, 6237.68 examp dataset after loading Generating train split: 10000 examples [00:00, 10773.16 examples/s] DatasetDict({ train: Dataset({ features: ['input_image', 'output_image', 'text_prompt'], num_rows: 10000 }) }) ['input_image', 'output_image', 'text_prompt'] ``` ### Expected behavior The procedure should work for a dataset with one shard the same as for one with multiple shards. ### Environment info - `datasets` version: 2.18.0 - Platform: macOS-14.1-arm64-arm-64bit - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0 Edit: I looked in the source code of load.py in datasets. I should have used "load_from_disk" and it indeed works that way. But ideally `load_dataset` would have raised an error the same way as in this code path: ``` if Path(path, config.DATASET_STATE_JSON_FILENAME).exists(): raise ValueError( "You are trying to load a dataset that was saved using `save_to_disk`. " "Please use `load_from_disk` instead." ) ``` Nevertheless, I find it interesting that it works just fine and without a warning if there are multiple shards.
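Following the edit note above, a short sketch of the intended loading path: output written by `save_to_disk` should be read back with `load_from_disk`, which works for single- and multi-shard datasets alike.

```python
from datasets import load_from_disk

# "test_ds" is the directory written by `dataset.save_to_disk("test_ds")`
# in the reproduction above. `load_from_disk` restores the real columns
# even when the dataset was written as a single shard.
dataset_loaded = load_from_disk("test_ds")
print(dataset_loaded["train"].column_names)
# ['input_image', 'output_image', 'text_prompt']
```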
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6823/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6822/comments
https://api.github.com/repos/huggingface/datasets/issues/6822/events
https://github.com/huggingface/datasets/pull/6822
2,250,316,258
PR_kwDODunzps5tB8aD
6,822
Fix parquet export infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6822). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005084 / 0.011353 (-0.006269) | 0.003658 / 0.011008 (-0.007351) | 0.063369 / 0.038508 (0.024860) | 0.030739 / 0.023109 (0.007630) | 0.244335 / 0.275898 (-0.031564) | 0.271731 / 0.323480 (-0.051749) | 0.004133 / 0.007986 (-0.003853) | 0.002798 / 0.004328 (-0.001530) | 0.048790 / 0.004250 (0.044540) | 0.044054 / 0.037052 (0.007002) | 0.261514 / 0.258489 (0.003025) | 0.292155 / 0.293841 (-0.001686) | 0.027971 / 0.128546 (-0.100575) | 0.010723 / 0.075646 (-0.064923) | 0.207328 / 0.419271 (-0.211944) | 0.035928 / 0.043533 (-0.007605) | 0.245320 / 0.255139 (-0.009819) | 0.268774 / 0.283200 (-0.014426) | 0.017119 / 0.141683 (-0.124564) | 1.107052 / 1.452155 (-0.345103) | 1.151752 / 1.492716 (-0.340965) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089941 / 0.018006 (0.071935) | 0.299788 / 0.000490 (0.299298) | 0.000211 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018159 / 0.037411 (-0.019252) | 0.061876 / 0.014526 (0.047350) | 0.074733 / 0.176557 (-0.101824) | 0.122070 / 0.737135 (-0.615065) | 0.076100 / 0.296338 (-0.220238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282209 / 0.215209 (0.067000) | 2.758098 / 2.077655 (0.680444) | 1.482454 / 1.504120 (-0.021666) | 1.372649 / 1.541195 (-0.168546) | 1.373171 / 1.468490 (-0.095319) | 0.563606 / 4.584777 (-4.021171) | 2.406760 / 3.745712 (-1.338952) | 2.796322 / 5.269862 (-2.473540) | 1.732327 / 4.565676 (-2.833350) | 0.063623 / 0.424275 (-0.360652) | 0.005338 / 0.007607 (-0.002269) | 0.337562 / 0.226044 (0.111518) | 3.345225 / 2.268929 (1.076296) | 1.844353 / 55.444624 (-53.600271) | 1.551003 / 6.876477 (-5.325474) | 1.570623 / 2.142072 (-0.571449) | 0.644843 / 4.805227 (-4.160385) | 0.118811 / 6.500664 (-6.381853) | 0.041731 / 0.075469 (-0.033738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970469 / 1.841788 (-0.871319) | 11.775531 / 8.074308 (3.701222) | 9.757852 / 10.191392 (-0.433540) | 0.130187 / 0.680424 (-0.550237) | 0.013654 / 0.534201 (-0.520547) | 0.328387 / 0.579283 (-0.250896) | 0.268181 / 0.434364 (-0.166183) | 0.325230 / 0.540337 (-0.215107) | 0.421055 / 1.386936 (-0.965881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005846 / 0.011353 (-0.005507) | 0.003606 / 0.011008 (-0.007402) | 0.050787 / 0.038508 (0.012279) | 0.031635 / 0.023109 (0.008526) | 0.277040 / 0.275898 (0.001142) | 0.300544 / 0.323480 (-0.022936) | 0.004200 / 0.007986 (-0.003786) | 0.002749 / 0.004328 (-0.001580) | 0.049449 / 0.004250 (0.045198) | 0.041616 / 0.037052 (0.004564) | 0.289570 / 0.258489 (0.031081) | 0.316138 / 0.293841 (0.022297) | 0.029578 / 0.128546 (-0.098969) | 0.010582 / 0.075646 (-0.065064) | 0.058284 / 0.419271 (-0.360988) | 0.033078 / 0.043533 (-0.010455) | 0.277964 / 0.255139 (0.022825) | 0.295008 / 0.283200 (0.011808) | 0.017753 / 0.141683 (-0.123930) | 1.128635 / 1.452155 (-0.323519) | 1.190142 / 1.492716 (-0.302575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091504 / 0.018006 (0.073498) | 0.303875 / 0.000490 (0.303385) | 0.000221 / 0.000200 (0.000021) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021413 / 0.037411 (-0.015998) | 0.074825 / 0.014526 (0.060299) | 0.086329 / 0.176557 (-0.090228) | 0.125632 / 0.737135 (-0.611503) | 0.087918 / 0.296338 (-0.208420) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297914 / 0.215209 (0.082705) | 2.922885 / 2.077655 (0.845230) | 1.625758 / 1.504120 (0.121638) | 1.500174 / 1.541195 (-0.041021) | 1.517162 / 1.468490 (0.048672) | 0.576885 / 4.584777 (-4.007892) | 2.458723 / 3.745712 (-1.286989) | 2.798471 / 5.269862 (-2.471391) | 1.762499 / 4.565676 (-2.803178) | 0.064736 / 0.424275 (-0.359539) | 0.005325 / 0.007607 (-0.002282) | 0.351697 / 0.226044 (0.125652) | 3.496223 / 2.268929 (1.227294) | 1.977535 / 55.444624 (-53.467090) | 1.695223 / 6.876477 (-5.181254) | 1.689692 / 2.142072 (-0.452381) | 0.656404 / 4.805227 (-4.148823) | 0.123106 / 6.500664 (-6.377558) | 0.040980 / 0.075469 (-0.034489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.036972 / 1.841788 (-0.804816) | 12.163931 / 8.074308 (4.089623) | 10.297927 / 10.191392 (0.106535) | 0.144087 / 0.680424 (-0.536337) | 0.015553 / 0.534201 (-0.518648) | 0.286225 / 0.579283 (-0.293058) | 0.275567 / 0.434364 (-0.158797) | 0.332717 / 0.540337 (-0.207620) | 0.423804 / 1.386936 (-0.963132) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0bc709af303c8dc64c973a17016bd5aa5db2f3d5 \"CML watermark\")\n" ]
2024-04-18T10:21:41
2024-04-18T11:15:41
2024-04-18T11:09:13
MEMBER
null
Don't use the parquet export infos when USE_PARQUET_EXPORT is False. Otherwise the `datasets-server` might reuse erroneous data when re-running a job. This follows https://github.com/huggingface/datasets/pull/6714
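A hedged sketch of the gating described above; the flag value and function name are illustrative assumptions, not the PR's actual diff:

```python
# Illustration only: mirrors the constant named in the PR body, not the
# real location of the flag inside `datasets`.
USE_PARQUET_EXPORT = False

def infos_to_reuse(parquet_export_infos):
    # Only trust infos coming from the Hub parquet export when that code
    # path is enabled; otherwise return None so a re-run job recomputes
    # them instead of reusing possibly erroneous data.
    return parquet_export_infos if USE_PARQUET_EXPORT else None
```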
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6822/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6822", "html_url": "https://github.com/huggingface/datasets/pull/6822", "diff_url": "https://github.com/huggingface/datasets/pull/6822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6822.patch", "merged_at": "2024-04-18T11:09:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/6820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6820/comments
https://api.github.com/repos/huggingface/datasets/issues/6820/events
https://github.com/huggingface/datasets/pull/6820
2,248,471,673
PR_kwDODunzps5s7sgy
6,820
Allow deleting a subset/config from a no-script dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6820). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "This is ready for review, @huggingface/datasets.", "I am adding a test...", "@lhoestq I am getting an error in the test and I think it happens because the CI endpoint does not have the /preupload functionality:\r\n```\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-662a4de9-7134df595e29e4c073ac1298;332ff6e3-597a-4dfc-89df-4e9ac64215ad)\r\n\r\nRepository Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-6c54e2-17140484441915/preupload/main?create_pr=1.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password.\r\nNote: Creating a commit assumes that the repo already exists on the Huggingface Hub. Please use `create_repo` if it's not the case.\r\n```", "@lhoestq, finally, I implemented the test with a mock of the call to `HfApi.create_commit`.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004958 / 0.011353 (-0.006395) | 0.004065 / 0.011008 (-0.006943) | 0.063499 / 0.038508 (0.024991) | 0.030260 / 0.023109 (0.007151) | 0.250910 / 0.275898 (-0.024988) | 0.276632 / 0.323480 (-0.046848) | 0.004038 / 0.007986 (-0.003948) | 0.002721 / 0.004328 (-0.001608) | 0.049098 / 0.004250 (0.044848) | 0.044418 / 0.037052 (0.007366) | 0.262189 / 0.258489 (0.003700) | 0.292426 / 0.293841 (-0.001415) | 0.027268 / 0.128546 (-0.101279) | 0.010601 / 0.075646 (-0.065045) | 0.207332 / 0.419271 (-0.211940) | 0.036102 / 0.043533 (-0.007430) | 0.252425 / 0.255139 (-0.002714) | 0.269421 / 0.283200 (-0.013779) | 0.018534 / 0.141683 (-0.123149) | 1.127869 / 1.452155 (-0.324286) | 1.179660 / 1.492716 (-0.313056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092686 / 0.018006 (0.074680) | 0.299492 / 0.000490 (0.299002) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 
(-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018385 / 0.037411 (-0.019026) | 0.060979 / 0.014526 (0.046453) | 0.073351 / 0.176557 (-0.103205) | 0.120145 / 0.737135 (-0.616990) | 0.073653 / 0.296338 (-0.222686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286175 / 0.215209 (0.070966) | 2.792698 / 2.077655 (0.715043) | 1.507442 / 1.504120 (0.003322) | 1.392531 / 1.541195 (-0.148664) | 1.387253 / 1.468490 (-0.081237) | 0.568435 / 4.584777 (-4.016342) | 2.387392 / 3.745712 (-1.358321) | 2.813695 / 5.269862 (-2.456167) | 1.747392 / 4.565676 (-2.818284) | 0.062948 / 0.424275 (-0.361328) | 0.005596 / 0.007607 (-0.002011) | 0.334357 / 0.226044 (0.108313) | 3.263289 / 2.268929 (0.994360) | 1.829553 / 55.444624 (-53.615071) | 1.552510 / 6.876477 (-5.323967) | 1.579975 / 2.142072 (-0.562098) | 0.633982 / 4.805227 (-4.171246) | 0.118752 / 6.500664 (-6.381912) | 0.042445 / 0.075469 (-0.033024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988062 / 1.841788 (-0.853725) | 11.615693 / 8.074308 (3.541385) | 9.728103 / 10.191392 (-0.463289) | 0.131561 / 0.680424 (-0.548862) | 0.015330 / 0.534201 (-0.518871) | 0.289617 / 0.579283 (-0.289666) | 0.265717 / 0.434364 (-0.168646) | 0.323974 / 0.540337 (-0.216363) | 0.419523 / 1.386936 (-0.967413) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005385 / 0.011353 (-0.005968) | 0.003753 / 0.011008 (-0.007255) | 0.049821 / 0.038508 (0.011313) | 0.030490 / 0.023109 (0.007381) | 0.260550 / 0.275898 (-0.015348) | 0.284598 / 0.323480 (-0.038881) | 0.004165 / 0.007986 (-0.003821) | 0.002741 / 0.004328 (-0.001588) | 0.048567 / 0.004250 (0.044317) | 0.045185 / 0.037052 (0.008133) | 0.273164 / 0.258489 (0.014674) | 0.301995 / 0.293841 (0.008155) | 0.028802 / 0.128546 (-0.099744) | 0.010539 / 0.075646 (-0.065108) | 0.057967 / 0.419271 (-0.361305) | 0.032826 / 0.043533 (-0.010706) | 0.260425 / 0.255139 (0.005286) | 0.280175 / 0.283200 (-0.003024) | 0.017202 / 0.141683 (-0.124481) | 1.129588 / 1.452155 (-0.322567) | 1.199565 / 1.492716 (-0.293152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091234 / 0.018006 (0.073228) | 0.299313 / 0.000490 (0.298824) | 0.000203 / 0.000200 (0.000003) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022519 / 0.037411 (-0.014892) | 0.075915 / 0.014526 (0.061389) | 0.088636 / 0.176557 (-0.087920) | 0.128234 / 0.737135 (-0.608902) | 0.089782 / 0.296338 (-0.206556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291936 / 0.215209 (0.076727) | 2.864589 / 2.077655 (0.786935) | 1.575649 / 1.504120 (0.071529) | 1.452797 / 1.541195 (-0.088398) | 1.476245 / 1.468490 (0.007754) | 0.593972 / 4.584777 (-3.990804) | 0.962315 / 3.745712 (-2.783397) | 2.836496 / 5.269862 (-2.433366) | 1.758639 / 4.565676 (-2.807038) | 0.064842 / 0.424275 (-0.359433) | 0.005076 / 0.007607 (-0.002531) | 0.342568 / 0.226044 (0.116524) | 3.392753 / 2.268929 (1.123825) | 1.908305 / 55.444624 (-53.536319) | 1.632140 / 6.876477 (-5.244337) | 1.653048 / 2.142072 (-0.489024) | 0.662068 / 4.805227 (-4.143159) | 0.118326 / 6.500664 (-6.382338) | 0.041222 / 0.075469 (-0.034247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005119 / 1.841788 (-0.836669) | 12.250922 / 8.074308 (4.176614) | 9.775600 / 10.191392 (-0.415792) | 0.146230 / 0.680424 (-0.534194) | 0.015883 / 0.534201 (-0.518318) | 0.290807 / 0.579283 (-0.288476) | 0.126002 / 0.434364 (-0.308362) | 0.392332 / 0.540337 (-0.148005) | 0.435513 / 1.386936 (-0.951423) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ceb25e118f21f54b5b5c5e9c223713f14a798eb5 \"CML watermark\")\n" ]
2024-04-17T14:41:12
2024-05-02T07:31:03
2024-04-30T09:44:24
MEMBER
null
TODO:
- [x] Add docs
- [x] Delete token arg from CLI example - See: #6839

Close #6810.
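A hedged sketch of the underlying Hub operation using public `huggingface_hub` primitives; the repo id and config path are placeholders, and this is not the PR's actual implementation (whose test suite mocks `HfApi.create_commit`, per the comments):

```python
from huggingface_hub import CommitOperationDelete, HfApi

api = HfApi()
# Delete one config's data files from a no-script dataset in a single
# commit, opened as a pull request on the Hub.
api.create_commit(
    repo_id="user/my-dataset",  # placeholder
    repo_type="dataset",
    operations=[
        CommitOperationDelete(path_in_repo="config_name/", is_folder=True)
    ],
    commit_message="Delete 'config_name' subset",
    create_pr=True,
)
```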
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6820/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6820", "html_url": "https://github.com/huggingface/datasets/pull/6820", "diff_url": "https://github.com/huggingface/datasets/pull/6820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6820.patch", "merged_at": "2024-04-30T09:44:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/6819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6819/comments
https://api.github.com/repos/huggingface/datasets/issues/6819/events
https://github.com/huggingface/datasets/issues/6819
2,248,043,797
I_kwDODunzps6F_m0V
6,819
Give more details in `DataFilesNotFoundError` when getting the config names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-04-17T11:19:47
2024-04-17T11:19:47
null
CONTRIBUTOR
null
### Feature request After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error: ``` { "error": "Cannot get the config names for the dataset.", "cause_exception": "DataFilesNotFoundError", "cause_message": "No (supported) data files found in cis-lmu/Glot500", "cause_traceback": [ "Traceback (most recent call last):\n", " File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n", " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n", " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n", " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n", " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n", " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n", "datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n" ] } ``` because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4 Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would tell that configuration `aze_Ethi` has no supported data files, instead of telling that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true). ### Motivation Giving more detail in the error would help the Datasets Hub users to debug why the dataset viewer does not work. ### Your contribution Not sure how to best fix this, as there are a lot of loops on the dataset configs in the traceback methods. "maybe" it would be easier to handle if the code was completely isolating each config.
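A hypothetical sketch of the more specific error this issue asks for; the helper name and message format are assumptions, not current `datasets` code:

```python
from datasets.exceptions import DataFilesNotFoundError

def raise_missing_data_files(path: str, config_name: str) -> None:
    # Name the first configuration with missing files instead of blaming
    # the whole dataset, so users can spot the stale README entry.
    raise DataFilesNotFoundError(
        f"No (supported) data files found for config '{config_name}'"
        + (f" in {path}" if path else "")
    )
```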
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6819/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6817/comments
https://api.github.com/repos/huggingface/datasets/issues/6817/events
https://github.com/huggingface/datasets/pull/6817
2,246,578,480
PR_kwDODunzps5s1RAN
6,817
Support indexable objects in `Dataset.__getitem__`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6817). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005464 / 0.011353 (-0.005889) | 0.004174 / 0.011008 (-0.006834) | 0.064252 / 0.038508 (0.025744) | 0.033305 / 0.023109 (0.010196) | 0.245831 / 0.275898 (-0.030067) | 0.275575 / 0.323480 (-0.047905) | 0.003359 / 0.007986 (-0.004626) | 0.004196 / 0.004328 (-0.000132) | 0.049961 / 0.004250 (0.045710) | 0.048940 / 0.037052 (0.011888) | 0.261037 / 0.258489 (0.002548) | 0.295329 / 0.293841 (0.001488) | 0.028570 / 0.128546 (-0.099976) | 0.010747 / 0.075646 (-0.064900) | 0.216021 / 0.419271 (-0.203251) | 0.036885 / 0.043533 (-0.006648) | 0.251169 / 0.255139 (-0.003970) | 0.286233 / 0.283200 (0.003034) | 0.021253 / 0.141683 (-0.120429) | 1.150669 / 1.452155 (-0.301485) | 1.187577 / 1.492716 (-0.305140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094443 / 0.018006 (0.076436) | 0.304410 / 0.000490 (0.303920) | 0.000213 / 0.000200 (0.000013) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019568 / 0.037411 (-0.017844) | 0.065734 / 0.014526 (0.051208) | 0.076042 / 0.176557 (-0.100515) | 0.123624 / 0.737135 (-0.613511) | 0.078047 / 0.296338 (-0.218291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295725 / 0.215209 (0.080515) | 2.752501 / 2.077655 (0.674846) | 1.461856 / 1.504120 (-0.042264) | 1.353692 / 1.541195 (-0.187503) | 1.391777 / 1.468490 (-0.076713) | 0.563423 / 4.584777 (-4.021354) | 2.384620 / 3.745712 (-1.361092) | 2.876092 / 5.269862 (-2.393769) | 1.803913 / 4.565676 (-2.761763) | 0.062678 / 0.424275 (-0.361597) | 0.005428 / 0.007607 (-0.002179) | 0.333797 / 0.226044 (0.107753) | 3.304458 / 2.268929 (1.035530) | 1.801768 / 55.444624 (-53.642856) | 1.569406 / 6.876477 (-5.307070) | 1.614535 / 2.142072 (-0.527538) | 0.650178 / 4.805227 (-4.155049) | 0.119693 / 6.500664 (-6.380971) | 0.042832 / 0.075469 (-0.032637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982035 / 1.841788 (-0.859753) | 12.390006 / 8.074308 (4.315698) | 10.127018 / 10.191392 (-0.064374) | 0.131963 / 0.680424 (-0.548461) | 0.013926 / 0.534201 (-0.520275) | 0.289587 / 0.579283 (-0.289696) | 0.270302 / 0.434364 (-0.164062) | 0.327231 / 0.540337 (-0.213107) | 0.422522 / 1.386936 (-0.964414) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003914 / 0.011008 (-0.007094) | 0.050315 / 0.038508 (0.011807) | 0.032367 / 0.023109 (0.009257) | 0.271732 / 0.275898 (-0.004166) | 0.297248 / 0.323480 (-0.026231) | 0.005101 / 0.007986 (-0.002884) | 0.002882 / 0.004328 (-0.001447) | 0.049651 / 0.004250 (0.045401) | 0.043773 / 0.037052 (0.006721) | 0.288011 / 0.258489 (0.029522) | 0.311863 / 0.293841 (0.018023) | 0.029147 / 0.128546 (-0.099399) | 0.010722 / 0.075646 (-0.064925) | 0.058832 / 0.419271 (-0.360440) | 0.033092 / 0.043533 (-0.010441) | 0.274686 / 0.255139 (0.019547) | 0.294174 / 0.283200 (0.010975) | 0.019196 / 0.141683 (-0.122486) | 1.126615 / 1.452155 (-0.325540) | 1.193107 / 1.492716 (-0.299609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097547 / 0.018006 (0.079541) | 0.316018 / 0.000490 (0.315529) | 0.000330 / 0.000200 (0.000130) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022336 / 0.037411 (-0.015076) | 0.077092 / 0.014526 (0.062566) | 0.088873 / 0.176557 (-0.087684) | 0.128517 / 0.737135 (-0.608619) | 0.094061 / 0.296338 (-0.202278) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300100 / 0.215209 (0.084891) | 2.893114 / 2.077655 (0.815460) | 1.570541 / 1.504120 (0.066421) | 1.453538 / 1.541195 (-0.087657) | 1.505325 / 1.468490 (0.036835) | 0.567955 / 4.584777 (-4.016822) | 2.458547 / 3.745712 (-1.287166) | 2.969181 / 5.269862 (-2.300680) | 1.850082 / 4.565676 (-2.715594) | 0.063811 / 0.424275 (-0.360464) | 0.005378 / 0.007607 (-0.002229) | 0.348219 / 0.226044 (0.122175) | 3.443986 / 2.268929 (1.175057) | 1.943005 / 55.444624 (-53.501620) | 1.686541 / 6.876477 (-5.189935) | 1.715552 / 2.142072 (-0.426520) | 0.641361 / 4.805227 (-4.163866) | 0.116652 / 6.500664 (-6.384012) | 0.042216 / 0.075469 (-0.033253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020102 / 1.841788 (-0.821686) | 12.966127 / 8.074308 (4.891819) | 10.748397 / 10.191392 (0.557005) | 0.132601 / 0.680424 (-0.547823) | 0.016643 / 0.534201 (-0.517558) | 0.289422 / 0.579283 (-0.289861) | 0.275524 / 0.434364 (-0.158840) | 0.332835 / 0.540337 (-0.207503) | 0.427867 / 1.386936 (-0.959069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5eb93f61f9f6e7fefba5d800defe21e50ddf8c58 \"CML watermark\")\n" ]
2024-04-16T17:41:27
2024-04-16T18:27:44
2024-04-16T18:17:29
COLLABORATOR
null
As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__`, such as `np.int64`, in `Dataset.__getitem__`.
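For context, a minimal sketch of the `operator.index`-based key normalization discussed in the linked PR thread; the function name here is illustrative, not the actual `datasets` internals:

```python
import operator

import numpy as np

def normalize_key(key):
    # Coerce anything implementing __index__ (int, np.int64, np.uint32, ...) to a plain int.
    try:
        return operator.index(key)
    except TypeError:
        raise TypeError(f"Wrong key type: {key!r} of type {type(key)}") from None

assert normalize_key(np.int64(3)) == 3
assert isinstance(normalize_key(np.int64(3)), int)
```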
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6817/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6817", "html_url": "https://github.com/huggingface/datasets/pull/6817", "diff_url": "https://github.com/huggingface/datasets/pull/6817.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6817.patch", "merged_at": "2024-04-16T18:17:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/6816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6816/comments
https://api.github.com/repos/huggingface/datasets/issues/6816/events
https://github.com/huggingface/datasets/pull/6816
2,246,264,911
PR_kwDODunzps5s0MYO
6,816
Improve typing of Dataset.search, matching definition
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi! This is a breaking change. A better solution is to check for \"indexable\" types in `__getitem__` to support keys such as `np.int64`:\r\n```python\r\nimport operator\r\n\r\ndef _query_table_with_indices_mapping(...): # or _query_table\r\n ...\r\n try:\r\n operator.index(key)\r\n except TypeError:\r\n pass\r\n \r\n _raise_bad_key_type(key)\r\n```", "Sounds good! We should still update type annotations for SearchResult in my opinion." ]
2024-04-16T14:53:39
2024-04-16T15:54:10
2024-04-16T15:54:10
CONTRIBUTOR
null
Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays. The definition in `SearchResult` is a `List[int]`, so this PR now matches the expected type. The previous behavior is a bit annoying, as `Dataset.__getitem__` doesn't support `numpy.int64`, which forced me to convert `indices` to `int`, e.g.: ```python score, indices = ds.search(...) item = ds[int(indices[0])] ```
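A hedged sketch of what aligning the annotation with the runtime types could look like; the class and helper below are hypothetical mirrors of the definition under discussion, not the real `datasets.search` code:

```python
from typing import List, NamedTuple

import numpy as np

class SearchResults(NamedTuple):
    # Hypothetical mirror of the annotation under discussion.
    scores: List[float]
    indices: List[int]

def as_search_results(scores: np.ndarray, indices: np.ndarray) -> SearchResults:
    # .tolist() yields plain Python floats/ints, matching the declared types.
    return SearchResults(scores.tolist(), indices.tolist())

res = as_search_results(np.array([0.9, 0.7]), np.array([12, 48]))
assert isinstance(res.indices[0], int)  # safe to use directly as ds[res.indices[0]]
```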
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6816/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6816", "html_url": "https://github.com/huggingface/datasets/pull/6816", "diff_url": "https://github.com/huggingface/datasets/pull/6816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6816.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6815/comments
https://api.github.com/repos/huggingface/datasets/issues/6815/events
https://github.com/huggingface/datasets/pull/6815
2,246,197,070
PR_kwDODunzps5sz9eC
6,815
Remove `os.path.relpath` in `resolve_patterns`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6815). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005101 / 0.011353 (-0.006252) | 0.003478 / 0.011008 (-0.007531) | 0.063634 / 0.038508 (0.025126) | 0.030670 / 0.023109 (0.007561) | 0.240057 / 0.275898 (-0.035841) | 0.258726 / 0.323480 (-0.064754) | 0.004136 / 0.007986 (-0.003849) | 0.002667 / 0.004328 (-0.001662) | 0.048968 / 0.004250 (0.044718) | 0.043125 / 0.037052 (0.006073) | 0.249033 / 0.258489 (-0.009456) | 0.282630 / 0.293841 (-0.011211) | 0.027528 / 0.128546 (-0.101018) | 0.009987 / 0.075646 (-0.065660) | 0.210614 / 0.419271 (-0.208657) | 0.034965 / 0.043533 (-0.008567) | 0.239199 / 0.255139 (-0.015940) | 0.276891 / 0.283200 (-0.006309) | 0.017781 / 0.141683 (-0.123902) | 1.142795 / 1.452155 (-0.309360) | 1.184171 / 1.492716 (-0.308545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092075 / 0.018006 (0.074068) | 0.300709 / 0.000490 (0.300220) | 0.000217 / 0.000200 (0.000017) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017887 / 0.037411 (-0.019525) | 0.061134 / 0.014526 (0.046608) | 0.077075 / 0.176557 (-0.099482) | 0.118808 / 0.737135 (-0.618327) | 0.074961 / 0.296338 (-0.221377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280404 / 0.215209 (0.065194) | 2.759453 / 2.077655 (0.681798) | 1.437552 / 1.504120 (-0.066568) | 1.318703 / 1.541195 (-0.222492) | 1.313075 / 1.468490 (-0.155416) | 0.564876 / 4.584777 (-4.019901) | 2.381595 / 3.745712 (-1.364118) | 2.759171 / 5.269862 (-2.510691) | 1.725878 / 4.565676 (-2.839799) | 0.062627 / 0.424275 (-0.361648) | 0.005295 / 0.007607 (-0.002312) | 0.335245 / 0.226044 (0.109201) | 3.276266 / 2.268929 (1.007337) | 1.843272 / 55.444624 (-53.601353) | 1.519948 / 6.876477 (-5.356529) | 1.519626 / 2.142072 (-0.622447) | 0.637891 / 4.805227 (-4.167336) | 0.116260 / 6.500664 (-6.384404) | 0.041768 / 0.075469 (-0.033701) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981739 / 1.841788 (-0.860049) | 11.354768 / 8.074308 (3.280460) | 9.900585 / 10.191392 (-0.290807) | 0.130683 / 0.680424 (-0.549741) | 0.014122 / 0.534201 (-0.520079) | 0.297451 / 0.579283 (-0.281832) | 0.264786 / 0.434364 (-0.169577) | 0.337559 / 0.540337 (-0.202778) | 0.425131 / 1.386936 (-0.961805) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005182 / 0.011353 (-0.006171) | 0.003355 / 0.011008 (-0.007653) | 0.049842 / 0.038508 (0.011334) | 0.031094 / 0.023109 (0.007985) | 0.270080 / 0.275898 (-0.005818) | 0.291602 / 0.323480 (-0.031878) | 0.004210 / 0.007986 (-0.003776) | 0.002720 / 0.004328 (-0.001608) | 0.048986 / 0.004250 (0.044736) | 0.055187 / 0.037052 (0.018135) | 0.280085 / 0.258489 (0.021595) | 0.308148 / 0.293841 (0.014308) | 0.029300 / 0.128546 (-0.099246) | 0.009976 / 0.075646 (-0.065670) | 0.057930 / 0.419271 (-0.361341) | 0.032543 / 0.043533 (-0.010990) | 0.277485 / 0.255139 (0.022346) | 0.289345 / 0.283200 (0.006145) | 0.018070 / 0.141683 (-0.123613) | 1.140977 / 1.452155 (-0.311178) | 1.190543 / 1.492716 (-0.302173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093416 / 0.018006 (0.075410) | 0.298732 / 0.000490 (0.298242) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022167 / 0.037411 (-0.015244) | 0.074970 / 0.014526 (0.060444) | 0.086047 / 0.176557 (-0.090509) | 0.125228 / 0.737135 (-0.611907) | 0.088330 / 0.296338 (-0.208008) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292016 / 0.215209 (0.076807) | 2.845712 / 2.077655 (0.768057) | 1.576951 / 1.504120 (0.072831) | 1.452298 / 1.541195 (-0.088897) | 1.456918 / 1.468490 (-0.011572) | 0.560529 / 4.584777 (-4.024248) | 2.425333 / 3.745712 (-1.320379) | 2.739416 / 5.269862 (-2.530445) | 1.715779 / 4.565676 (-2.849898) | 0.062568 / 0.424275 (-0.361707) | 0.005327 / 0.007607 (-0.002280) | 0.351376 / 0.226044 (0.125332) | 3.401855 / 2.268929 (1.132927) | 1.921844 / 55.444624 (-53.522780) | 1.648423 / 6.876477 (-5.228054) | 1.642003 / 2.142072 (-0.500069) | 0.640789 / 4.805227 (-4.164438) | 0.114699 / 6.500664 (-6.385965) | 0.040451 / 0.075469 (-0.035018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004186 / 1.841788 (-0.837602) | 11.879918 / 8.074308 (3.805609) | 9.981852 / 10.191392 (-0.209540) | 0.141298 / 0.680424 (-0.539126) | 0.015005 / 0.534201 (-0.519196) | 0.291537 / 0.579283 (-0.287746) | 0.272093 / 0.434364 (-0.162271) | 0.331361 / 0.540337 (-0.208977) | 0.422940 / 1.386936 (-0.963996) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed8860faef3e751f3b77c08e09ce723a74d2c2e5 \"CML watermark\")\n" ]
2024-04-16T14:23:13
2024-04-16T16:06:48
2024-04-16T15:58:22
COLLABORATOR
null
... to save a few seconds when resolving repos with many data files.
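A rough, hypothetical micro-benchmark of why dropping a per-file `os.path.relpath` call helps at scale (paths are synthetic and timings will vary by machine):

```python
import os
import timeit

# 200k synthetic paths, roughly what resolving a large no-script repo can involve.
paths = [f"data/train/shard-{i:05d}.parquet" for i in range(200_000)]

t = timeit.timeit(lambda: [os.path.relpath(p, ".") for p in paths], number=1)
print(f"os.path.relpath over {len(paths):,} paths: {t:.2f}s")
```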
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6815/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6815", "html_url": "https://github.com/huggingface/datasets/pull/6815", "diff_url": "https://github.com/huggingface/datasets/pull/6815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6815.patch", "merged_at": "2024-04-16T15:58:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/6814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6814/comments
https://api.github.com/repos/huggingface/datasets/issues/6814/events
https://github.com/huggingface/datasets/issues/6814
2,245,857,902
I_kwDODunzps6F3RJu
6,814
`map` with `num_proc` > 1 leads to OOM
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk" ]
2024-04-16T11:56:03
2024-04-19T11:53:41
null
CONTRIBUTOR
null
### Describe the bug When running `map` on a parquet dataset loaded from the local machine, RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps in order to prevent this? ### Steps to reproduce the bug ``` ds = load_dataset("parquet", data_files=dataset_path, split="train") ds = ds.shard(num_shards=4, index=0) ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) ds = ds.map(prepare_dataset, num_proc=32, writer_batch_size=1000, keep_in_memory=False, desc="preprocess dataset") ``` ``` def prepare_dataset(batch): # load audio sample = batch["audio"] inputs = feature_extractor(sample["array"], sampling_rate=16000) batch["input_values"] = inputs.input_values[0] batch["input_length"] = len(sample["array"].squeeze()) return batch ``` ### Expected behavior It shouldn't run into an OOM problem. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17 - Python version: 3.8.19 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2024.2.0
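A hedged illustration of the `writer_batch_size` suggestion from the comment above, continuing the reporter's snippet; the values are illustrative, not a verified fix:

```python
ds = ds.map(
    prepare_dataset,
    num_proc=8,              # fewer workers -> fewer per-process write buffers
    writer_batch_size=100,   # default is 1000; smaller -> more frequent flushes to disk
    keep_in_memory=False,
    desc="preprocess dataset",
)
```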
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6814/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6813/comments
https://api.github.com/repos/huggingface/datasets/issues/6813/events
https://github.com/huggingface/datasets/pull/6813
2,245,626,870
PR_kwDODunzps5sx-9V
6,813
Add Dataset.take and Dataset.skip
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6813). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005153 / 0.011353 (-0.006200) | 0.003560 / 0.011008 (-0.007448) | 0.063142 / 0.038508 (0.024634) | 0.030799 / 0.023109 (0.007690) | 0.241754 / 0.275898 (-0.034144) | 0.264874 / 0.323480 (-0.058606) | 0.003099 / 0.007986 (-0.004887) | 0.002629 / 0.004328 (-0.001700) | 0.049006 / 0.004250 (0.044756) | 0.044831 / 0.037052 (0.007779) | 0.258961 / 0.258489 (0.000472) | 0.286939 / 0.293841 (-0.006902) | 0.026756 / 0.128546 (-0.101791) | 0.010443 / 0.075646 (-0.065204) | 0.207264 / 0.419271 (-0.212007) | 0.035242 / 0.043533 (-0.008291) | 0.250440 / 0.255139 (-0.004699) | 0.265405 / 0.283200 (-0.017794) | 0.018924 / 0.141683 (-0.122759) | 1.138607 / 1.452155 (-0.313547) | 1.203017 / 1.492716 (-0.289700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091293 / 0.018006 (0.073286) | 0.303937 / 0.000490 (0.303447) | 0.000266 / 0.000200 (0.000066) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018667 / 0.037411 (-0.018744) | 0.061310 / 0.014526 (0.046784) | 0.073565 / 0.176557 (-0.102991) | 0.119044 / 0.737135 (-0.618091) | 0.074484 / 0.296338 (-0.221854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286324 / 0.215209 (0.071114) | 2.836637 / 2.077655 (0.758982) | 1.458531 / 1.504120 (-0.045589) | 1.333081 / 1.541195 (-0.208114) | 1.328398 / 1.468490 (-0.140092) | 0.571467 / 4.584777 (-4.013310) | 2.409869 / 3.745712 (-1.335843) | 2.760241 / 5.269862 (-2.509621) | 1.728153 / 4.565676 (-2.837523) | 0.063008 / 0.424275 (-0.361267) | 0.005375 / 0.007607 (-0.002232) | 0.338574 / 0.226044 (0.112530) | 3.355485 / 2.268929 (1.086556) | 1.812741 / 55.444624 (-53.631884) | 1.507435 / 6.876477 (-5.369041) | 1.516957 / 2.142072 (-0.625116) | 0.643790 / 4.805227 (-4.161437) | 0.117465 / 6.500664 (-6.383199) | 0.041960 / 0.075469 (-0.033509) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993787 / 1.841788 (-0.848001) | 11.439076 / 8.074308 (3.364768) | 9.636815 / 10.191392 (-0.554577) | 0.131292 / 0.680424 (-0.549132) | 0.014916 / 0.534201 (-0.519285) | 0.287309 / 0.579283 (-0.291974) | 0.261971 / 0.434364 (-0.172392) | 0.324453 / 0.540337 (-0.215885) | 0.420306 / 1.386936 (-0.966630) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005138 / 0.011353 (-0.006215) | 0.003719 / 0.011008 (-0.007289) | 0.050411 / 0.038508 (0.011903) | 0.031334 / 0.023109 (0.008225) | 0.281752 / 0.275898 (0.005854) | 0.299445 / 0.323480 (-0.024035) | 0.004194 / 0.007986 (-0.003792) | 0.002737 / 0.004328 (-0.001591) | 0.048527 / 0.004250 (0.044277) | 0.040294 / 0.037052 (0.003242) | 0.291763 / 0.258489 (0.033274) | 0.317597 / 0.293841 (0.023757) | 0.029014 / 0.128546 (-0.099532) | 0.010372 / 0.075646 (-0.065274) | 0.058704 / 0.419271 (-0.360568) | 0.033259 / 0.043533 (-0.010273) | 0.278109 / 0.255139 (0.022970) | 0.299593 / 0.283200 (0.016393) | 0.018048 / 0.141683 (-0.123635) | 1.185558 / 1.452155 (-0.266597) | 1.203481 / 1.492716 (-0.289236) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091149 / 0.018006 (0.073143) | 0.306152 / 0.000490 (0.305662) | 0.000246 / 0.000200 (0.000046) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022082 / 0.037411 (-0.015330) | 0.074487 / 0.014526 (0.059961) | 0.086112 / 0.176557 (-0.090444) | 0.124303 / 0.737135 (-0.612832) | 0.088831 / 0.296338 (-0.207508) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291745 / 0.215209 (0.076536) | 2.878397 / 2.077655 (0.800742) | 1.606920 / 1.504120 (0.102801) | 1.492352 / 1.541195 (-0.048843) | 1.509725 / 1.468490 (0.041235) | 0.567087 / 4.584777 (-4.017690) | 2.436423 / 3.745712 (-1.309290) | 2.793930 / 5.269862 (-2.475932) | 1.748329 / 4.565676 (-2.817347) | 0.063424 / 0.424275 (-0.360851) | 0.005476 / 0.007607 (-0.002131) | 0.346211 / 0.226044 (0.120167) | 3.461288 / 2.268929 (1.192360) | 1.979362 / 55.444624 (-53.465262) | 1.702877 / 6.876477 (-5.173600) | 1.699087 / 2.142072 (-0.442985) | 0.645116 / 4.805227 (-4.160112) | 0.116186 / 6.500664 (-6.384478) | 0.041246 / 0.075469 (-0.034223) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017540 / 1.841788 (-0.824248) | 12.016640 / 8.074308 (3.942332) | 10.234085 / 10.191392 (0.042693) | 0.147558 / 0.680424 (-0.532866) | 0.015096 / 0.534201 (-0.519105) | 0.288077 / 0.579283 (-0.291206) | 0.274629 / 0.434364 (-0.159735) | 0.334097 / 0.540337 (-0.206241) | 0.425476 / 1.386936 (-0.961460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#55eb1d9a34a91dbf2418166f9f1d92f7181e778b \"CML watermark\")\n" ]
2024-04-16T09:53:42
2024-04-16T14:12:14
2024-04-16T14:06:07
MEMBER
null
...to be aligned with IterableDataset.take and IterableDataset.skip
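A quick usage sketch, assuming the `Dataset.take`/`Dataset.skip` added by this PR mirror the existing `IterableDataset` methods:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
head = ds.take(3)   # rows 0, 1, 2
tail = ds.skip(3)   # rows 3 .. 9
print(head["x"], tail["x"][:3])  # [0, 1, 2] [3, 4, 5]
```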
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6813/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6813", "html_url": "https://github.com/huggingface/datasets/pull/6813", "diff_url": "https://github.com/huggingface/datasets/pull/6813.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6813.patch", "merged_at": "2024-04-16T14:06:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/6812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6812/comments
https://api.github.com/repos/huggingface/datasets/issues/6812/events
https://github.com/huggingface/datasets/pull/6812
2,244,898,824
PR_kwDODunzps5svgoq
6,812
Run CI
{ "login": "charliermarsh", "id": 1309177, "node_id": "MDQ6VXNlcjEzMDkxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1309177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/charliermarsh", "html_url": "https://github.com/charliermarsh", "followers_url": "https://api.github.com/users/charliermarsh/followers", "following_url": "https://api.github.com/users/charliermarsh/following{/other_user}", "gists_url": "https://api.github.com/users/charliermarsh/gists{/gist_id}", "starred_url": "https://api.github.com/users/charliermarsh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/charliermarsh/subscriptions", "organizations_url": "https://api.github.com/users/charliermarsh/orgs", "repos_url": "https://api.github.com/users/charliermarsh/repos", "events_url": "https://api.github.com/users/charliermarsh/events{/privacy}", "received_events_url": "https://api.github.com/users/charliermarsh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "(Sorry, meant to open this against my own fork. I'm attempting to debug this issue (https://github.com/astral-sh/uv/issues/1921#issuecomment-2058056192) reported by `huggingface/datasets` on the uv repo.)" ]
2024-04-16T01:12:36
2024-04-16T01:14:16
2024-04-16T01:12:41
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6812/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6812", "html_url": "https://github.com/huggingface/datasets/pull/6812", "diff_url": "https://github.com/huggingface/datasets/pull/6812.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6812.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6811/comments
https://api.github.com/repos/huggingface/datasets/issues/6811/events
https://github.com/huggingface/datasets/pull/6811
2,243,656,096
PR_kwDODunzps5srOtR
6,811
add allow_primitive_to_str and allow_decimal_to_str instead of allow_number_to_str
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6811). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@mariosasko pytest seems to be missing on windows?", "CI is not behaving well today 🙂 ", "I couldn't find an instance of the `allow_number_to_str` parameter (or `array_cast`/`cast_array_to_feature` more generally) being used in the wild. So, I think simply removing `allow_number_to_str` instead of deprecating it should be fine, considering `array_cast`/`cast_array_to_feature` are somewhat hidden. Do you agree @lhoestq? ", "Yup we can remove without any deprecation cycle", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005253 / 0.011353 (-0.006100) | 0.003767 / 0.011008 (-0.007241) | 0.064599 / 0.038508 (0.026091) | 0.030758 / 0.023109 (0.007649) | 0.237437 / 0.275898 (-0.038461) | 0.277580 / 0.323480 (-0.045900) | 0.004220 / 0.007986 (-0.003766) | 0.002738 / 0.004328 (-0.001591) | 0.049393 / 0.004250 (0.045143) | 0.045283 / 0.037052 (0.008231) | 0.249907 / 0.258489 (-0.008582) | 0.283301 / 0.293841 (-0.010540) | 0.027722 / 0.128546 (-0.100825) | 0.010842 / 0.075646 (-0.064804) | 0.219197 / 0.419271 (-0.200074) | 0.036449 / 0.043533 (-0.007084) | 0.237774 / 0.255139 (-0.017365) | 0.257981 / 0.283200 (-0.025218) | 0.018098 / 0.141683 (-0.123585) | 1.161778 / 1.452155 (-0.290376) | 1.212707 / 1.492716 (-0.280010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096462 / 0.018006 (0.078456) | 0.305322 / 0.000490 (0.304832) | 0.000218 / 0.000200 (0.000018) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018438 / 0.037411 (-0.018973) | 0.061633 / 0.014526 (0.047107) | 0.073678 / 0.176557 (-0.102879) | 0.122033 / 0.737135 (-0.615103) | 0.074846 / 0.296338 (-0.221493) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | 
read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279564 / 0.215209 (0.064355) | 2.756984 / 2.077655 (0.679330) | 1.486525 / 1.504120 (-0.017595) | 1.366474 / 1.541195 (-0.174721) | 1.370192 / 1.468490 (-0.098298) | 0.576940 / 4.584777 (-4.007837) | 2.414088 / 3.745712 (-1.331624) | 2.788423 / 5.269862 (-2.481439) | 1.738695 / 4.565676 (-2.826982) | 0.064456 / 0.424275 (-0.359819) | 0.005536 / 0.007607 (-0.002071) | 0.337266 / 0.226044 (0.111222) | 3.327140 / 2.268929 (1.058212) | 1.837553 / 55.444624 (-53.607072) | 1.538955 / 6.876477 (-5.337521) | 1.575624 / 2.142072 (-0.566448) | 0.639960 / 4.805227 (-4.165267) | 0.117607 / 6.500664 (-6.383057) | 0.042077 / 0.075469 (-0.033393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960488 / 1.841788 (-0.881300) | 11.565280 / 8.074308 (3.490972) | 9.702633 / 10.191392 (-0.488759) | 0.139106 / 0.680424 (-0.541318) | 0.013601 / 0.534201 (-0.520600) | 0.291499 / 0.579283 (-0.287784) | 0.277433 / 0.434364 (-0.156930) | 0.325700 / 0.540337 (-0.214637) | 0.421036 / 1.386936 (-0.965900) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005405 / 0.011353 (-0.005948) | 0.003816 / 0.011008 (-0.007192) | 0.050422 / 0.038508 (0.011914) | 0.030473 / 0.023109 (0.007364) | 0.275975 / 0.275898 (0.000077) | 0.298002 / 0.323480 (-0.025478) | 0.004280 / 0.007986 (-0.003706) | 0.002746 / 0.004328 (-0.001583) | 0.049649 / 0.004250 (0.045398) | 0.040675 / 0.037052 (0.003623) | 0.287496 / 0.258489 (0.029007) | 0.315140 / 0.293841 (0.021299) | 0.029835 / 0.128546 (-0.098711) 
| 0.010443 / 0.075646 (-0.065204) | 0.058299 / 0.419271 (-0.360972) | 0.032944 / 0.043533 (-0.010588) | 0.279468 / 0.255139 (0.024329) | 0.296336 / 0.283200 (0.013136) | 0.018572 / 0.141683 (-0.123111) | 1.177622 / 1.452155 (-0.274532) | 1.238240 / 1.492716 (-0.254477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091867 / 0.018006 (0.073861) | 0.299982 / 0.000490 (0.299492) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022649 / 0.037411 (-0.014762) | 0.074948 / 0.014526 (0.060422) | 0.087949 / 0.176557 (-0.088607) | 0.125875 / 0.737135 (-0.611261) | 0.089295 / 0.296338 (-0.207044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290387 / 0.215209 (0.075178) | 2.820969 / 2.077655 (0.743315) | 1.614607 / 1.504120 (0.110487) | 1.496959 / 1.541195 (-0.044236) | 1.526475 / 1.468490 (0.057985) | 0.570087 / 4.584777 (-4.014690) | 2.423106 / 3.745712 (-1.322606) | 2.825321 / 5.269862 (-2.444540) | 1.765580 / 4.565676 (-2.800097) | 0.063289 / 0.424275 (-0.360986) | 0.005456 / 0.007607 (-0.002151) | 0.344100 / 0.226044 (0.118055) | 3.395733 / 2.268929 (1.126804) | 1.951794 / 55.444624 (-53.492830) | 1.677689 / 6.876477 (-5.198787) | 1.684448 / 2.142072 (-0.457624) | 0.644343 / 4.805227 (-4.160885) | 0.115796 / 6.500664 (-6.384868) | 0.041052 / 0.075469 (-0.034417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031487 / 1.841788 (-0.810301) | 12.116156 / 8.074308 (4.041848) | 10.472247 / 10.191392 (0.280855) | 0.142934 / 0.680424 (-0.537490) | 0.015470 / 0.534201 (-0.518731) | 0.290402 / 0.579283 (-0.288882) | 0.272594 / 0.434364 (-0.161770) | 0.328311 / 0.540337 (-0.212027) | 0.424694 / 1.386936 (-0.962242) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8983a3b4dec315bf25331a6065cb74de9017f0e8 \"CML watermark\")\n" ]
2024-04-15T13:14:38
2024-04-16T17:09:28
2024-04-16T17:03:17
CONTRIBUTOR
null
PR for #6805
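For context, a minimal pyarrow sketch of the two cast paths the new flags distinguish (plain primitives vs. decimals to string); this is an illustration of the underlying casts, not the `datasets` internals:

```python
from decimal import Decimal

import pyarrow as pa
import pyarrow.compute as pc

ints = pa.array([1, 2, 3])
decs = pa.array([Decimal("1.50"), Decimal("2.25")], type=pa.decimal128(5, 2))

print(pc.cast(ints, pa.string()))  # primitive -> str
print(pc.cast(decs, pa.string()))  # decimal -> str (needs a reasonably recent pyarrow)
```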
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6811/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6811", "html_url": "https://github.com/huggingface/datasets/pull/6811", "diff_url": "https://github.com/huggingface/datasets/pull/6811.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6811.patch", "merged_at": "2024-04-16T17:03:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/6810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6810/comments
https://api.github.com/repos/huggingface/datasets/issues/6810/events
https://github.com/huggingface/datasets/issues/6810
2,242,968,745
I_kwDODunzps6FsPyp
6,810
Allow deleting a subset/config from a no-script dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Probably best to implement this as a CLI command?", "Thanks for your comment, @mariosasko. Or maybe both (in Python and as CLI command)? The Python command would be just the reverse of `push_to_hub`...\r\n\r\nI am working on a draft implementation, so we can discuss about the API and UX." ]
2024-04-15T07:53:26
2024-04-30T09:44:25
2024-04-30T09:44:25
MEMBER
null
As proposed by @BramVanroy, it would be neat to have this functionality through the API.
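A hedged sketch of what such a deletion amounts to under the hood for a no-script dataset: removing the config's data folder via `huggingface_hub`. The repo id and config name below are placeholders:

```python
from huggingface_hub import HfApi

api = HfApi()
api.delete_folder(
    path_in_repo="my_config",          # placeholder: the config's data directory
    repo_id="user/my_dataset",         # placeholder repo id
    repo_type="dataset",
    commit_message="Delete config 'my_config'",
)
# The matching entry under `configs:` in the README.md metadata would still
# need to be removed in a separate commit.
```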
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6810/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6809/comments
https://api.github.com/repos/huggingface/datasets/issues/6809/events
https://github.com/huggingface/datasets/pull/6809
2,242,956,297
PR_kwDODunzps5so0e2
6,809
Make convert_to_parquet CLI command create script branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6809). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets once this PR is merged, I would suggest making a release. Do you agree?\r\n- This PR is a follow-up of #6795", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004963 / 0.011353 (-0.006390) | 0.003121 / 0.011008 (-0.007888) | 0.063421 / 0.038508 (0.024913) | 0.030727 / 0.023109 (0.007618) | 0.237698 / 0.275898 (-0.038200) | 0.266613 / 0.323480 (-0.056867) | 0.004237 / 0.007986 (-0.003749) | 0.002715 / 0.004328 (-0.001614) | 0.049503 / 0.004250 (0.045253) | 0.043705 / 0.037052 (0.006653) | 0.247818 / 0.258489 (-0.010671) | 0.287545 / 0.293841 (-0.006296) | 0.027232 / 0.128546 (-0.101314) | 0.009952 / 0.075646 (-0.065695) | 0.208678 / 0.419271 (-0.210593) | 0.035494 / 0.043533 (-0.008039) | 0.260900 / 0.255139 (0.005761) | 0.264738 / 0.283200 (-0.018461) | 0.018093 / 0.141683 (-0.123590) | 1.130924 / 1.452155 (-0.321231) | 1.178982 / 1.492716 (-0.313734) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094610 / 0.018006 (0.076604) | 0.304674 / 0.000490 (0.304184) | 0.000215 / 0.000200 (0.000015) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018168 / 0.037411 (-0.019243) | 0.062040 / 0.014526 (0.047514) | 0.075634 / 0.176557 (-0.100922) | 0.119488 / 0.737135 (-0.617647) | 0.074790 / 0.296338 (-0.221548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted 
numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282449 / 0.215209 (0.067240) | 2.773231 / 2.077655 (0.695576) | 1.455156 / 1.504120 (-0.048964) | 1.332652 / 1.541195 (-0.208543) | 1.340795 / 1.468490 (-0.127695) | 0.576588 / 4.584777 (-4.008189) | 2.415513 / 3.745712 (-1.330199) | 2.801569 / 5.269862 (-2.468292) | 1.741039 / 4.565676 (-2.824637) | 0.064386 / 0.424275 (-0.359890) | 0.005293 / 0.007607 (-0.002314) | 0.329732 / 0.226044 (0.103688) | 3.227275 / 2.268929 (0.958347) | 1.793121 / 55.444624 (-53.651503) | 1.515115 / 6.876477 (-5.361362) | 1.518738 / 2.142072 (-0.623335) | 0.664465 / 4.805227 (-4.140762) | 0.118813 / 6.500664 (-6.381851) | 0.041715 / 0.075469 (-0.033754) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974371 / 1.841788 (-0.867416) | 11.432869 / 8.074308 (3.358561) | 9.607939 / 10.191392 (-0.583453) | 0.143996 / 0.680424 (-0.536427) | 0.014624 / 0.534201 (-0.519577) | 0.286899 / 0.579283 (-0.292384) | 0.265965 / 0.434364 (-0.168399) | 0.324727 / 0.540337 (-0.215611) | 0.420917 / 1.386936 (-0.966019) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005145 / 0.011353 (-0.006207) | 0.003723 / 0.011008 (-0.007286) | 0.050387 / 0.038508 (0.011879) | 0.030734 / 0.023109 (0.007625) | 0.274331 / 0.275898 (-0.001567) | 0.295045 / 0.323480 (-0.028435) | 0.004187 / 0.007986 (-0.003799) | 0.002781 / 0.004328 (-0.001547) | 0.049698 / 0.004250 (0.045448) | 0.040049 / 0.037052 (0.002996) | 0.284016 / 0.258489 (0.025527) | 0.309908 / 0.293841 (0.016067) | 0.028994 / 0.128546 (-0.099552) | 0.010625 / 0.075646 (-0.065021) | 0.059305 / 0.419271 (-0.359967) | 0.032982 / 0.043533 (-0.010551) | 0.273342 / 0.255139 (0.018203) | 0.291726 / 0.283200 (0.008527) | 0.018084 / 0.141683 (-0.123599) | 1.136864 / 1.452155 (-0.315290) | 1.163656 / 1.492716 (-0.329061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094868 / 0.018006 (0.076862) | 0.302900 / 0.000490 (0.302410) | 0.000226 / 0.000200 (0.000026) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022142 / 0.037411 (-0.015269) | 0.077457 / 0.014526 (0.062932) | 0.087989 / 0.176557 (-0.088568) | 0.127354 / 0.737135 (-0.609781) | 0.092027 / 0.296338 (-0.204312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291196 / 0.215209 (0.075987) | 2.840386 / 2.077655 (0.762731) | 1.571201 / 1.504120 (0.067081) | 1.449429 / 1.541195 (-0.091765) | 1.467189 / 1.468490 (-0.001301) | 0.580991 / 4.584777 (-4.003786) | 2.422566 / 3.745712 (-1.323146) | 2.839621 / 5.269862 (-2.430240) | 1.782987 / 4.565676 (-2.782689) | 0.064765 / 0.424275 (-0.359510) | 0.005338 / 0.007607 (-0.002269) | 0.349148 / 0.226044 (0.123104) | 3.421283 / 2.268929 (1.152355) | 1.943503 / 55.444624 (-53.501122) | 1.653881 / 6.876477 (-5.222596) | 1.698141 / 2.142072 (-0.443931) | 0.667628 / 4.805227 (-4.137599) | 0.118469 / 6.500664 (-6.382195) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026385 / 1.841788 (-0.815403) | 12.225049 / 8.074308 (4.150741) | 10.363072 / 10.191392 (0.171680) | 0.142682 / 0.680424 (-0.537742) | 0.015698 / 0.534201 (-0.518502) | 0.288148 / 0.579283 (-0.291135) | 0.272639 / 0.434364 (-0.161724) | 0.325305 / 0.540337 (-0.215032) | 0.421395 / 1.386936 (-0.965541) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2a14271263da2fda9f966af41c7bd885bfa42256 \"CML watermark\")\n" ]
2024-04-15T07:47:26
2024-04-17T08:44:26
2024-04-17T08:38:18
MEMBER
null
Make the convert_to_parquet CLI command create a "script" branch and keep the script file on it. This PR proposes the simplest UX approach: whenever `--revision` is not explicitly passed (i.e., when the script is on the main branch), try to create a "script" branch from the "main" branch; if the "script" branch already exists, do nothing (see the sketch after this record). Follow-up of: - #6795 Closes #6808. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6809/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6809", "html_url": "https://github.com/huggingface/datasets/pull/6809", "diff_url": "https://github.com/huggingface/datasets/pull/6809.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6809.patch", "merged_at": "2024-04-17T08:38:18" }
true
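A minimal sketch of the branch-creation behavior described in PR #6809 above, added here for illustration only. It uses `huggingface_hub`'s `HfApi.create_branch`; the helper name `ensure_script_branch` is hypothetical, and this is not the PR's actual diff.

```python
# Illustrative sketch of the "script" branch logic described in #6809
# (hypothetical helper, not the actual PR code).
from typing import Optional

from huggingface_hub import HfApi


def ensure_script_branch(repo_id: str, revision: Optional[str] = None, token: Optional[str] = None) -> None:
    """Create a "script" branch from "main" unless an explicit revision was given."""
    if revision is not None:
        # The user pointed --revision at a non-main revision; leave branches untouched.
        return
    HfApi(token=token).create_branch(
        repo_id,
        branch="script",
        revision="main",
        repo_type="dataset",
        exist_ok=True,  # if the "script" branch already exists, do nothing
    )
```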
https://api.github.com/repos/huggingface/datasets/issues/6808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6808/comments
https://api.github.com/repos/huggingface/datasets/issues/6808/events
https://github.com/huggingface/datasets/issues/6808
2,242,843,611
I_kwDODunzps6FrxPb
6,808
Make convert_to_parquet CLI command create script branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-04-15T06:46:07
2024-04-17T08:38:19
2024-04-17T08:38:19
MEMBER
null
As proposed by @severo, maybe we should also add this functionality to the CLI command that converts a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168 > When providing support, we sometimes suggest that users store their script in a script branch. What do you think of this alternative to deleting the files?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6808/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6806/comments
https://api.github.com/repos/huggingface/datasets/issues/6806/events
https://github.com/huggingface/datasets/pull/6806
2,239,435,074
PR_kwDODunzps5sc8Mb
6,806
Fix hf-internal-testing/dataset_with_script commit SHA in CI test
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6806). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003613 / 0.011008 (-0.007395) | 0.063226 / 0.038508 (0.024718) | 0.030653 / 0.023109 (0.007544) | 0.243981 / 0.275898 (-0.031918) | 0.268596 / 0.323480 (-0.054884) | 0.003109 / 0.007986 (-0.004876) | 0.003292 / 0.004328 (-0.001036) | 0.048857 / 0.004250 (0.044606) | 0.043929 / 0.037052 (0.006876) | 0.264002 / 0.258489 (0.005513) | 0.289028 / 0.293841 (-0.004813) | 0.028053 / 0.128546 (-0.100493) | 0.010837 / 0.075646 (-0.064809) | 0.208084 / 0.419271 (-0.211188) | 0.035592 / 0.043533 (-0.007941) | 0.252639 / 0.255139 (-0.002500) | 0.267599 / 0.283200 (-0.015600) | 0.018097 / 0.141683 (-0.123585) | 1.150811 / 1.452155 (-0.301344) | 1.219449 / 1.492716 (-0.273267) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095427 / 0.018006 (0.077421) | 0.307270 / 0.000490 (0.306781) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018713 / 0.037411 (-0.018698) | 0.065238 / 0.014526 (0.050712) | 0.074650 / 0.176557 (-0.101906) | 0.120130 / 0.737135 (-0.617005) | 0.078457 / 0.296338 (-0.217882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283666 / 0.215209 (0.068457) | 2.852818 / 2.077655 (0.775163) | 1.459790 / 1.504120 (-0.044330) | 1.326732 / 1.541195 (-0.214463) | 1.373530 / 1.468490 (-0.094960) | 0.579136 / 4.584777 (-4.005641) | 2.388369 / 3.745712 (-1.357343) | 2.813786 / 5.269862 (-2.456075) | 1.730079 / 4.565676 (-2.835597) | 0.063445 / 0.424275 (-0.360831) | 0.005355 / 0.007607 (-0.002252) | 0.340169 / 0.226044 (0.114124) | 3.391220 / 2.268929 (1.122291) | 1.838003 / 55.444624 (-53.606621) | 1.523518 / 6.876477 (-5.352959) | 1.574007 / 2.142072 (-0.568065) | 0.650265 / 4.805227 (-4.154962) | 0.117114 / 6.500664 (-6.383550) | 0.042430 / 0.075469 (-0.033039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955596 / 1.841788 (-0.886191) | 11.546544 / 8.074308 (3.472236) | 9.593613 / 10.191392 (-0.597779) | 0.141502 / 0.680424 (-0.538922) | 0.014251 / 0.534201 (-0.519950) | 0.293825 / 0.579283 (-0.285458) | 0.263088 / 0.434364 (-0.171276) | 0.325035 / 0.540337 (-0.215302) | 0.419372 / 1.386936 (-0.967564) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005567 / 0.011353 (-0.005785) | 0.003670 / 0.011008 (-0.007338) | 0.050338 / 0.038508 (0.011830) | 0.031730 / 0.023109 (0.008621) | 0.278307 / 0.275898 (0.002409) | 0.303170 / 0.323480 (-0.020310) | 0.004276 / 0.007986 (-0.003709) | 0.002720 / 0.004328 (-0.001609) | 0.048675 / 0.004250 (0.044425) | 0.041026 / 0.037052 (0.003974) | 0.291353 / 0.258489 (0.032864) | 0.318487 / 0.293841 (0.024646) | 0.029676 / 0.128546 (-0.098870) | 0.010428 / 0.075646 (-0.065218) | 0.057443 / 0.419271 (-0.361828) | 0.032735 / 0.043533 (-0.010798) | 0.282900 / 0.255139 (0.027761) | 0.297539 / 0.283200 (0.014339) | 0.018237 / 0.141683 (-0.123446) | 1.188047 / 1.452155 (-0.264107) | 1.223283 / 1.492716 (-0.269433) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.090629 / 0.018006 (0.072623) | 0.300898 / 0.000490 (0.300408) | 0.000212 / 0.000200 (0.000012) | 0.000133 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022200 / 0.037411 (-0.015211) | 0.075310 / 0.014526 (0.060784) | 0.086790 / 0.176557 (-0.089766) | 0.127392 / 0.737135 (-0.609744) | 0.088435 / 0.296338 (-0.207903) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301308 / 0.215209 (0.086099) | 2.963126 / 2.077655 (0.885471) | 1.639604 / 1.504120 (0.135484) | 1.508776 / 1.541195 (-0.032419) | 1.553280 / 1.468490 (0.084789) | 0.567256 / 4.584777 (-4.017520) | 2.445231 / 3.745712 (-1.300482) | 2.884071 / 5.269862 (-2.385791) | 1.777321 / 4.565676 (-2.788355) | 0.063659 / 0.424275 (-0.360616) | 0.005435 / 0.007607 (-0.002172) | 0.361786 / 0.226044 (0.135742) | 3.624264 / 2.268929 (1.355335) | 2.022661 / 55.444624 (-53.421963) | 1.740581 / 6.876477 (-5.135896) | 1.748503 / 2.142072 (-0.393570) | 0.660783 / 4.805227 (-4.144444) | 0.118045 / 6.500664 (-6.382619) | 0.040940 / 0.075469 (-0.034529) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.015614 / 1.841788 (-0.826174) | 12.094985 / 8.074308 (4.020677) | 10.435581 / 10.191392 (0.244189) | 0.140239 / 0.680424 (-0.540185) | 0.014992 / 0.534201 (-0.519209) | 0.290549 / 0.579283 (-0.288735) | 0.274718 / 0.434364 (-0.159645) | 0.334783 / 0.540337 (-0.205554) | 0.426540 / 1.386936 (-0.960396) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#828aff908450ac7af3a1820bb2eb7b438f2692f5 \"CML watermark\")\n" ]
2024-04-12T08:47:50
2024-04-12T09:08:23
2024-04-12T09:02:12
MEMBER
null
Fix the test by using the latest commit SHA of the hf-internal-testing/dataset_with_script dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script/commits/refs%2Fconvert%2Fparquet (see the sketch after this record). Fix #6796.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6806/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6806", "html_url": "https://github.com/huggingface/datasets/pull/6806", "diff_url": "https://github.com/huggingface/datasets/pull/6806.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6806.patch", "merged_at": "2024-04-12T09:02:12" }
true
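The fix in PR #6806 above amounts to pinning the dataset revision that the CI test reads. A rough sketch of the idea, assuming the test loads the auto-converted Parquet branch; the real test code and the concrete commit SHA are not shown in this record.

```python
# Illustration only: read the dataset at its refs/convert/parquet revision
# rather than relying on a stale, hard-coded commit SHA.
from datasets import load_dataset

ds = load_dataset(
    "hf-internal-testing/dataset_with_script",
    revision="refs/convert/parquet",  # a real test would pin the branch's latest SHA
)
```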
https://api.github.com/repos/huggingface/datasets/issues/6805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6805/comments
https://api.github.com/repos/huggingface/datasets/issues/6805/events
https://github.com/huggingface/datasets/issues/6805
2,239,034,951
I_kwDODunzps6FdPZH
6,805
Batched mapping of existing string column casts boolean to string
{ "login": "starmpcc", "id": 46891489, "node_id": "MDQ6VXNlcjQ2ODkxNDg5", "avatar_url": "https://avatars.githubusercontent.com/u/46891489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/starmpcc", "html_url": "https://github.com/starmpcc", "followers_url": "https://api.github.com/users/starmpcc/followers", "following_url": "https://api.github.com/users/starmpcc/following{/other_user}", "gists_url": "https://api.github.com/users/starmpcc/gists{/gist_id}", "starred_url": "https://api.github.com/users/starmpcc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/starmpcc/subscriptions", "organizations_url": "https://api.github.com/users/starmpcc/orgs", "repos_url": "https://api.github.com/users/starmpcc/repos", "events_url": "https://api.github.com/users/starmpcc/events{/privacy}", "received_events_url": "https://api.github.com/users/starmpcc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This seems to be hardcoded behavior in table.py `array_cast`.\r\n```python\r\nif (\r\n not allow_number_to_str\r\n and pa.types.is_string(pa_type)\r\n and (pa.types.is_floating(array.type) or pa.types.is_integer(array.type))\r\n ):\r\n raise TypeError(\r\n f\"Couldn't cast array of type {array.type} to {pa_type} since allow_number_to_str is set to {allow_number_to_str}\"\r\n )\r\n if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\r\n raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\n return array.cast(pa_type)\r\n```\r\nwhere floats and integers are not cast to string but booleans are.\r\nMaybe this should be extended to booleans?", "Thanks for reporting! @Modexus Do you want to open a PR with the suggested fix?", "I'll gladly create a PR but not sure what the behavior should be.\r\n\r\nShould a value returned from map be cast to the current feature?\r\nAt the moment this seems very inconsistent since `datetime `is also cast (this would only fix `boolean`) but nested structures are not.\r\n\r\n```python\r\ndset = Dataset.from_dict({\"a\": [\"Hello world!\"]})\r\ndset = dset.map(lambda x: {\"a\": date(2021, 1, 1)})\r\n# dset[0][\"a\"] == '2021-01-01'\r\n```\r\n```python\r\ndset = Dataset.from_dict({\"a\": [\"Hello world!\"]})\r\ndset = dset.map(lambda x: {\"a\": [True]})\r\n# dset[0][\"a\"] == [True]\r\n```\r\n\r\nIs there are reason to cast the value if the user doesn't specify it explicitly?\r\nSeems tricky that some things are cast and some are not.", "Indeed, it also makes sense to raise a `TypeError` for temporal and decimal types.\r\n\r\n> Is there are reason to cast the value if the user doesn't specify it explicitly?\r\n\r\nThis is how PyArrow's built-in `cast` behaves - it allows casting from primitive types to strings. Hence, we need `allow_number_to_str` to disallow such casts (e.g., in the [scenario](https://github.com/huggingface/datasets/blob/a3bc89d8bfd47c2a175c3ce16d92b7307cdeafd6/src/datasets/arrow_writer.py#L208) when we are \"trying a type\" to preserve the original type if there is a column in the output dataset with the same name as in the input one).\r\n\r\nPS: In the PR, we can introduce `allow_numeric_to_str` (for floats, integers, decimals, booleans) and `allow_temporal_to_str` (for dates, timestamps, ...) and deprecate `allow_number_to_str` to make it clear what each parameter does.", "Would just `allow_primitive_to_str` work?\r\nThis should include all `numeric`, `boolean `and `temporal`formats.\r\n\r\nNote that at least in the [ C++ implementation](https://arrow.apache.org/docs/cpp/api/utilities.html#_CPPv410is_numericRK8DataType) `numeric `seems to exclude `boolean`.\r\n[](https://arrow.apache.org/docs/cpp/api/utilities.html#_CPPv410is_numericRK8DataType)", "Indeed, `allow_primitive_to_str` sounds better.\r\n\r\nPS: PyArrow's `pa.types.is_primitive` returns `False` for decimal types, but I think is okay for us to treat decimals as primitive types (or we can have `allow_decimal_to_str` to be fully consistent with PyArrow)" ]
2024-04-12T04:21:41
2024-04-15T12:55:19
null
NONE
null
### Describe the bug Let the dataset contain a column named 'a', which is of the string type. If 'a' is converted to a boolean using batched mapping, the mapper automatically casts the boolean to a string (e.g., True -> 'true'). This only happens when the original column name and the mapped column name are identical (a sketch of the fix discussed in the comments follows this record). Thank you! ### Steps to reproduce the bug ```python from datasets import Dataset dset = Dataset.from_dict({'a': ['11', '22']}) dset = dset.map(lambda x: {'a': [True for _ in x['a']]}, batched=True) print(dset['a']) ``` ``` > ['true', 'true'] ``` ### Expected behavior [True, True] ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.4 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6805/timeline
null
null
null
null
false
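The guard that the commenters sketch for issue #6805 above can be pictured as follows. It assumes the `allow_primitive_to_str` parameter name floated in the thread and a standalone helper; the actual change to `datasets.table.array_cast` may differ.

```python
# Hypothetical widening of the array_cast string-cast guard discussed in #6805.
import pyarrow as pa


def check_string_cast(array: pa.Array, pa_type: pa.DataType, allow_primitive_to_str: bool) -> None:
    """Raise instead of silently casting primitive arrays (incl. booleans) to string."""
    primitive_like = (
        pa.types.is_floating(array.type)
        or pa.types.is_integer(array.type)
        or pa.types.is_boolean(array.type)   # the case reported in this issue
        or pa.types.is_temporal(array.type)  # dates/timestamps are also silently cast today
        or pa.types.is_decimal(array.type)
    )
    if not allow_primitive_to_str and pa.types.is_string(pa_type) and primitive_like:
        raise TypeError(
            f"Couldn't cast array of type {array.type} to {pa_type} "
            f"since allow_primitive_to_str is set to {allow_primitive_to_str}"
        )
```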
https://api.github.com/repos/huggingface/datasets/issues/6804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6804/comments
https://api.github.com/repos/huggingface/datasets/issues/6804/events
https://github.com/huggingface/datasets/pull/6804
2,238,035,124
PR_kwDODunzps5sYJFF
6,804
Fix --repo-type order in cli upload docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6804). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005222 / 0.011353 (-0.006131) | 0.003306 / 0.011008 (-0.007702) | 0.063326 / 0.038508 (0.024818) | 0.031371 / 0.023109 (0.008261) | 0.244947 / 0.275898 (-0.030951) | 0.264141 / 0.323480 (-0.059339) | 0.004186 / 0.007986 (-0.003800) | 0.002676 / 0.004328 (-0.001653) | 0.048690 / 0.004250 (0.044440) | 0.045172 / 0.037052 (0.008120) | 0.256597 / 0.258489 (-0.001892) | 0.284348 / 0.293841 (-0.009493) | 0.026855 / 0.128546 (-0.101691) | 0.009947 / 0.075646 (-0.065699) | 0.206311 / 0.419271 (-0.212961) | 0.035178 / 0.043533 (-0.008355) | 0.251501 / 0.255139 (-0.003638) | 0.261314 / 0.283200 (-0.021886) | 0.018000 / 0.141683 (-0.123683) | 1.144588 / 1.452155 (-0.307566) | 1.193627 / 1.492716 (-0.299089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091629 / 0.018006 (0.073623) | 0.298959 / 0.000490 (0.298469) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018053 / 0.037411 (-0.019358) | 0.061280 / 0.014526 (0.046754) | 0.074138 / 0.176557 (-0.102419) | 0.119048 / 0.737135 (-0.618088) | 0.074572 / 0.296338 (-0.221767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282440 / 0.215209 (0.067231) | 2.762017 / 2.077655 (0.684362) | 1.474452 / 1.504120 (-0.029668) | 1.361489 / 1.541195 (-0.179706) | 1.359696 / 1.468490 (-0.108795) | 0.569640 / 4.584777 (-4.015137) | 2.398098 / 3.745712 (-1.347614) | 2.731399 / 5.269862 (-2.538462) | 1.697432 / 4.565676 (-2.868245) | 0.063330 / 0.424275 (-0.360945) | 0.005416 / 0.007607 (-0.002191) | 0.346510 / 0.226044 (0.120465) | 3.276473 / 2.268929 (1.007544) | 1.837605 / 55.444624 (-53.607019) | 1.538654 / 6.876477 (-5.337822) | 1.553943 / 2.142072 (-0.588129) | 0.640571 / 4.805227 (-4.164657) | 0.116736 / 6.500664 (-6.383928) | 0.041701 / 0.075469 (-0.033768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975846 / 1.841788 (-0.865942) | 11.151727 / 8.074308 (3.077419) | 9.436281 / 10.191392 (-0.755111) | 0.141027 / 0.680424 (-0.539397) | 0.014389 / 0.534201 (-0.519812) | 0.285575 / 0.579283 (-0.293708) | 0.263753 / 0.434364 (-0.170610) | 0.321893 / 0.540337 (-0.218444) | 0.420280 / 1.386936 (-0.966656) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005148 / 0.011353 (-0.006205) | 0.003264 / 0.011008 (-0.007744) | 0.049828 / 0.038508 (0.011320) | 0.031234 / 0.023109 (0.008125) | 0.271079 / 0.275898 (-0.004819) | 0.295256 / 0.323480 (-0.028224) | 0.004128 / 0.007986 (-0.003857) | 0.002637 / 0.004328 (-0.001692) | 0.048145 / 0.004250 (0.043895) | 0.039691 / 0.037052 (0.002638) | 0.287229 / 0.258489 (0.028740) | 0.310477 / 0.293841 (0.016636) | 0.028936 / 0.128546 (-0.099610) | 0.010392 / 0.075646 (-0.065254) | 0.057774 / 0.419271 (-0.361497) | 0.032557 / 0.043533 (-0.010975) | 0.275146 / 0.255139 (0.020007) | 0.291283 / 0.283200 (0.008084) | 0.017724 / 0.141683 (-0.123958) | 1.186831 / 1.452155 (-0.265324) | 1.220086 / 1.492716 (-0.272630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093575 / 0.018006 (0.075569) | 0.297198 / 0.000490 (0.296709) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021683 / 0.037411 (-0.015728) | 0.075347 / 0.014526 (0.060821) | 0.085453 / 0.176557 (-0.091103) | 0.125422 / 0.737135 (-0.611713) | 0.087185 / 0.296338 (-0.209153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301520 / 0.215209 (0.086311) | 2.951614 / 2.077655 (0.873959) | 1.659897 / 1.504120 (0.155777) | 1.528097 / 1.541195 (-0.013097) | 1.552031 / 1.468490 (0.083541) | 0.576297 / 4.584777 (-4.008480) | 2.492349 / 3.745712 (-1.253363) | 2.805999 / 5.269862 (-2.463862) | 1.757556 / 4.565676 (-2.808121) | 0.064940 / 0.424275 (-0.359335) | 0.005314 / 0.007607 (-0.002293) | 0.358838 / 0.226044 (0.132793) | 3.576890 / 2.268929 (1.307961) | 2.030788 / 55.444624 (-53.413837) | 1.743650 / 6.876477 (-5.132826) | 1.745229 / 2.142072 (-0.396844) | 0.647840 / 4.805227 (-4.157387) | 0.116637 / 6.500664 (-6.384027) | 0.040555 / 0.075469 (-0.034915) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009130 / 1.841788 (-0.832657) | 11.951145 / 8.074308 (3.876836) | 9.968355 / 10.191392 (-0.223037) | 0.139959 / 0.680424 (-0.540465) | 0.015985 / 0.534201 (-0.518216) | 0.286594 / 0.579283 (-0.292689) | 0.275805 / 0.434364 (-0.158559) | 0.328484 / 0.540337 (-0.211854) | 0.419818 / 1.386936 (-0.967118) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#89a58cdfc59ecc83662a47b638cf82a5b99f4a48 \"CML watermark\")\n" ]
2024-04-11T15:39:09
2024-04-11T16:24:57
2024-04-11T16:18:47
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6804/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6804", "html_url": "https://github.com/huggingface/datasets/pull/6804", "diff_url": "https://github.com/huggingface/datasets/pull/6804.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6804.patch", "merged_at": "2024-04-11T16:18:47" }
true