| Column | Type | Stats |
|---|---|---|
| id | int64 | 599M to 2.47B |
| url | string | lengths 58 to 61 |
| repository_url | string | 1 value |
| events_url | string | lengths 65 to 68 |
| labels | list | lengths 0 to 4 |
| active_lock_reason | null | |
| updated_at | string | lengths 20 to 20 |
| assignees | list | lengths 0 to 4 |
| html_url | string | lengths 46 to 51 |
| author_association | string | 4 values |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | sequence | lengths 0 to 30 |
| title | string | lengths 1 to 290 |
| reactions | dict | |
| node_id | string | lengths 18 to 32 |
| pull_request | dict | |
| created_at | string | lengths 20 to 20 |
| comments_url | string | lengths 67 to 70 |
| body | string | lengths 0 to 228k, nullable (βŒ€) |
| user | dict | |
| labels_url | string | lengths 72 to 75 |
| timeline_url | string | lengths 67 to 70 |
| state | string | 2 values |
| locked | bool | 1 class |
| number | int64 | 1 to 7.11k |
| performed_via_github_app | null | |
| closed_at | string | lengths 20 to 20, nullable (βŒ€) |
| assignee | dict | |
| is_pull_request | bool | 2 classes |

Each record below lists its field values in this column order.
2,379,588,676
https://api.github.com/repos/huggingface/datasets/issues/7007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7007/events
[]
null
2024-06-28T05:31:21Z
[]
https://github.com/huggingface/datasets/pull/7007
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7007). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005225 / 0.011353 (-0.006128) | 0.003856 / 0.011008 (-0.007152) | 0.063455 / 0.038508 (0.024947) | 0.030184 / 0.023109 (0.007075) | 0.248518 / 0.275898 (-0.027380) | 0.270596 / 0.323480 (-0.052884) | 0.003185 / 0.007986 (-0.004800) | 0.002739 / 0.004328 (-0.001590) | 0.049379 / 0.004250 (0.045129) | 0.043190 / 0.037052 (0.006138) | 0.257181 / 0.258489 (-0.001308) | 0.283385 / 0.293841 (-0.010456) | 0.029702 / 0.128546 (-0.098844) | 0.012022 / 0.075646 (-0.063624) | 0.204531 / 0.419271 (-0.214741) | 0.035621 / 0.043533 (-0.007912) | 0.257745 / 0.255139 (0.002606) | 0.269033 / 0.283200 (-0.014167) | 0.019283 / 0.141683 (-0.122400) | 1.152477 / 1.452155 (-0.299678) | 1.180929 / 1.492716 (-0.311788) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094520 / 0.018006 (0.076514) | 0.299383 / 0.000490 (0.298893) | 0.000224 / 0.000200 (0.000024) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019267 / 0.037411 (-0.018145) | 0.062458 / 0.014526 (0.047933) | 0.075743 / 0.176557 (-0.100814) | 0.128564 / 0.737135 (-0.608572) | 0.075549 / 0.296338 (-0.220789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288809 / 0.215209 (0.073600) | 2.854469 / 2.077655 (0.776814) | 1.581731 / 1.504120 (0.077611) | 1.460196 / 1.541195 (-0.080999) | 1.485567 / 1.468490 (0.017077) | 0.708824 / 4.584777 (-3.875953) | 2.362389 / 3.745712 (-1.383323) | 2.980804 / 5.269862 (-2.289057) | 1.918788 / 4.565676 (-2.646888) | 0.088389 / 0.424275 (-0.335886) | 0.005266 / 0.007607 (-0.002341) | 0.348598 / 0.226044 (0.122554) | 3.443202 / 2.268929 (1.174273) | 1.979311 / 55.444624 (-53.465314) | 1.655774 / 6.876477 (-5.220702) | 1.685538 / 2.142072 (-0.456535) | 0.788695 / 4.805227 (-4.016532) | 0.138403 / 6.500664 (-6.362261) | 0.043288 / 0.075469 (-0.032181) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975874 / 1.841788 (-0.865913) | 11.506991 / 8.074308 (3.432683) | 9.640619 / 10.191392 (-0.550773) | 0.131897 / 0.680424 (-0.548527) | 0.014912 / 0.534201 (-0.519289) | 0.304173 / 0.579283 (-0.275110) | 0.262483 / 0.434364 (-0.171881) | 0.342636 / 0.540337 (-0.197701) | 0.440337 / 1.386936 (-0.946599) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005961 / 0.011353 (-0.005392) | 0.004023 / 0.011008 (-0.006985) | 0.050230 / 0.038508 (0.011722) | 0.033204 / 0.023109 (0.010095) | 0.263462 / 0.275898 (-0.012436) | 0.287517 / 0.323480 (-0.035963) | 0.004432 / 0.007986 (-0.003553) | 0.002938 / 0.004328 (-0.001390) | 0.049297 / 0.004250 (0.045047) | 0.041166 / 0.037052 (0.004113) | 0.279220 / 0.258489 (0.020731) | 0.312857 / 0.293841 (0.019016) | 0.032567 / 0.128546 (-0.095979) | 0.012566 / 0.075646 (-0.063080) | 0.060579 / 0.419271 (-0.358692) | 0.033760 / 0.043533 (-0.009773) | 0.264219 / 0.255139 (0.009080) | 0.282929 / 0.283200 (-0.000270) | 0.017434 / 0.141683 (-0.124248) | 1.148472 / 1.452155 (-0.303683) | 1.247434 / 1.492716 (-0.245282) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.004566 / 0.018006 (-0.013440) | 0.296110 / 0.000490 (0.295621) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022514 / 0.037411 (-0.014897) | 0.076554 / 0.014526 (0.062029) | 0.090427 / 0.176557 (-0.086130) | 0.128611 / 0.737135 (-0.608524) | 0.090748 / 0.296338 (-0.205590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.315051 / 0.215209 (0.099842) | 3.099662 / 2.077655 (1.022007) | 1.706009 / 1.504120 (0.201889) | 1.574637 / 1.541195 (0.033442) | 1.592454 / 1.468490 (0.123964) | 0.724699 / 4.584777 (-3.860078) | 0.949954 / 3.745712 (-2.795758) | 2.818915 / 5.269862 (-2.450946) | 1.931290 / 4.565676 (-2.634386) | 0.079308 / 0.424275 (-0.344967) | 0.005414 / 0.007607 (-0.002193) | 0.373547 / 0.226044 (0.147503) | 3.742222 / 2.268929 (1.473293) | 2.076239 / 55.444624 (-53.368385) | 1.772359 / 6.876477 (-5.104118) | 1.894369 / 2.142072 (-0.247703) | 0.803650 / 4.805227 (-4.001578) | 0.136995 / 6.500664 (-6.363669) | 0.041565 / 0.075469 (-0.033905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989806 / 1.841788 (-0.851982) | 12.151497 / 8.074308 (4.077189) | 10.188075 / 10.191392 (-0.003317) | 0.141194 / 0.680424 (-0.539230) | 0.016351 / 0.534201 (-0.517850) | 0.303242 / 0.579283 (-0.276041) | 0.127446 / 0.434364 (-0.306918) | 0.339806 / 0.540337 (-0.200532) | 0.443527 / 1.386936 (-0.943409) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dd631431cb73c3ca434dfd6b115a6c30c5a665a5 \"CML watermark\")\n" ]
Fix CI by temporarily pinning ruff < 0.5.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7007/reactions" }
PR_kwDODunzps5z2Q68
{ "diff_url": "https://github.com/huggingface/datasets/pull/7007.diff", "html_url": "https://github.com/huggingface/datasets/pull/7007", "merged_at": "2024-06-28T05:25:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/7007.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7007" }
2024-06-28T05:09:17Z
https://api.github.com/repos/huggingface/datasets/issues/7007/comments
As a hotfix for CI, temporarily pin ruff upper version < 0.5.0. Fix #7006. Revert once root cause is fixed.
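A pin like the one this PR describes would look roughly as follows in the project's dev requirements; the variable name and file layout here are assumptions for illustration, not the actual diff:

```python
# setup.py (sketch): temporary upper bound on ruff, to be reverted once the
# E721 findings are fixed. QUALITY_REQUIRE is an assumed name, not the real diff.
QUALITY_REQUIRE = [
    "ruff>=0.3.0,<0.5.0",  # ruff 0.5.0 starts flagging E721 and breaks CI
]
```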
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7007/timeline
closed
false
7,007
null
2024-06-28T05:25:17Z
null
true
2,379,581,543
https://api.github.com/repos/huggingface/datasets/issues/7006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7006/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-06-28T05:25:18Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/7006
MEMBER
completed
null
null
[]
CI is broken after ruff-0.5.0: E721
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7006/reactions" }
I_kwDODunzps6N1Yhn
null
2024-06-28T05:03:28Z
https://api.github.com/repos/huggingface/datasets/issues/7006/comments
After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule. See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983 > src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
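For reference, the pattern E721 flags and its standard fixes look like this (an illustrative snippet, not the actual code at features.py:844):

```python
value = {}

# What E721 flags: equality comparison between types.
if type(value) == dict:  # E721
    pass

# Preferred forms:
if type(value) is dict:  # exact type match
    pass
if isinstance(value, dict):  # also accepts subclasses
    pass
```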
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7006/timeline
closed
false
7,006
null
2024-06-28T05:25:18Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,378,424,349
https://api.github.com/repos/huggingface/datasets/issues/7005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7005/events
[]
null
2024-06-28T09:56:19Z
[]
https://github.com/huggingface/datasets/issues/7005
NONE
completed
null
null
[ "Hi ! `data_dir=` is for directories, can you try using `data_files=` instead ?", "If you are trying to load your image dataset from a local folder, you should replace \"data_dir=path/to/jsonl/metadata.jsonl\" with the real folder path in your computer.\r\n\r\nhttps://huggingface.co/docs/datasets/en/image_load#imagefolder", "Ah yes. My bad. I was giving file name. I should have given the folder directory as the path. That solved my issue. Thank you @albertvillanova and @lhoestq. " ]
EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7005/reactions" }
I_kwDODunzps6Nw-Ad
null
2024-06-27T15:08:26Z
https://api.github.com/repos/huggingface/datasets/issues/7005/comments
### Describe the bug while trying to load custom dataset from jsonl file, I get the error: "metadata.jsonl doesn't contain any data files" ### Steps to reproduce the bug This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all images mentioned in that json(l) file. Through below mentioned command I am trying to load_dataset so that I can upload it as mentioned here on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub). ```` from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl") ```` error: ```` EmptyDatasetError Traceback (most recent call last) Cell In[18], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("imagefolder", 4 data_dir="path/to/jsonl/file/metadata.jsonl") 5 dataset[0]["objects"] File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2589 verification_mode = VerificationMode( 2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2591 ) 2593 # Create a dataset builder -> 2594 builder_instance = load_dataset_builder( 2595 path=path, 2596 name=name, 2597 data_dir=data_dir, 2598 data_files=data_files, 2599 cache_dir=cache_dir, 2600 features=features, 2601 download_config=download_config, 2602 download_mode=download_mode, 2603 revision=revision, 2604 token=token, 2605 storage_options=storage_options, 2606 trust_remote_code=trust_remote_code, 2607 _require_default_config_name=name is None, 2608 **config_kwargs, 2609 ) 2611 # Return iterable dataset in case of streaming 2612 if streaming: File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2264 download_config = download_config.copy() if download_config else DownloadConfig() 2265 download_config.storage_options.update(storage_options) -> 2266 dataset_module = dataset_module_factory( 2267 path, 2268 revision=revision, 2269 download_config=download_config, 2270 download_mode=download_mode, 2271 data_dir=data_dir, 2272 data_files=data_files, 2273 cache_dir=cache_dir, 2274 trust_remote_code=trust_remote_code, 2275 _require_default_config_name=_require_default_config_name, 2276 _require_custom_configs=bool(config_kwargs), 2277 ) 2278 # Get dataset builder class from the processing script 2279 builder_kwargs = dataset_module.builder_kwargs File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1782 # We have several ways to get a dataset builder: 1783 # 1784 # - if path is the name of a packaged dataset module (...) 
1796 1797 # Try packaged 1798 if path in _PACKAGED_DATASETS_MODULES: 1799 return PackagedDatasetModuleFactory( 1800 path, 1801 data_dir=data_dir, 1802 data_files=data_files, 1803 download_config=download_config, 1804 download_mode=download_mode, -> 1805 ).get_module() 1806 # Try locally 1807 elif path.endswith(filename): File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self) 1135 def get_module(self) -> DatasetModule: 1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix() 1137 patterns = ( 1138 sanitize_patterns(self.data_files) 1139 if self.data_files is not None -> 1140 else get_data_patterns(base_path, download_config=self.download_config) 1141 ) 1142 data_files = DataFilesDict.from_patterns( 1143 patterns, 1144 download_config=self.download_config, 1145 base_path=base_path, 1146 ) 1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config) 501 return _get_data_files_patterns(resolver) 502 except FileNotFoundError: --> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files` ``` ### Expected behavior It should be able load the whole file in a format of "dataset" inside the dataset variable. But it gives error "The directory at "path/to/jsonl/metadata.jsonl" doesn't contain any data files." ### Environment info I am using conda environment.
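As the comments above suggest, the fix is to pass the containing folder as `data_dir`, or to name the file via `data_files`; a minimal sketch with placeholder paths:

```python
from datasets import load_dataset

# Pass the folder that holds the images and metadata.jsonl, not the file itself.
dataset = load_dataset("imagefolder", data_dir="path/to/image_folder")

# Or, as suggested in the first comment, name the metadata file explicitly.
dataset = load_dataset("imagefolder", data_files="path/to/image_folder/metadata.jsonl")
```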
{ "avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4", "events_url": "https://api.github.com/users/Aki1991/events{/privacy}", "followers_url": "https://api.github.com/users/Aki1991/followers", "following_url": "https://api.github.com/users/Aki1991/following{/other_user}", "gists_url": "https://api.github.com/users/Aki1991/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aki1991", "id": 117731544, "login": "Aki1991", "node_id": "U_kgDOBwRw2A", "organizations_url": "https://api.github.com/users/Aki1991/orgs", "received_events_url": "https://api.github.com/users/Aki1991/received_events", "repos_url": "https://api.github.com/users/Aki1991/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aki1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aki1991/subscriptions", "type": "User", "url": "https://api.github.com/users/Aki1991" }
https://api.github.com/repos/huggingface/datasets/issues/7005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7005/timeline
closed
false
7,005
null
2024-06-28T09:56:19Z
null
false
2,376,064,264
https://api.github.com/repos/huggingface/datasets/issues/7004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7004/events
[]
null
2024-06-29T00:15:49Z
[]
https://github.com/huggingface/datasets/pull/7004
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7004). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005188 / 0.011353 (-0.006165) | 0.003812 / 0.011008 (-0.007196) | 0.062408 / 0.038508 (0.023900) | 0.030734 / 0.023109 (0.007625) | 0.236528 / 0.275898 (-0.039370) | 0.267684 / 0.323480 (-0.055796) | 0.003182 / 0.007986 (-0.004804) | 0.004009 / 0.004328 (-0.000319) | 0.048966 / 0.004250 (0.044715) | 0.045259 / 0.037052 (0.008207) | 0.250586 / 0.258489 (-0.007903) | 0.287079 / 0.293841 (-0.006762) | 0.029235 / 0.128546 (-0.099311) | 0.012216 / 0.075646 (-0.063431) | 0.203864 / 0.419271 (-0.215408) | 0.036324 / 0.043533 (-0.007209) | 0.245180 / 0.255139 (-0.009959) | 0.270327 / 0.283200 (-0.012872) | 0.018676 / 0.141683 (-0.123007) | 1.115568 / 1.452155 (-0.336586) | 1.183267 / 1.492716 (-0.309449) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094307 / 0.018006 (0.076301) | 0.299071 / 0.000490 (0.298581) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018336 / 0.037411 (-0.019076) | 0.062973 / 0.014526 (0.048447) | 0.074137 / 0.176557 (-0.102420) | 0.120553 / 0.737135 (-0.616582) | 0.075411 / 0.296338 (-0.220927) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284751 / 0.215209 (0.069542) | 2.789294 / 2.077655 (0.711640) | 1.457789 / 1.504120 (-0.046331) | 1.339140 / 1.541195 (-0.202055) | 1.341685 / 1.468490 (-0.126805) | 0.714928 / 4.584777 (-3.869849) | 2.361197 / 3.745712 (-1.384516) | 2.791569 / 5.269862 (-2.478293) | 1.892261 / 4.565676 (-2.673416) | 0.077954 / 0.424275 (-0.346321) | 0.005454 / 0.007607 (-0.002153) | 0.350766 / 0.226044 (0.124721) | 3.416749 / 2.268929 (1.147820) | 1.835377 / 55.444624 (-53.609247) | 1.506456 / 6.876477 (-5.370020) | 1.642728 / 2.142072 (-0.499344) | 0.791543 / 4.805227 (-4.013684) | 0.133102 / 6.500664 (-6.367562) | 0.042572 / 0.075469 (-0.032897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977958 / 1.841788 (-0.863830) | 11.438271 / 8.074308 (3.363963) | 9.305719 / 10.191392 (-0.885673) | 0.141239 / 0.680424 (-0.539185) | 0.014330 / 0.534201 (-0.519871) | 0.302201 / 0.579283 (-0.277082) | 0.261688 / 0.434364 (-0.172676) | 0.338752 / 0.540337 (-0.201586) | 0.468466 / 1.386936 (-0.918470) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005629 / 0.011353 (-0.005723) | 0.003997 / 0.011008 (-0.007011) | 0.050447 / 0.038508 (0.011939) | 0.031694 / 0.023109 (0.008585) | 0.277392 / 0.275898 (0.001494) | 0.290440 / 0.323480 (-0.033040) | 0.004403 / 0.007986 (-0.003583) | 0.002851 / 0.004328 (-0.001478) | 0.048593 / 0.004250 (0.044343) | 0.040622 / 0.037052 (0.003570) | 0.282640 / 0.258489 (0.024151) | 0.309390 / 0.293841 (0.015549) | 0.031453 / 0.128546 (-0.097094) | 0.012424 / 0.075646 (-0.063223) | 0.060311 / 0.419271 (-0.358960) | 0.033195 / 0.043533 (-0.010338) | 0.266867 / 0.255139 (0.011728) | 0.281966 / 0.283200 (-0.001234) | 0.018026 / 0.141683 (-0.123657) | 1.136273 / 1.452155 (-0.315882) | 1.141643 / 1.492716 (-0.351073) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095011 / 0.018006 (0.077005) | 0.300571 / 0.000490 (0.300082) | 0.000220 / 0.000200 (0.000020) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022903 / 0.037411 (-0.014508) | 0.077130 / 0.014526 (0.062604) | 0.089576 / 0.176557 (-0.086980) | 0.127156 / 0.737135 (-0.609980) | 0.090008 / 0.296338 (-0.206331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289270 / 0.215209 (0.074061) | 2.848239 / 2.077655 (0.770585) | 1.546788 / 1.504120 (0.042668) | 1.417275 / 1.541195 (-0.123920) | 1.456214 / 1.468490 (-0.012276) | 0.716688 / 4.584777 (-3.868088) | 0.940242 / 3.745712 (-2.805470) | 2.911956 / 5.269862 (-2.357906) | 1.871358 / 4.565676 (-2.694318) | 0.077553 / 0.424275 (-0.346722) | 0.005279 / 0.007607 (-0.002328) | 0.343338 / 0.226044 (0.117294) | 3.368694 / 2.268929 (1.099766) | 1.896765 / 55.444624 (-53.547859) | 1.612414 / 6.876477 (-5.264063) | 1.615934 / 2.142072 (-0.526138) | 0.794016 / 4.805227 (-4.011212) | 0.131821 / 6.500664 (-6.368843) | 0.041495 / 0.075469 (-0.033975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.003418 / 1.841788 (-0.838370) | 12.073906 / 8.074308 (3.999598) | 10.166291 / 10.191392 (-0.025101) | 0.131224 / 0.680424 (-0.549200) | 0.015246 / 0.534201 (-0.518955) | 0.299835 / 0.579283 (-0.279448) | 0.124308 / 0.434364 (-0.310056) | 0.336414 / 0.540337 (-0.203924) | 0.429569 / 1.386936 (-0.957367) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#83d28601fad73755b74314a9bc1e327005514d54 \"CML watermark\")\n", "@lhoestq Thank you!" ]
Fix WebDatasets KeyError for user-defined Features when a field is missing in an example
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7004/reactions" }
PR_kwDODunzps5zrIYR
{ "diff_url": "https://github.com/huggingface/datasets/pull/7004.diff", "html_url": "https://github.com/huggingface/datasets/pull/7004", "merged_at": "2024-06-28T09:30:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/7004.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7004" }
2024-06-26T18:58:05Z
https://api.github.com/repos/huggingface/datasets/issues/7004/comments
Fixes: https://github.com/huggingface/datasets/issues/6900 Not sure if this needs any additional stuff before merging
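The failure mode named in the title can be illustrated generically; this sketch shows the KeyError pattern and a tolerant alternative, and is not the actual datasets code:

```python
features = {"image": "binary", "caption": "string"}  # user-defined schema (illustrative)
example = {"image": b"..."}  # this sample is missing the "caption" field

# Strict indexing raises KeyError for the missing field:
# values = {name: example[name] for name in features}  # KeyError: 'caption'

# Filling missing fields with None avoids the crash:
values = {name: example.get(name) for name in features}
print(values)  # {'image': b'...', 'caption': None}
```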
{ "avatar_url": "https://avatars.githubusercontent.com/u/10626398?v=4", "events_url": "https://api.github.com/users/ProGamerGov/events{/privacy}", "followers_url": "https://api.github.com/users/ProGamerGov/followers", "following_url": "https://api.github.com/users/ProGamerGov/following{/other_user}", "gists_url": "https://api.github.com/users/ProGamerGov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ProGamerGov", "id": 10626398, "login": "ProGamerGov", "node_id": "MDQ6VXNlcjEwNjI2Mzk4", "organizations_url": "https://api.github.com/users/ProGamerGov/orgs", "received_events_url": "https://api.github.com/users/ProGamerGov/received_events", "repos_url": "https://api.github.com/users/ProGamerGov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ProGamerGov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ProGamerGov/subscriptions", "type": "User", "url": "https://api.github.com/users/ProGamerGov" }
https://api.github.com/repos/huggingface/datasets/issues/7004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7004/timeline
closed
false
7,004
null
2024-06-28T09:30:12Z
null
true
2,373,084,132
https://api.github.com/repos/huggingface/datasets/issues/7003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7003/events
[]
null
2024-06-25T16:16:11Z
[]
https://github.com/huggingface/datasets/pull/7003
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005633 / 0.011353 (-0.005720) | 0.004366 / 0.011008 (-0.006642) | 0.064081 / 0.038508 (0.025573) | 0.031790 / 0.023109 (0.008681) | 0.239270 / 0.275898 (-0.036628) | 0.267424 / 0.323480 (-0.056055) | 0.003229 / 0.007986 (-0.004756) | 0.002849 / 0.004328 (-0.001479) | 0.050147 / 0.004250 (0.045897) | 0.046119 / 0.037052 (0.009066) | 0.253506 / 0.258489 (-0.004983) | 0.280464 / 0.293841 (-0.013377) | 0.030561 / 0.128546 (-0.097985) | 0.012258 / 0.075646 (-0.063388) | 0.212222 / 0.419271 (-0.207049) | 0.036695 / 0.043533 (-0.006838) | 0.242141 / 0.255139 (-0.012998) | 0.263014 / 0.283200 (-0.020186) | 0.020008 / 0.141683 (-0.121675) | 1.103701 / 1.452155 (-0.348453) | 1.151641 / 1.492716 (-0.341076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095884 / 0.018006 (0.077878) | 0.300858 / 0.000490 (0.300368) | 0.000209 / 0.000200 (0.000009) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018713 / 0.037411 (-0.018698) | 0.063659 / 0.014526 (0.049134) | 0.074588 / 0.176557 (-0.101968) | 0.120779 / 0.737135 (-0.616356) | 0.077768 / 0.296338 (-0.218570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281680 / 0.215209 (0.066471) | 2.754658 / 2.077655 (0.677003) | 1.454036 / 1.504120 (-0.050084) | 1.333153 / 1.541195 (-0.208042) | 1.383616 / 1.468490 (-0.084874) | 0.728933 / 4.584777 (-3.855844) | 2.374989 / 3.745712 (-1.370723) | 2.990824 / 5.269862 (-2.279038) | 1.899065 / 4.565676 (-2.666612) | 0.078657 / 0.424275 (-0.345619) | 0.005162 / 0.007607 (-0.002445) | 0.335883 / 0.226044 (0.109838) | 3.323047 / 2.268929 (1.054119) | 1.848290 / 55.444624 (-53.596335) | 1.519510 / 6.876477 (-5.356966) | 1.563608 / 2.142072 (-0.578465) | 0.807890 / 4.805227 (-3.997337) | 0.134517 / 6.500664 (-6.366147) | 0.042208 / 0.075469 (-0.033262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963634 / 1.841788 (-0.878154) | 11.617868 / 8.074308 (3.543560) | 9.804648 / 10.191392 (-0.386744) | 0.142311 / 0.680424 (-0.538113) | 0.013748 / 0.534201 (-0.520453) | 0.300309 / 0.579283 (-0.278974) | 0.268214 / 0.434364 (-0.166150) | 0.342406 / 0.540337 (-0.197931) | 0.430315 / 1.386936 (-0.956621) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005533 / 0.011353 (-0.005820) | 0.004208 / 0.011008 (-0.006800) | 0.051732 / 0.038508 (0.013224) | 0.031296 / 0.023109 (0.008187) | 0.275091 / 0.275898 (-0.000807) | 0.296889 / 0.323480 (-0.026591) | 0.004363 / 0.007986 (-0.003623) | 0.002807 / 0.004328 (-0.001522) | 0.049727 / 0.004250 (0.045476) | 0.039798 / 0.037052 (0.002746) | 0.284379 / 0.258489 (0.025890) | 0.317281 / 0.293841 (0.023440) | 0.031286 / 0.128546 (-0.097261) | 0.012384 / 0.075646 (-0.063263) | 0.061619 / 0.419271 (-0.357652) | 0.032974 / 0.043533 (-0.010559) | 0.274313 / 0.255139 (0.019174) | 0.296142 / 0.283200 (0.012943) | 0.017391 / 0.141683 (-0.124291) | 1.148369 / 1.452155 (-0.303786) | 1.171539 / 1.492716 (-0.321177) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097309 / 0.018006 (0.079302) | 0.304701 / 0.000490 (0.304212) | 0.000208 / 0.000200 (0.000008) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022382 / 0.037411 (-0.015030) | 0.077000 / 0.014526 (0.062474) | 0.088165 / 0.176557 (-0.088392) | 0.129060 / 0.737135 (-0.608075) | 0.090128 / 0.296338 (-0.206211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285308 / 0.215209 (0.070099) | 2.816680 / 2.077655 (0.739025) | 1.542418 / 1.504120 (0.038298) | 1.418567 / 1.541195 (-0.122628) | 1.447018 / 1.468490 (-0.021472) | 0.737055 / 4.584777 (-3.847722) | 0.968285 / 3.745712 (-2.777427) | 2.880120 / 5.269862 (-2.389741) | 1.921813 / 4.565676 (-2.643864) | 0.079110 / 0.424275 (-0.345165) | 0.005826 / 0.007607 (-0.001781) | 0.336441 / 0.226044 (0.110397) | 3.326384 / 2.268929 (1.057456) | 1.929205 / 55.444624 (-53.515419) | 1.618215 / 6.876477 (-5.258261) | 1.769688 / 2.142072 (-0.372385) | 0.808009 / 4.805227 (-3.997219) | 0.136384 / 6.500664 (-6.364280) | 0.041332 / 0.075469 (-0.034137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010884 / 1.841788 (-0.830903) | 12.266118 / 8.074308 (4.191810) | 10.287424 / 10.191392 (0.096032) | 0.143172 / 0.680424 (-0.537251) | 0.015798 / 0.534201 (-0.518403) | 0.301604 / 0.579283 (-0.277679) | 0.131079 / 0.434364 (-0.303285) | 0.338396 / 0.540337 (-0.201941) | 0.460721 / 1.386936 (-0.926215) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1e1d31387aa594b2e745c8ed8964962c134d203d \"CML watermark\")\n" ]
minor fix for bfloat16
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7003/reactions" }
PR_kwDODunzps5zhRAK
{ "diff_url": "https://github.com/huggingface/datasets/pull/7003.diff", "html_url": "https://github.com/huggingface/datasets/pull/7003", "merged_at": "2024-06-25T16:10:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/7003.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7003" }
2024-06-25T16:10:04Z
https://api.github.com/repos/huggingface/datasets/issues/7003/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7003/timeline
closed
false
7,003
null
2024-06-25T16:10:10Z
null
true
2,373,010,351
https://api.github.com/repos/huggingface/datasets/issues/7002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7002/events
[]
null
2024-06-25T16:10:16Z
[]
https://github.com/huggingface/datasets/pull/7002
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7002). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005321 / 0.011353 (-0.006032) | 0.003495 / 0.011008 (-0.007514) | 0.065577 / 0.038508 (0.027069) | 0.030876 / 0.023109 (0.007767) | 0.255216 / 0.275898 (-0.020682) | 0.265111 / 0.323480 (-0.058368) | 0.003149 / 0.007986 (-0.004837) | 0.004062 / 0.004328 (-0.000267) | 0.051142 / 0.004250 (0.046891) | 0.042460 / 0.037052 (0.005408) | 0.270692 / 0.258489 (0.012203) | 0.284957 / 0.293841 (-0.008884) | 0.030143 / 0.128546 (-0.098403) | 0.012148 / 0.075646 (-0.063498) | 0.203706 / 0.419271 (-0.215565) | 0.035948 / 0.043533 (-0.007584) | 0.251391 / 0.255139 (-0.003748) | 0.270908 / 0.283200 (-0.012292) | 0.018496 / 0.141683 (-0.123187) | 1.118567 / 1.452155 (-0.333587) | 1.157695 / 1.492716 (-0.335021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.135649 / 0.018006 (0.117643) | 0.281489 / 0.000490 (0.281000) | 0.000244 / 0.000200 (0.000044) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018700 / 0.037411 (-0.018711) | 0.062305 / 0.014526 (0.047779) | 0.074968 / 0.176557 (-0.101589) | 0.121490 / 0.737135 (-0.615645) | 0.075585 / 0.296338 (-0.220754) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276929 / 0.215209 (0.061720) | 2.733543 / 2.077655 (0.655888) | 1.414585 / 1.504120 (-0.089535) | 1.301975 / 1.541195 (-0.239220) | 1.336698 / 1.468490 (-0.131792) | 0.720650 / 4.584777 (-3.864127) | 2.374796 / 3.745712 (-1.370917) | 2.866534 / 5.269862 (-2.403327) | 1.819607 / 4.565676 (-2.746069) | 0.077914 / 0.424275 (-0.346361) | 0.005146 / 0.007607 (-0.002461) | 0.331722 / 0.226044 (0.105678) | 3.290875 / 2.268929 (1.021946) | 1.799806 / 55.444624 (-53.644818) | 1.476816 / 6.876477 (-5.399660) | 1.511441 / 2.142072 (-0.630631) | 0.798043 / 4.805227 (-4.007185) | 0.134577 / 6.500664 (-6.366087) | 0.042055 / 0.075469 (-0.033415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967908 / 1.841788 (-0.873880) | 11.215688 / 8.074308 (3.141380) | 9.486403 / 10.191392 (-0.704989) | 0.141864 / 0.680424 (-0.538560) | 0.013462 / 0.534201 (-0.520739) | 0.302601 / 0.579283 (-0.276682) | 0.266870 / 0.434364 (-0.167494) | 0.336963 / 0.540337 (-0.203375) | 0.425374 / 1.386936 (-0.961562) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005549 / 0.011353 (-0.005803) | 0.003464 / 0.011008 (-0.007544) | 0.051421 / 0.038508 (0.012913) | 0.032320 / 0.023109 (0.009211) | 0.269591 / 0.275898 (-0.006307) | 0.292015 / 0.323480 (-0.031465) | 0.004351 / 0.007986 (-0.003634) | 0.002772 / 0.004328 (-0.001556) | 0.048836 / 0.004250 (0.044586) | 0.039501 / 0.037052 (0.002449) | 0.282419 / 0.258489 (0.023930) | 0.312289 / 0.293841 (0.018448) | 0.031788 / 0.128546 (-0.096759) | 0.012074 / 0.075646 (-0.063572) | 0.060457 / 0.419271 (-0.358814) | 0.033106 / 0.043533 (-0.010427) | 0.270323 / 0.255139 (0.015184) | 0.287855 / 0.283200 (0.004655) | 0.017865 / 0.141683 (-0.123818) | 1.130406 / 1.452155 (-0.321749) | 1.178679 / 1.492716 (-0.314038) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093606 / 0.018006 (0.075600) | 0.297328 / 0.000490 (0.296838) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022498 / 0.037411 (-0.014913) | 0.076927 / 0.014526 (0.062401) | 0.088013 / 0.176557 (-0.088544) | 0.127279 / 0.737135 (-0.609857) | 0.089424 / 0.296338 (-0.206914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296441 / 0.215209 (0.081232) | 2.913051 / 2.077655 (0.835396) | 1.581816 / 1.504120 (0.077696) | 1.451575 / 1.541195 (-0.089620) | 1.458968 / 1.468490 (-0.009522) | 0.727191 / 4.584777 (-3.857586) | 0.954607 / 3.745712 (-2.791106) | 2.824357 / 5.269862 (-2.445505) | 1.886779 / 4.565676 (-2.678898) | 0.079397 / 0.424275 (-0.344878) | 0.005566 / 0.007607 (-0.002041) | 0.351655 / 0.226044 (0.125611) | 3.395790 / 2.268929 (1.126862) | 1.886238 / 55.444624 (-53.558387) | 1.615413 / 6.876477 (-5.261064) | 1.723922 / 2.142072 (-0.418150) | 0.807858 / 4.805227 (-3.997369) | 0.132998 / 6.500664 (-6.367667) | 0.040396 / 0.075469 (-0.035073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008527 / 1.841788 (-0.833261) | 11.736104 / 8.074308 (3.661796) | 10.283367 / 10.191392 (0.091975) | 0.141386 / 0.680424 (-0.539038) | 0.015722 / 0.534201 (-0.518479) | 0.301785 / 0.579283 (-0.277498) | 0.123073 / 0.434364 (-0.311291) | 0.340478 / 0.540337 (-0.199859) | 0.462936 / 1.386936 (-0.924000) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bfb0a414d68e945addf95a9419a8314c372e19ba \"CML watermark\")\n" ]
Fix dump of bfloat16 torch tensor
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7002/reactions" }
PR_kwDODunzps5zhBld
{ "diff_url": "https://github.com/huggingface/datasets/pull/7002.diff", "html_url": "https://github.com/huggingface/datasets/pull/7002", "merged_at": "2024-06-25T15:51:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7002.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7002" }
2024-06-25T15:38:09Z
https://api.github.com/repos/huggingface/datasets/issues/7002/comments
close https://github.com/huggingface/datasets/issues/7000
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7002/timeline
closed
false
7,002
null
2024-06-25T15:51:52Z
null
true
2,372,930,879
https://api.github.com/repos/huggingface/datasets/issues/7001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7001/events
[]
null
2024-06-25T15:21:19Z
[]
https://github.com/huggingface/datasets/issues/7001
NONE
null
null
null
[ "Ok it seems the solution is to use the directory string without the trailing \"/\" which in my case as: \r\n\r\n`parquet_dir = \"~/data/Parquet\" `\r\n\r\nStill i think this is a weird behavior... " ]
Datasetbuilder Local Download FileNotFoundError
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7001/reactions" }
I_kwDODunzps6NcA0_
null
2024-06-25T15:02:34Z
https://api.github.com/repos/huggingface/datasets/issues/7001/comments
### Describe the bug I was trying to download a dataset and save it as Parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Hugging Face. However, during the execution I get a FileNotFoundError. I debugged the code and it seems there is a bug: first it creates a .incomplete folder, and before moving its contents the following code deletes the directory [Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984), hence as a result I get: ``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete' ``` ### Steps to reproduce the bug ``` from datasets import load_dataset_builder from pathlib import Path parquet_dir = "~/data/Parquet/" Path(parquet_dir).mkdir(parents=True, exist_ok=True) builder = load_dataset_builder( "rotten_tomatoes", ) builder.download_and_prepare(parquet_dir, file_format="parquet") ``` ### Expected behavior Downloads the files and saves them as Parquet. ### Environment info Ubuntu, Python 3.10 ``` datasets 2.19.1 ```
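The workaround reported in the comment above (dropping the trailing "/") can be sketched as follows. This is a hedged adaptation of the reproduction script, not an official fix; paths are illustrative, and the `expanduser()` call is an added precaution so "~" is resolved before being handed to the builder:

```python
# Minimal sketch of the workaround discussed in the comments: pass the
# output directory WITHOUT a trailing "/" so the ".incomplete" staging
# folder is moved correctly. Paths here are illustrative.
from pathlib import Path

from datasets import load_dataset_builder

parquet_dir = str(Path("~/data/Parquet").expanduser())  # note: no trailing slash
Path(parquet_dir).mkdir(parents=True, exist_ok=True)

builder = load_dataset_builder("rotten_tomatoes")
builder.download_and_prepare(parquet_dir, file_format="parquet")
```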
{ "avatar_url": "https://avatars.githubusercontent.com/u/12601271?v=4", "events_url": "https://api.github.com/users/purefall/events{/privacy}", "followers_url": "https://api.github.com/users/purefall/followers", "following_url": "https://api.github.com/users/purefall/following{/other_user}", "gists_url": "https://api.github.com/users/purefall/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/purefall", "id": 12601271, "login": "purefall", "node_id": "MDQ6VXNlcjEyNjAxMjcx", "organizations_url": "https://api.github.com/users/purefall/orgs", "received_events_url": "https://api.github.com/users/purefall/received_events", "repos_url": "https://api.github.com/users/purefall/repos", "site_admin": false, "starred_url": "https://api.github.com/users/purefall/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/purefall/subscriptions", "type": "User", "url": "https://api.github.com/users/purefall" }
https://api.github.com/repos/huggingface/datasets/issues/7001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7001/timeline
open
false
7,001
null
null
null
false
2,372,887,585
https://api.github.com/repos/huggingface/datasets/issues/7000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7000/events
[]
null
2024-06-25T16:04:00Z
[]
https://github.com/huggingface/datasets/issues/7000
NONE
completed
null
null
[ "@lhoestq Thank you for merging #6607, but unfortunately the issue persists for `IterableDataset` :pensive: ", "Hi ! I opened https://github.com/huggingface/datasets/pull/7002 to fix this bug", "Amazing, thank you so much @lhoestq! :pray:" ]
IterableDataset: Unsupported ScalarType BFloat16
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7000/reactions" }
I_kwDODunzps6Nb2Qh
null
2024-06-25T14:43:26Z
https://api.github.com/repos/huggingface/datasets/issues/7000/comments
### Describe the bug `IterableDataset.from_generator` crashes when using BFloat16: ``` File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor args = (obj.detach().cpu().numpy(),) ^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Got unsupported ScalarType BFloat16 ``` ### Steps to reproduce the bug ```python import torch from datasets import IterableDataset def demo(x): yield {"x": x} x = torch.tensor([1.], dtype=torch.bfloat16) dataset = IterableDataset.from_generator( demo, gen_kwargs=dict(x=x), ) example = next(iter(dataset)) print(example) ``` ### Expected behavior Code sample should print: ```python {'x': tensor([1.], dtype=torch.bfloat16)} ``` ### Environment info ``` datasets==2.20.0 torch==2.2.2 ```
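Until the library fix in the linked PR, a user-side workaround is to keep BFloat16 tensors out of the pickled `gen_kwargs`. The sketch below is an assumption-laden adaptation of the reproduction script, not the merged fix: it relies on dill serializing torch tensors via NumPy (which has no bfloat16 dtype) and on examples passing through an unformatted `IterableDataset` unchanged:

```python
# Sketch of a user-side workaround (not the library fix): dill converts
# torch tensors to NumPy, which lacks bfloat16, so we pass float32 into
# gen_kwargs and cast back inside the generator.
import torch
from datasets import IterableDataset


def demo(x):
    yield {"x": x.to(torch.bfloat16)}  # cast back when producing examples


x = torch.tensor([1.0], dtype=torch.bfloat16)

dataset = IterableDataset.from_generator(
    demo,
    gen_kwargs=dict(x=x.to(torch.float32)),  # avoid BFloat16 in gen_kwargs
)
print(next(iter(dataset)))  # expected: {'x': tensor([1.], dtype=torch.bfloat16)}
```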
{ "avatar_url": "https://avatars.githubusercontent.com/u/170015089?v=4", "events_url": "https://api.github.com/users/stoical07/events{/privacy}", "followers_url": "https://api.github.com/users/stoical07/followers", "following_url": "https://api.github.com/users/stoical07/following{/other_user}", "gists_url": "https://api.github.com/users/stoical07/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stoical07", "id": 170015089, "login": "stoical07", "node_id": "U_kgDOCiI5cQ", "organizations_url": "https://api.github.com/users/stoical07/orgs", "received_events_url": "https://api.github.com/users/stoical07/received_events", "repos_url": "https://api.github.com/users/stoical07/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stoical07/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stoical07/subscriptions", "type": "User", "url": "https://api.github.com/users/stoical07" }
https://api.github.com/repos/huggingface/datasets/issues/7000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7000/timeline
closed
false
7,000
null
2024-06-25T15:51:53Z
null
false
2,372,124,589
https://api.github.com/repos/huggingface/datasets/issues/6999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6999/events
[]
null
2024-07-03T12:01:42Z
[]
https://github.com/huggingface/datasets/pull/6999
MEMBER
null
false
{ "closed_at": null, "closed_issues": 3, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 5, "state": "open", "title": "3.0", "updated_at": "2024-06-28T06:51:30Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6999). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Remove tasks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6999/reactions" }
PR_kwDODunzps5zd-ak
{ "diff_url": "https://github.com/huggingface/datasets/pull/6999.diff", "html_url": "https://github.com/huggingface/datasets/pull/6999", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6999" }
2024-06-25T09:06:16Z
https://api.github.com/repos/huggingface/datasets/issues/6999/comments
Remove tasks, as part of the 3.0 release.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6999/timeline
open
false
6,999
null
null
null
true
2,371,973,926
https://api.github.com/repos/huggingface/datasets/issues/6998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6998/events
[]
null
2024-06-25T08:22:38Z
[]
https://github.com/huggingface/datasets/pull/6998
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6998). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005396 / 0.011353 (-0.005957) | 0.003974 / 0.011008 (-0.007034) | 0.063490 / 0.038508 (0.024982) | 0.030299 / 0.023109 (0.007189) | 0.244489 / 0.275898 (-0.031409) | 0.274116 / 0.323480 (-0.049364) | 0.003187 / 0.007986 (-0.004798) | 0.003433 / 0.004328 (-0.000896) | 0.049313 / 0.004250 (0.045062) | 0.043677 / 0.037052 (0.006624) | 0.260198 / 0.258489 (0.001709) | 0.283558 / 0.293841 (-0.010283) | 0.029728 / 0.128546 (-0.098819) | 0.011950 / 0.075646 (-0.063696) | 0.204371 / 0.419271 (-0.214901) | 0.035712 / 0.043533 (-0.007821) | 0.252715 / 0.255139 (-0.002424) | 0.268906 / 0.283200 (-0.014293) | 0.021153 / 0.141683 (-0.120529) | 1.125599 / 1.452155 (-0.326556) | 1.163122 / 1.492716 (-0.329594) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095089 / 0.018006 (0.077083) | 0.298576 / 0.000490 (0.298086) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018567 / 0.037411 (-0.018844) | 0.062337 / 0.014526 (0.047811) | 0.074231 / 0.176557 (-0.102326) | 0.120960 / 0.737135 (-0.616175) | 0.076124 / 0.296338 (-0.220215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286936 / 0.215209 (0.071727) | 2.816656 / 2.077655 (0.739001) | 1.486772 / 1.504120 (-0.017348) | 1.373289 / 1.541195 (-0.167905) | 1.392739 / 1.468490 (-0.075752) | 0.708091 / 4.584777 (-3.876686) | 2.410034 / 3.745712 (-1.335678) | 2.912701 / 5.269862 (-2.357161) | 1.850924 / 4.565676 (-2.714752) | 0.078896 / 0.424275 (-0.345380) | 0.005116 / 0.007607 (-0.002491) | 0.332275 / 0.226044 (0.106231) | 3.306562 / 2.268929 (1.037633) | 1.853051 / 55.444624 (-53.591573) | 1.556210 / 6.876477 (-5.320267) | 1.558892 / 2.142072 (-0.583181) | 0.789917 / 4.805227 (-4.015310) | 0.133683 / 6.500664 (-6.366981) | 0.042566 / 0.075469 (-0.032904) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957050 / 1.841788 (-0.884738) | 11.401462 / 8.074308 (3.327154) | 9.782988 / 10.191392 (-0.408404) | 0.142127 / 0.680424 (-0.538296) | 0.014730 / 0.534201 (-0.519471) | 0.302647 / 0.579283 (-0.276636) | 0.264654 / 0.434364 (-0.169710) | 0.341340 / 0.540337 (-0.198998) | 0.425808 / 1.386936 (-0.961128) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005679 / 0.011353 (-0.005674) | 0.003513 / 0.011008 (-0.007495) | 0.050135 / 0.038508 (0.011627) | 0.031614 / 0.023109 (0.008505) | 0.260064 / 0.275898 (-0.015834) | 0.285816 / 0.323480 (-0.037664) | 0.004428 / 0.007986 (-0.003558) | 0.002816 / 0.004328 (-0.001512) | 0.048441 / 0.004250 (0.044191) | 0.039622 / 0.037052 (0.002570) | 0.274940 / 0.258489 (0.016451) | 0.311837 / 0.293841 (0.017996) | 0.031439 / 0.128546 (-0.097107) | 0.012056 / 0.075646 (-0.063590) | 0.060109 / 0.419271 (-0.359163) | 0.033123 / 0.043533 (-0.010409) | 0.261563 / 0.255139 (0.006424) | 0.282640 / 0.283200 (-0.000560) | 0.017168 / 0.141683 (-0.124515) | 1.127859 / 1.452155 (-0.324295) | 1.182414 / 1.492716 (-0.310303) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095517 / 0.018006 (0.077510) | 0.300578 / 0.000490 (0.300088) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022192 / 0.037411 (-0.015220) | 0.076617 / 0.014526 (0.062091) | 0.087405 / 0.176557 (-0.089151) | 0.127011 / 0.737135 (-0.610124) | 0.088706 / 0.296338 (-0.207632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294260 / 0.215209 (0.079051) | 2.872879 / 2.077655 (0.795224) | 1.531374 / 1.504120 (0.027254) | 1.399232 / 1.541195 (-0.141962) | 1.400708 / 1.468490 (-0.067782) | 0.714003 / 4.584777 (-3.870773) | 0.943144 / 3.745712 (-2.802568) | 2.833396 / 5.269862 (-2.436466) | 1.890570 / 4.565676 (-2.675106) | 0.077664 / 0.424275 (-0.346611) | 0.005651 / 0.007607 (-0.001956) | 0.349476 / 0.226044 (0.123431) | 3.405768 / 2.268929 (1.136840) | 1.869739 / 55.444624 (-53.574885) | 1.575293 / 6.876477 (-5.301184) | 1.692981 / 2.142072 (-0.449092) | 0.795363 / 4.805227 (-4.009865) | 0.131532 / 6.500664 (-6.369132) | 0.041183 / 0.075469 (-0.034286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000821 / 1.841788 (-0.840967) | 11.987795 / 8.074308 (3.913487) | 10.147652 / 10.191392 (-0.043740) | 0.141314 / 0.680424 (-0.539110) | 0.015506 / 0.534201 (-0.518695) | 0.305090 / 0.579283 (-0.274193) | 0.123403 / 0.434364 (-0.310960) | 0.346507 / 0.540337 (-0.193831) | 0.471318 / 1.386936 (-0.915618) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#186b560eb2393c7d1913f4b3e76e9e04a081e09b \"CML watermark\")\n" ]
Fix tests using hf-internal-testing/librispeech_asr_dummy
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6998/reactions" }
PR_kwDODunzps5zddYG
{ "diff_url": "https://github.com/huggingface/datasets/pull/6998.diff", "html_url": "https://github.com/huggingface/datasets/pull/6998", "merged_at": "2024-06-25T08:13:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/6998.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6998" }
2024-06-25T07:59:44Z
https://api.github.com/repos/huggingface/datasets/issues/6998/comments
Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet. Fix #6997.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6998/timeline
closed
false
6,998
null
2024-06-25T08:13:42Z
null
true
2,371,966,127
https://api.github.com/repos/huggingface/datasets/issues/6997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6997/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-06-25T08:13:43Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6997
MEMBER
completed
null
null
[]
CI is broken for tests using hf-internal-testing/librispeech_asr_dummy
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6997/reactions" }
I_kwDODunzps6NYVSv
null
2024-06-25T07:55:44Z
https://api.github.com/repos/huggingface/datasets/issues/6997/comments
CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996 ``` FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other'] Right contains one more item: 'other' Full diff: [ 'clean', - 'other', ] FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None ``` Note that the repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a
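A quick sketch (not part of the original report) to reproduce the failing assertion outside pytest; it assumes the config names returned reflect the post-conversion state of the repository:

```python
# Sketch reproducing the failing assertion: after the Parquet conversion
# the dummy repo exposes a single config, so the old expectations break.
from datasets import get_dataset_config_names

configs = get_dataset_config_names("hf-internal-testing/librispeech_asr_dummy")
print(configs)  # now ['clean']; the tests still expected ['clean', 'other']
```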
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6997/timeline
closed
false
6,997
null
2024-06-25T08:13:43Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,371,841,671
https://api.github.com/repos/huggingface/datasets/issues/6996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6996/events
[]
null
2024-07-01T12:36:59Z
[]
https://github.com/huggingface/datasets/pull/6996
MEMBER
null
false
{ "closed_at": null, "closed_issues": 3, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 5, "state": "open", "title": "3.0", "updated_at": "2024-06-28T06:51:30Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6996). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Remove deprecated code
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6996/reactions" }
PR_kwDODunzps5zdAg0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6996.diff", "html_url": "https://github.com/huggingface/datasets/pull/6996", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6996.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6996" }
2024-06-25T06:54:40Z
https://api.github.com/repos/huggingface/datasets/issues/6996/comments
Remove deprecated code, as part of the 3.0 release. First merge: - [x] #6983 - [x] #6987 - [ ] #6999
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6996/timeline
open
false
6,996
null
null
null
true
2,370,713,475
https://api.github.com/repos/huggingface/datasets/issues/6995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6995/events
[]
null
2024-07-16T17:51:06Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6995
NONE
completed
null
null
[ "What is the version of your installed `huggingface-hub`:\r\n```python\r\nimport huggingface_hub\r\nprint(huggingface_hub.__version__)\r\n```\r\n\r\nIt seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not still implemented. You need to update it:\r\n```\r\npip install -U huggingface-hub\r\n```\r\n\r\nNote that `CommitInfo` was implemented in huggingface-hub 0.10.0 and datasets requires \"huggingface-hub>=0.21.2\"", "The version of my huggingface-hub is 0.23.4.", "The error message says there is no CommitInfo in your installed huggingface-hub library:\r\n```\r\nImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\\Anaconda3\\envs\\CS224S\\Lib\\site-packages\\huggingface_hub_init_.py)\r\n```\r\n\r\nAnd this is implemented since version 0.10.0:\r\n- https://github.com/huggingface/huggingface_hub/pull/1066", "I am getting the exact same issue when I `import datasets`. The version of my huggingface-hub is also 0.23.4. I dont see a solution in the comments. Not sure why is this issue closed?", "I closed the issue because the problem is not related to the `datasets` library.\r\n\r\nThe problem is with your local Python environment: it seems corrupted. You could try to remove it and regenerate it again.", "I have recreated my conda environment but still run into the same issue. Here is my environment:\r\n```\r\nconda create --name esm python=3.10\r\n conda activate esm\r\n conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia\r\n pip3 install -r requirements.txt\r\n```\r\nRequirements.txt\r\n```\r\naccelerate\r\ndatasets==2.20.0\r\npyfastx\r\ntransformers\r\nboto3\r\nhuggingface_hub==0.23.4\r\n```\r\n\r\nAnd then I get:\r\n```\r\n>>> import datasets\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/fsx/ubuntu/miniconda3/envs/esm2/lib/python3.10/site-packages/datasets/__init__.py\", line 17, in <module>\r\n from .arrow_dataset import Dataset\r\n File \"/fsx/ubuntu/miniconda3/envs/esm2/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 63, in <module>\r\n from huggingface_hub import (\r\nImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (/fsx/ubuntu/miniconda3/envs/esm2/lib/python3.10/site-packages/huggingface_hub/__init__.py)\r\n>>>\r\n```\r\n\r\n", "You can check:\r\n```\r\n>>> import huggingface_hub\r\n>>> print(huggingface_hub.__version__)\r\n```", "This is what I see:\r\n```\r\n>>> import huggingface_hub\r\n>>> print(huggingface_hub.__version__)\r\n0.23.4\r\n```", "Installing `chardet` makes it work for some reason" ]
ImportError when importing datasets.load_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6995/reactions" }
I_kwDODunzps6NTjeD
null
2024-06-24T17:07:22Z
https://api.github.com/repos/huggingface/datasets/issues/6995/comments
### Describe the bug I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'. ### Steps to reproduce the bug 1. pip install git+https://github.com/huggingface/datasets 2. from datasets import load_dataset ### Expected behavior ImportError Traceback (most recent call last) Cell In[7], line 1 ----> 1 from datasets import load_dataset 3 train_set = load_dataset("mispeech/speechocean762", split="train") 4 test_set = load_dataset("mispeech/speechocean762", split="test") File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:17 1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); (...) 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 15 __version__ = "2.20.1.dev0" ---> 17 from .arrow_dataset import Dataset 18 from .arrow_reader import ReadInstruction 19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63 61 import pyarrow.compute as pc 62 from fsspec.core import url_to_fs ---> 63 from huggingface_hub import ( 64 CommitInfo, 65 CommitOperationAdd, ... 70 ) 71 from huggingface_hub.hf_api import RepoFile 72 from multiprocess import Pool ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
### Environment info Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub $ datasets-cli env Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module> File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module> from .arrow_dataset import Dataset File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) (CS224S)
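A minimal diagnostic sketch for this kind of ImportError (an editorial addition, not from the thread): check which `huggingface_hub` the interpreter actually resolves, since a stale or shadowed install can lack `CommitInfo` even when pip reports a recent version:

```python
# Sketch: verify the huggingface_hub that Python actually imports.
# CommitInfo exists since huggingface_hub 0.10.0, so a missing attribute
# on a supposedly recent version points at a shadowed or corrupted install.
import huggingface_hub

print(huggingface_hub.__version__)
print(huggingface_hub.__file__)  # which installation is being imported
print(hasattr(huggingface_hub, "CommitInfo"))
```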
{ "avatar_url": "https://avatars.githubusercontent.com/u/124846947?v=4", "events_url": "https://api.github.com/users/Leo-Lsc/events{/privacy}", "followers_url": "https://api.github.com/users/Leo-Lsc/followers", "following_url": "https://api.github.com/users/Leo-Lsc/following{/other_user}", "gists_url": "https://api.github.com/users/Leo-Lsc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Leo-Lsc", "id": 124846947, "login": "Leo-Lsc", "node_id": "U_kgDOB3EDYw", "organizations_url": "https://api.github.com/users/Leo-Lsc/orgs", "received_events_url": "https://api.github.com/users/Leo-Lsc/received_events", "repos_url": "https://api.github.com/users/Leo-Lsc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Leo-Lsc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Leo-Lsc/subscriptions", "type": "User", "url": "https://api.github.com/users/Leo-Lsc" }
https://api.github.com/repos/huggingface/datasets/issues/6995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6995/timeline
closed
false
6,995
null
2024-06-25T06:11:37Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,370,491,689
https://api.github.com/repos/huggingface/datasets/issues/6994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6994/events
[]
null
2024-06-26T04:37:35Z
[]
https://github.com/huggingface/datasets/pull/6994
CONTRIBUTOR
null
false
null
[ "Sure~", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6994). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005538 / 0.011353 (-0.005815) | 0.003997 / 0.011008 (-0.007011) | 0.063444 / 0.038508 (0.024935) | 0.032552 / 0.023109 (0.009442) | 0.266574 / 0.275898 (-0.009324) | 0.282841 / 0.323480 (-0.040639) | 0.004279 / 0.007986 (-0.003706) | 0.002788 / 0.004328 (-0.001540) | 0.049226 / 0.004250 (0.044976) | 0.044688 / 0.037052 (0.007636) | 0.275464 / 0.258489 (0.016975) | 0.305278 / 0.293841 (0.011437) | 0.030097 / 0.128546 (-0.098450) | 0.012237 / 0.075646 (-0.063410) | 0.205526 / 0.419271 (-0.213745) | 0.036145 / 0.043533 (-0.007388) | 0.267395 / 0.255139 (0.012256) | 0.289149 / 0.283200 (0.005949) | 0.019044 / 0.141683 (-0.122639) | 1.162294 / 1.452155 (-0.289861) | 1.183642 / 1.492716 (-0.309074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.139125 / 0.018006 (0.121119) | 0.301743 / 0.000490 (0.301253) | 0.000260 / 0.000200 (0.000061) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019494 / 0.037411 (-0.017917) | 0.063078 / 0.014526 (0.048552) | 0.076989 / 0.176557 (-0.099567) | 0.121363 / 0.737135 (-0.615773) | 0.080040 / 0.296338 (-0.216298) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284401 / 0.215209 (0.069192) | 2.805397 / 2.077655 (0.727742) | 1.555609 / 1.504120 (0.051489) | 1.405662 / 1.541195 (-0.135533) | 1.459492 / 1.468490 (-0.008999) | 0.718376 / 4.584777 (-3.866401) | 2.395918 / 3.745712 (-1.349794) | 2.976753 / 5.269862 (-2.293108) | 1.883938 / 4.565676 (-2.681738) | 0.078867 / 0.424275 (-0.345408) | 0.005207 / 0.007607 (-0.002400) | 0.335178 / 0.226044 (0.109133) | 3.313414 / 2.268929 (1.044485) | 1.856929 / 55.444624 (-53.587696) | 1.565319 / 6.876477 (-5.311158) | 1.592723 / 2.142072 (-0.549350) | 0.793621 / 4.805227 (-4.011606) | 0.134208 / 6.500664 (-6.366456) | 0.042853 / 0.075469 (-0.032616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981553 / 1.841788 (-0.860235) | 11.810438 / 8.074308 (3.736130) | 9.529874 / 10.191392 (-0.661518) | 0.142216 / 0.680424 (-0.538207) | 0.014303 / 0.534201 (-0.519898) | 0.304600 / 0.579283 (-0.274684) | 0.261869 / 0.434364 (-0.172495) | 0.347301 / 0.540337 (-0.193036) | 0.437395 / 1.386936 (-0.949541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005881 / 0.011353 (-0.005472) | 0.004039 / 0.011008 (-0.006969) | 0.050241 / 0.038508 (0.011733) | 0.032670 / 0.023109 (0.009561) | 0.264940 / 0.275898 (-0.010959) | 0.287105 / 0.323480 (-0.036374) | 0.004844 / 0.007986 (-0.003142) | 0.002867 / 0.004328 (-0.001462) | 0.048083 / 0.004250 (0.043833) | 0.040965 / 0.037052 (0.003913) | 0.274390 / 0.258489 (0.015901) | 0.312107 / 0.293841 (0.018266) | 0.031714 / 0.128546 (-0.096832) | 0.012603 / 0.075646 (-0.063043) | 0.060698 / 0.419271 (-0.358573) | 0.033130 / 0.043533 (-0.010402) | 0.264444 / 0.255139 (0.009305) | 0.282797 / 0.283200 (-0.000403) | 0.027872 / 0.141683 (-0.113811) | 1.139026 / 1.452155 (-0.313129) | 1.181431 / 1.492716 (-0.311285) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097314 / 0.018006 (0.079308) | 0.301326 / 0.000490 (0.300836) | 0.000215 / 0.000200 (0.000015) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023394 / 0.037411 (-0.014018) | 0.076270 / 0.014526 (0.061744) | 0.089065 / 0.176557 (-0.087491) | 0.129996 / 0.737135 (-0.607139) | 0.089642 / 0.296338 (-0.206697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295390 / 0.215209 (0.080181) | 2.877849 / 2.077655 (0.800194) | 1.537129 / 1.504120 (0.033009) | 1.409441 / 1.541195 (-0.131754) | 1.432468 / 1.468490 (-0.036023) | 0.718054 / 4.584777 (-3.866722) | 0.930872 / 3.745712 (-2.814841) | 2.841028 / 5.269862 (-2.428834) | 1.921990 / 4.565676 (-2.643686) | 0.077638 / 0.424275 (-0.346637) | 0.005494 / 0.007607 (-0.002113) | 0.336331 / 0.226044 (0.110287) | 3.330490 / 2.268929 (1.061561) | 1.887994 / 55.444624 (-53.556630) | 1.593332 / 6.876477 (-5.283144) | 1.726956 / 2.142072 (-0.415116) | 0.783612 / 4.805227 (-4.021615) | 0.129926 / 6.500664 (-6.370738) | 0.040792 / 0.075469 (-0.034677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980274 / 1.841788 (-0.861514) | 12.193871 / 8.074308 (4.119563) | 10.348934 / 10.191392 (0.157542) | 0.141584 / 0.680424 (-0.538840) | 0.015737 / 0.534201 (-0.518464) | 0.300725 / 0.579283 (-0.278558) | 0.127190 / 0.434364 (-0.307174) | 0.341142 / 0.540337 (-0.199196) | 0.459523 / 1.386936 (-0.927413) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#637246baf96f07b19b193ed101f34b65cb35cffb \"CML watermark\")\n" ]
Fix incorrect rank value in data splitting
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6994/reactions" }
PR_kwDODunzps5zYYXr
{ "diff_url": "https://github.com/huggingface/datasets/pull/6994.diff", "html_url": "https://github.com/huggingface/datasets/pull/6994", "merged_at": "2024-06-25T16:19:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6994.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6994" }
2024-06-24T15:07:47Z
https://api.github.com/repos/huggingface/datasets/issues/6994/comments
Fix #6990.
{ "avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4", "events_url": "https://api.github.com/users/yzhangcs/events{/privacy}", "followers_url": "https://api.github.com/users/yzhangcs/followers", "following_url": "https://api.github.com/users/yzhangcs/following{/other_user}", "gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yzhangcs", "id": 18402347, "login": "yzhangcs", "node_id": "MDQ6VXNlcjE4NDAyMzQ3", "organizations_url": "https://api.github.com/users/yzhangcs/orgs", "received_events_url": "https://api.github.com/users/yzhangcs/received_events", "repos_url": "https://api.github.com/users/yzhangcs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions", "type": "User", "url": "https://api.github.com/users/yzhangcs" }
https://api.github.com/repos/huggingface/datasets/issues/6994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6994/timeline
closed
false
6,994
null
2024-06-25T16:19:17Z
null
true
2,370,444,104
https://api.github.com/repos/huggingface/datasets/issues/6993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6993/events
[]
null
2024-07-08T13:10:53Z
[]
https://github.com/huggingface/datasets/pull/6993
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6993). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005810 / 0.011353 (-0.005543) | 0.003984 / 0.011008 (-0.007024) | 0.064347 / 0.038508 (0.025839) | 0.031943 / 0.023109 (0.008834) | 0.252596 / 0.275898 (-0.023302) | 0.274032 / 0.323480 (-0.049448) | 0.003494 / 0.007986 (-0.004492) | 0.002817 / 0.004328 (-0.001511) | 0.050132 / 0.004250 (0.045881) | 0.048008 / 0.037052 (0.010955) | 0.249037 / 0.258489 (-0.009452) | 0.288526 / 0.293841 (-0.005315) | 0.031038 / 0.128546 (-0.097509) | 0.012542 / 0.075646 (-0.063104) | 0.205682 / 0.419271 (-0.213590) | 0.038022 / 0.043533 (-0.005511) | 0.259001 / 0.255139 (0.003862) | 0.267455 / 0.283200 (-0.015744) | 0.021980 / 0.141683 (-0.119703) | 1.123996 / 1.452155 (-0.328159) | 1.173801 / 1.492716 (-0.318915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102827 / 0.018006 (0.084821) | 0.317210 / 0.000490 (0.316720) | 0.000222 / 0.000200 (0.000022) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019483 / 0.037411 (-0.017928) | 0.064098 / 0.014526 (0.049572) | 0.076219 / 0.176557 (-0.100337) | 0.122898 / 0.737135 (-0.614237) | 0.080657 / 0.296338 (-0.215681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278378 / 0.215209 (0.063169) | 2.792314 / 2.077655 (0.714659) | 1.516439 / 1.504120 (0.012319) | 1.374052 / 1.541195 (-0.167143) | 1.370848 / 1.468490 (-0.097642) | 0.756002 / 4.584777 (-3.828775) | 2.349581 / 3.745712 (-1.396131) | 2.994094 / 5.269862 (-2.275768) | 1.904242 / 4.565676 (-2.661435) | 0.078769 / 0.424275 (-0.345506) | 0.005103 / 0.007607 (-0.002505) | 0.336331 / 0.226044 (0.110287) | 3.329502 / 2.268929 (1.060574) | 1.863545 / 55.444624 (-53.581079) | 1.554690 / 6.876477 (-5.321787) | 1.588448 / 2.142072 (-0.553624) | 0.787322 / 4.805227 (-4.017905) | 0.138345 / 6.500664 (-6.362320) | 0.042228 / 0.075469 (-0.033241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968607 / 1.841788 (-0.873181) | 11.972076 / 8.074308 (3.897768) | 9.927608 / 10.191392 (-0.263784) | 0.141666 / 0.680424 (-0.538758) | 0.014591 / 0.534201 (-0.519610) | 0.301995 / 0.579283 (-0.277288) | 0.274360 / 0.434364 (-0.160004) | 0.338396 / 0.540337 (-0.201941) | 0.431081 / 1.386936 (-0.955855) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006122 / 0.011353 (-0.005231) | 0.004201 / 0.011008 (-0.006807) | 0.050204 / 0.038508 (0.011695) | 0.033222 / 0.023109 (0.010113) | 0.274357 / 0.275898 (-0.001542) | 0.296238 / 0.323480 (-0.027242) | 0.004542 / 0.007986 (-0.003444) | 0.002880 / 0.004328 (-0.001449) | 0.049103 / 0.004250 (0.044852) | 0.042294 / 0.037052 (0.005242) | 0.286459 / 0.258489 (0.027970) | 0.324988 / 0.293841 (0.031147) | 0.032084 / 0.128546 (-0.096462) | 0.012329 / 0.075646 (-0.063318) | 0.060261 / 0.419271 (-0.359010) | 0.034130 / 0.043533 (-0.009403) | 0.271432 / 0.255139 (0.016293) | 0.306251 / 0.283200 (0.023051) | 0.019744 / 0.141683 (-0.121939) | 1.153483 / 1.452155 (-0.298672) | 1.209126 / 1.492716 (-0.283591) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.004737 / 0.018006 (-0.013270) | 0.313458 / 0.000490 (0.312968) | 0.000216 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022472 / 0.037411 (-0.014939) | 0.076725 / 0.014526 (0.062199) | 0.091356 / 0.176557 (-0.085201) | 0.132427 / 0.737135 (-0.604708) | 0.091072 / 0.296338 (-0.205266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294414 / 0.215209 (0.079205) | 2.913695 / 2.077655 (0.836040) | 1.567309 / 1.504120 (0.063189) | 1.448664 / 1.541195 (-0.092531) | 1.466386 / 1.468490 (-0.002105) | 0.718605 / 4.584777 (-3.866172) | 0.951963 / 3.745712 (-2.793749) | 2.812565 / 5.269862 (-2.457297) | 1.886483 / 4.565676 (-2.679193) | 0.077912 / 0.424275 (-0.346363) | 0.005371 / 0.007607 (-0.002236) | 0.349528 / 0.226044 (0.123484) | 3.431049 / 2.268929 (1.162121) | 1.920210 / 55.444624 (-53.524414) | 1.637927 / 6.876477 (-5.238549) | 1.767502 / 2.142072 (-0.374570) | 0.808672 / 4.805227 (-3.996555) | 0.134261 / 6.500664 (-6.366403) | 0.041295 / 0.075469 (-0.034174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.023454 / 1.841788 (-0.818334) | 12.433731 / 8.074308 (4.359423) | 10.413191 / 10.191392 (0.221799) | 0.156813 / 0.680424 (-0.523611) | 0.015446 / 0.534201 (-0.518755) | 0.301935 / 0.579283 (-0.277348) | 0.133655 / 0.434364 (-0.300709) | 0.340296 / 0.540337 (-0.200041) | 0.466314 / 1.386936 (-0.920622) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6cf563fd57807e923a29ebbe327fecb4ef969877 \"CML watermark\")\n", "Hi @lhoestq,\r\n\r\nI was confused by `legacy` prefix added to the [image data loading](https://huggingface.co/docs/datasets/main/en/image_dataset#legacy-loading-script) script section. I have a custom image dataset and have looked through the documentation to find something similar but can't find a good alternative What is now the recommend way to create a custom image dataset then? I want the HF format but will not host it on the hub.\r\n\r\nApologies in advance if this is the wrong place to ask such questions...", "We stopped making new features for datasets with scripts for obvious security reasons, that's why they are marked as \"legacy\". What is blocking you from hosting on HF ?", "Hi, thanks for the prompt answer :) I am working on proprietary datasets for the company where I am employed. We want to keep the data in-house but would like to investigate the use of the HF ecosystem.", "I see ! 
Note that it's possible to have private repos on HF (+ dataset viewer) and you can even choose the storage region, if it can help" ]
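For the custom image dataset question in the thread above, a minimal sketch of the script-free route (the path and repo id are placeholders, not values from the thread):

```python
# Minimal sketch, assuming an images/<label>/<file> directory layout:
# the built-in "imagefolder" builder loads the images and infers labels
# from the directory names, with no loading script required.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/path/to/my_images")

# Data can stay in-house on the Hub via a private repo if desired:
# ds.push_to_hub("my-org/my-image-dataset", private=True)
```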
less script docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6993/reactions" }
PR_kwDODunzps5zYN7P
{ "diff_url": "https://github.com/huggingface/datasets/pull/6993.diff", "html_url": "https://github.com/huggingface/datasets/pull/6993", "merged_at": "2024-06-27T09:31:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6993.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6993" }
2024-06-24T14:45:28Z
https://api.github.com/repos/huggingface/datasets/issues/6993/comments
+ mark as legacy in some parts of the docs, since we won't build new features for script datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6993/timeline
closed
false
6,993
null
2024-06-27T09:31:21Z
null
true
2,367,890,622
https://api.github.com/repos/huggingface/datasets/issues/6992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6992/events
[]
null
2024-06-25T15:43:05Z
[]
https://github.com/huggingface/datasets/issues/6992
NONE
null
null
null
[ "Hi ! can you try updating `datasets` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U datasets huggingface_hub\r\n```" ]
Dataset with streaming doesn't work with proxy
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6992/reactions" }
I_kwDODunzps6NIyS-
null
2024-06-22T16:12:08Z
https://api.github.com/repos/huggingface/datasets/issues/6992/comments
### Describe the bug I'm currently trying to stream data using dataset since the dataset is too big but it hangs indefinitely without loading the first batch. I use AIMOS which is a supercomputer that uses proxy to connect to the internet. I assume it has to do with the network configurations. I've already set up both HTTP_PROXY and HTTPS_PROXY. streaming = False works fine. ### Steps to reproduce the bug use load_dataset with streaming = True in AIMOS ### Expected behavior does not hang indefinitely and loads batches to start training run ### Environment info _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_gnu conda-forge _pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 abseil-cpp 20220623.0 h9888cd1_6 conda-forge absl-py 1.0.0 py311h399429b_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 aiofiles 23.2.1 pyhd8ed1ab_0 conda-forge aiohttp 3.8.6 py311hf118e41_0 aiosignal 1.2.0 pyhd3eb1b0_0 archspec 0.2.3 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 ha3edaa6_5_cpu conda-forge async-timeout 4.0.2 py311h6ffa863_0 attrs 23.1.0 py311h6ffa863_0 av 10.0.0 py311he6153ed_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 aws-c-auth 0.6.24 hb81f6d7_5 conda-forge aws-c-cal 0.5.20 h3c2b4d9_6 conda-forge aws-c-common 0.8.11 h4194056_0 conda-forge aws-c-compression 0.2.16 ha19333d_3 conda-forge aws-c-event-stream 0.2.18 h12a9399_6 conda-forge aws-c-http 0.7.4 ha2cde00_2 conda-forge aws-c-io 0.13.17 h9189062_2 conda-forge aws-c-mqtt 0.8.6 h40d1a04_6 conda-forge aws-c-s3 0.2.4 hbdbe4f0_3 conda-forge aws-c-sdkutils 0.1.7 ha19333d_3 conda-forge aws-checksums 0.1.14 ha19333d_3 conda-forge aws-crt-cpp 0.19.7 hd018011_7 conda-forge aws-sdk-cpp 1.10.57 hb9575ba_4 conda-forge blas 1.0 openblas blinker 1.8.2 pyhd8ed1ab_0 conda-forge boltons 23.0.0 py311h6ffa863_0 boost-cpp 1.82.0 h25e6d66_2 bottleneck 1.3.5 py311h34f6284_0 brotli 1.0.9 hf118e41_7 brotli-bin 1.0.9 hf118e41_7 brotli-python 1.0.9 py311h4a02239_7 bzip2 1.0.8 h7b6447c_0 c-ares 1.19.1 hf118e41_0 ca-certificates 2024.6.2 h0f6029e_0 conda-forge cachetools 5.3.3 pyhd8ed1ab_0 conda-forge certifi 2024.6.2 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311hf118e41_3 charset-normalizer 2.0.4 pyhd3eb1b0_0 click 8.1.7 unix_pyh707e725_0 conda-forge conda 24.5.0 py311h1af927a_0 conda-forge conda-content-trust 0.2.0 py311h6ffa863_0 conda-libmamba-solver 23.11.1 py311h6ffa863_0 conda-package-handling 2.2.0 py311h6ffa863_0 conda-package-streaming 0.9.0 py311h6ffa863_0 contourpy 1.0.5 py311h25e6d66_0 cryptography 41.0.3 py311hb0e80e7_0 cudatoolkit 11.8.0 hedcfb66_13 conda-forge cudnn 8.9.2_11.8 h9ceb136_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 cycler 0.11.0 pyhd3eb1b0_0 datasets 2.12.0 py311h6ffa863_0 dill 0.3.6 py311h6ffa863_0 distro 1.9.0 pyhd8ed1ab_0 conda-forge ffmpeg 4.2.2 opence_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 filelock 3.9.0 py311h6ffa863_0 fmt 9.1.0 h25e6d66_0 fonttools 4.25.0 pyhd3eb1b0_0 freetype 2.12.1 hd23a775_0 frozendict 2.4.4 py311hb02d432_0 conda-forge frozenlist 1.4.0 py311hf118e41_0 fsspec 2023.9.2 py311h6ffa863_0 gflags 2.2.2 he6710b0_0 giflib 5.2.1 hf118e41_3 glog 0.6.0 hbe088e0_0 conda-forge gmp 6.3.0 h46f38da_0 conda-forge gmpy2 2.1.5 py311h2758da7_1 conda-forge google-auth 2.30.0 pyhff2d567_0 conda-forge google-auth-oauthlib 0.5.3 pyhd8ed1ab_0 conda-forge grpc-cpp 1.51.1 h8ba971d_1 conda-forge grpcio 1.54.3 py311h414e0d3_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 huggingface_hub 0.17.3 py311h6ffa863_0 icu 73.1 h4a02239_0 idna 3.4 py311h6ffa863_0 importlib-metadata 6.0.0 py311h6ffa863_0 jinja2 3.1.4 pyhd8ed1ab_0 
conda-forge jpeg 9e hf118e41_1 jsonpatch 1.32 pyhd3eb1b0_0 jsonpointer 2.1 pyhd3eb1b0_0 kiwisolver 1.4.4 py311h4a02239_0 krb5 1.20.1 hc019ccd_1 lame 3.100 hb283c62_1003 conda-forge lcms2 2.12 h2045e0b_0 ld_impl_linux-ppc64le 2.38 hec883e6_1 lerc 3.0 h29c3540_0 leveldb 1.23 h24532b4_1 conda-forge libabseil 20220623.0 cxx17_h9235812_6 conda-forge libarchive 3.6.2 hd8ab008_2 libarrow 11.0.0 h837770b_5_cpu conda-forge libboost 1.82.0 haf51a6a_2 libbrotlicommon 1.0.9 hf118e41_7 libbrotlidec 1.0.9 hf118e41_7 libbrotlienc 1.0.9 hf118e41_7 libcrc32c 1.1.2 h3b9df90_0 conda-forge libcurl 8.4.0 h4d62439_0 libdeflate 1.17 hf118e41_1 libedit 3.1.20221030 hf118e41_0 libev 4.33 h140841e_1 libevent 2.1.10 h19c23f1_4 conda-forge libexpat 2.6.2 h46f38da_0 conda-forge libffi 3.4.4 h4a02239_0 libgcc-ng 13.2.0 h31e42bb_10 conda-forge libgfortran-ng 11.2.0 hb3889a9_1 libgfortran5 11.2.0 h1234567_1 libgomp 13.2.0 h31e42bb_10 conda-forge libgoogle-cloud 2.7.0 h11140b6_1 conda-forge libgrpc 1.51.1 h4d29a31_1 conda-forge libmamba 1.5.3 h7c6fafd_0 libmambapy 1.5.3 py311h828bf7b_0 libnghttp2 1.57.0 h44e5816_0 libnsl 2.0.1 ha17a0cc_0 conda-forge libopenblas 0.3.23 hc5a31fb_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 libopus 1.3.1 h4e0d66e_1 conda-forge libpng 1.6.39 hf118e41_0 libprotobuf 3.21.12 h1776448_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 libsolv 0.7.24 h0f529ac_0 libsqlite 3.45.3 hd4bbf49_0 conda-forge libssh2 1.10.0 h50fa78f_2 libstdcxx-ng 13.2.0 h262982c_10 conda-forge libthrift 0.18.0 h82f1162_0 conda-forge libtiff 4.5.1 h4a02239_0 libutf8proc 2.8.0 hb283c62_0 conda-forge libuuid 2.38.1 h4194056_0 conda-forge libvpx 1.13.1 h46f38da_0 conda-forge libwebp 1.3.2 h0f96ee2_0 libwebp-base 1.3.2 hf118e41_0 libxcrypt 4.4.36 ha17a0cc_1 conda-forge libxml2 2.10.4 h18e3229_1 libzlib 1.2.13 h1f2b957_6 conda-forge llvm-openmp 14.0.6 hc028133_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 lmdb 0.9.31 ha17a0cc_1 conda-forge lz4-c 1.9.4 h4a02239_0 markdown 3.4.4 pyhd8ed1ab_0 conda-forge markupsafe 2.1.5 py311h32d8acf_0 conda-forge matplotlib 3.8.0 py311h6ffa863_0 matplotlib-base 3.8.0 py311h52e1fcc_0 menuinst 2.1.1 py311h1af927a_0 conda-forge mpc 1.3.1 heaf1863_0 conda-forge mpfr 4.2.1 haad2271_1 conda-forge mpmath 1.3.0 pyhd8ed1ab_0 conda-forge multidict 6.0.2 py311hf118e41_0 multiprocess 0.70.14 py311h6ffa863_0 munkres 1.1.4 py_0 mypy_extensions 1.0.0 pyha770c72_0 conda-forge nccl 2.18.3 cuda11.8_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 ncurses 6.4 h4a02239_0 nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge networkx 2.8.8 pyhd8ed1ab_0 conda-forge nomkl 3.0 0 https://ftp.osuosl.org/pub/open-ce/1.10.0 numactl 2.0.16 hba61f60_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 numexpr 2.8.7 py311hc46fc55_0 numpy 1.24.3 py311h148a09e_0 numpy-base 1.24.3 py311h06b82f6_0 oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge openjpeg 2.4.0 hfe35807_0 openssl 3.3.1 h1f2b957_0 conda-forge orc 1.8.2 h341c9a4_2 conda-forge packaging 23.1 py311h6ffa863_0 pandas 2.1.1 py311h52e1fcc_0 pcre2 10.42 h280155c_0 pillow 10.0.1 py311he33076b_0 pip 23.3 py311h6ffa863_0 platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge pluggy 1.0.0 py311h6ffa863_1 pooch 1.8.2 pyhd8ed1ab_0 conda-forge protobuf 4.21.12 py311ha7baec7_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 psutil 5.9.8 py311hd26027c_0 conda-forge pyarrow 11.0.0 py311h04a18d5_1 pyasn1 0.6.0 pyhd8ed1ab_0 conda-forge pyasn1-modules 0.4.0 pyhd8ed1ab_0 conda-forge pybind11-abi 4 hd3eb1b0_1 pycosat 0.6.6 py311hf118e41_0 pycparser 2.21 pyhd3eb1b0_0 pyjwt 2.8.0 pyhd8ed1ab_1 conda-forge pyopenssl 23.2.0 py311h6ffa863_0 pyparsing 
3.0.9 py311h6ffa863_0 pyre-extensions 0.0.30 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 py311h6ffa863_0 python 3.11.8 h3332dee_0_cpython conda-forge python-dateutil 2.8.2 pyhd3eb1b0_0 python-tzdata 2023.3 pyhd3eb1b0_0 python-xxhash 2.0.2 py311hf118e41_1 python_abi 3.11 4_cp311 conda-forge pytorch 2.0.1 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytorch-base 2.0.1 cuda11.8_py311_pb4.21.12_4 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytz 2023.3.post1 py311h6ffa863_0 pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge pyyaml 6.0.1 py311hf118e41_0 re2 2023.02.01 h883269e_0 conda-forge readline 8.2 hf118e41_0 regex 2023.10.3 py311hf118e41_0 reproc 14.2.4 h29c3540_1 reproc-cpp 14.2.4 h29c3540_1 requests 2.31.0 py311h6ffa863_0 requests-oauthlib 2.0.0 pyhd8ed1ab_0 conda-forge responses 0.13.3 pyhd3eb1b0_0 rsa 4.9 pyhd8ed1ab_0 conda-forge ruamel.yaml 0.17.21 py311hf118e41_0 s2n 1.3.37 h5e47323_0 conda-forge safetensors 0.4.0 py311hda16d9e_0 scipy 1.11.1 py311hd69e9bb_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 sentencepiece 0.1.97 h1e74c73_py311_pb4.21.12_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 setuptools 68.0.0 py311h6ffa863_0 six 1.16.0 pyhd3eb1b0_1 snappy 1.1.9 h29c3540_0 sqlite 3.41.2 hf118e41_0 sympy 1.12.1 pypyh2585a3b_103 conda-forge tabulate 0.8.10 pyhd8ed1ab_0 conda-forge tensorboard 2.13.0 pyhab0730d_pb4.21.12_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-data-server 0.7.0 pyh6f84499_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-plugin-wit 1.6.0 pyh9f0ad1d_0 conda-forge tk 8.6.13 hd4bbf49_0 conda-forge tokenizers 0.13.3 py311h3d4f45a_0 torchdata 0.6.0 py311_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchsnapshot 0.1.0 pyhd8ed1ab_0 conda-forge torchtext-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchtnt 0.2.4 pyhd8ed1ab_0 conda-forge torchvision-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tornado 6.3.3 py311hf118e41_0 tqdm 4.65.0 py311h7837921_0 transformers 4.32.1 py311h6ffa863_0 truststore 0.8.0 py311h6ffa863_0 typing-extensions 4.7.1 py311h6ffa863_0 typing_extensions 4.7.1 py311h6ffa863_0 typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge tzdata 2023c h04d1e81_0 urllib3 1.26.18 py311h6ffa863_0 utf8proc 2.6.1 h140841e_0 werkzeug 2.3.8 pyhd8ed1ab_0 conda-forge wheel 0.41.2 py311h6ffa863_0 xxhash 0.8.0 h140841e_3 xz 5.4.2 hf118e41_0 yaml 0.2.5 h7b6447c_0 yaml-cpp 0.8.0 h4a02239_0 yarl 1.8.1 py311hf118e41_0 zipp 3.11.0 py311h6ffa863_0 zlib 1.2.13 h1f2b957_6 conda-forge zstandard 0.19.0 py311hf118e41_0 zstd 1.5.5 h57e4825_0
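A small debugging sketch for the proxy report above (a diagnostic aid, not a confirmed fix; the dataset id is the one from the report): streaming fetches data over HTTP at iteration time, so the proxy variables must be visible to the process that iterates, not only to the login shell.

```python
# Verify the proxy settings are actually visible to this Python process.
import os

for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    print(var, "=", os.environ.get(var))

from datasets import load_dataset

ds = load_dataset("fla-hub/slimpajama-test", split="train", streaming=True)
print(next(iter(ds)))  # with a working proxy this returns a sample instead of hanging
```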
{ "avatar_url": "https://avatars.githubusercontent.com/u/57779173?v=4", "events_url": "https://api.github.com/users/YHL04/events{/privacy}", "followers_url": "https://api.github.com/users/YHL04/followers", "following_url": "https://api.github.com/users/YHL04/following{/other_user}", "gists_url": "https://api.github.com/users/YHL04/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YHL04", "id": 57779173, "login": "YHL04", "node_id": "MDQ6VXNlcjU3Nzc5MTcz", "organizations_url": "https://api.github.com/users/YHL04/orgs", "received_events_url": "https://api.github.com/users/YHL04/received_events", "repos_url": "https://api.github.com/users/YHL04/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YHL04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YHL04/subscriptions", "type": "User", "url": "https://api.github.com/users/YHL04" }
https://api.github.com/repos/huggingface/datasets/issues/6992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6992/timeline
open
false
6,992
null
null
null
false
2,367,711,094
https://api.github.com/repos/huggingface/datasets/issues/6991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6991/events
[]
null
2024-07-12T12:11:18Z
[]
https://github.com/huggingface/datasets/pull/6991
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6991). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@albertvillanova Any chance we could get this in before the next release? Everything depending on HuggingFace has their NumPy upgrade blocked.", "The incompatible libraries are:\r\n- faiss-cpu 1.8.0.post1 requires numpy<2.0,>=1.0, but you have numpy 2.0.0 which is incompatible.\r\n- tensorflow 2.16.2 requires numpy<2.0.0,>=1.23.5; python_version <= \"3.11\", but you have numpy 2.0.0 which is incompatible.\r\n- transformers 4.42.3 requires numpy<2.0,>=1.17, but you have numpy 2.0.0 which is incompatible.", "Why is it installing numpy 2 if the dependencies don't support it?", "For me, I'm getting:\r\n```\r\n❯ uv pip install --system \"datasets[tests] @ .\"\r\nFound existing alias for \"uv pip install\". You should use: \"pipi\"\r\nResolved 119 packages in 934ms\r\n Built datasets @ file:///Users/neil/src/datasets\r\nPrepared 1 package in 1.28s\r\nUninstalled 1 package in 10ms\r\nInstalled 2 packages in 17ms\r\n - datasets==2.20.1.dev0 (from file:///Users/neil/src/datasets)\r\n + datasets==2.20.1.dev0 (from file:///Users/neil/src/datasets)\r\n + numpy==1.26.4\r\n```", "Which version on Python do you have?", "3.12.4 I'll try on 3.10 now.", "Please, note that I obtained the previous incompatible libraries in my local environment, by forcing the update of numpy.", "In the Python 3.10 CI, the situation is different:\r\n- for example, they install an older version of tensorflow (2.14.0), where probably the constraint on numpy was not yet implemented. See the details: https://github.com/huggingface/datasets/actions/runs/9879100332/job/27306903343?pr=6991\r\n```\r\n> uv pip install --system \"datasets[tests] @ .\"\r\n...\r\n + faiss-cpu==1.8.0\r\n...\r\n + numpy==2.0.0\r\n...\r\n + tensorflow==2.14.0\r\n```\r\n\r\nSee, CI installs:\r\n- faiss-cpu 1.8.0 instead of 1.8.0.post1\r\n- tensorflow 2.14.0 instead of 2.16.2\r\n- transformers 4.41.2 instead of 4.42.3", "~~The main point is that we cannot support numpy 2.0 until tensorflow and faiss do.~~\r\n\r\nAlternatively, we should ignore/select tests depending on the installed versions.", "> Alternatively, we should ignore/select tests depending on the installed versions.\r\n\r\nThat works.\r\n\r\nAlternatively, you could depend on tensorflow >= 2.16.2 (etc.) for the tests?", "Yes, I was thinking of a workaround solution.\r\n\r\nThe issue I see is that our CI will not test numpy 2.0 indeed.", "> The issue I see is that our CI will not test numpy 2.0 indeed.\r\n\r\nRight, that's the advantage of the test skipping you wanted, I see your point.\r\n\r\nThing is, it won't be long before tensorflow supports numpy 2.0, and then the situation is resolved and your tests test numpy 2.0. Do you really want to invest a lot of effort into testing numpy 2.0 for a few months benefit?", "Without testing Numpy 2.0, we do not know if there are some other parts in the code broken.", "> Without testing Numpy 2.0, we do not know if there are some other parts in the code broken.\r\n\r\nYes, you're right. I understand you're point, but you could say this for anything that your test dependencies don't support.\r\n\r\nI guess the solution is to write tests that don't depend on tensorflow, etc., but still use numpy. You could write some Jax tests for example.\r\n\r\nThat said, blocking numpy 2 isn't a good solution in my opinion. 
These dependencies are extremely late in supporting Numpy 2. They were supposed to be testing against preview releases over three months ago. I don't think the world should have to wait for them.", "> I guess the solution is to write tests that don't depend on tensorflow, etc., but still use numpy.\r\nThat is my point. What we cannot do is just blindly support Numpy 2.0 without knowing its consequences. We need to test it:\r\n- to know if our core code works with it\r\n- to know what optional libraries are incompatible\r\n\r\nFor example, while testing locally, I have discovered that librosa is also incompatible with numpy-2.0, due to its dependency on soxr:\r\n- https://github.com/dofuuz/python-soxr/issues/28", "While testing locally, I have also discovered that pytorch does not support Numpy 2.0 on Windows platforms:\r\n- https://github.com/pytorch/pytorch/issues/128860", "I am adding Numpy 2.0 tests to your PR if you don't mind, before merging this PR.", "Awesome, thank you! Please let me know if I need to do anything.", "Now we test numpy 2.0 in the `test_py310_numpy2` CI tests: https://github.com/huggingface/datasets/actions/runs/9907254874/job/27370545495?pr=6991\r\n```\r\n + numpy==2.0.0\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005709 / 0.011353 (-0.005643) | 0.003947 / 0.011008 (-0.007061) | 0.064407 / 0.038508 (0.025899) | 0.029903 / 0.023109 (0.006794) | 0.244838 / 0.275898 (-0.031060) | 0.268894 / 0.323480 (-0.054586) | 0.003200 / 0.007986 (-0.004786) | 0.002867 / 0.004328 (-0.001461) | 0.050016 / 0.004250 (0.045765) | 0.047682 / 0.037052 (0.010629) | 0.252186 / 0.258489 (-0.006303) | 0.292050 / 0.293841 (-0.001791) | 0.030277 / 0.128546 (-0.098270) | 0.012283 / 0.075646 (-0.063364) | 0.205875 / 0.419271 (-0.213397) | 0.037202 / 0.043533 (-0.006331) | 0.246045 / 0.255139 (-0.009094) | 0.272422 / 0.283200 (-0.010777) | 0.020572 / 0.141683 (-0.121111) | 1.114343 / 1.452155 (-0.337812) | 1.169909 / 1.492716 (-0.322808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096612 / 0.018006 (0.078605) | 0.303025 / 0.000490 (0.302535) | 0.000210 / 0.000200 (0.000010) | 0.000043 / 0.000054 
(-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019292 / 0.037411 (-0.018119) | 0.062548 / 0.014526 (0.048023) | 0.076027 / 0.176557 (-0.100530) | 0.121752 / 0.737135 (-0.615383) | 0.076608 / 0.296338 (-0.219730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283900 / 0.215209 (0.068691) | 2.829829 / 2.077655 (0.752174) | 1.428934 / 1.504120 (-0.075186) | 1.316796 / 1.541195 (-0.224399) | 1.330012 / 1.468490 (-0.138478) | 0.702245 / 4.584777 (-3.882532) | 2.380454 / 3.745712 (-1.365259) | 2.882881 / 5.269862 (-2.386980) | 1.920345 / 4.565676 (-2.645332) | 0.077860 / 0.424275 (-0.346415) | 0.005295 / 0.007607 (-0.002312) | 0.336968 / 0.226044 (0.110924) | 3.327808 / 2.268929 (1.058879) | 1.781958 / 55.444624 (-53.662666) | 1.489412 / 6.876477 (-5.387065) | 1.634829 / 2.142072 (-0.507243) | 0.787985 / 4.805227 (-4.017243) | 0.134397 / 6.500664 (-6.366267) | 0.042906 / 0.075469 (-0.032563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967647 / 1.841788 (-0.874141) | 11.714541 / 8.074308 (3.640233) | 9.350228 / 10.191392 (-0.841164) | 0.142675 / 0.680424 (-0.537749) | 0.014609 / 0.534201 (-0.519592) | 0.301970 / 0.579283 (-0.277314) | 0.262350 / 0.434364 (-0.172014) | 0.342933 / 0.540337 (-0.197404) | 0.437321 / 1.386936 (-0.949615) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005622 / 0.011353 (-0.005731) | 0.003958 / 0.011008 (-0.007050) | 0.050667 / 0.038508 (0.012159) | 0.032842 / 0.023109 (0.009733) | 0.252292 / 0.275898 (-0.023606) | 0.280602 / 0.323480 (-0.042878) | 0.004313 / 0.007986 (-0.003673) | 0.002870 / 0.004328 (-0.001458) | 0.049549 / 0.004250 (0.045299) | 0.040448 / 0.037052 (0.003396) | 0.270264 / 0.258489 (0.011775) | 0.302988 / 0.293841 (0.009147) | 0.030840 / 0.128546 (-0.097707) | 0.012131 / 0.075646 (-0.063515) | 0.060061 / 0.419271 (-0.359211) | 0.033025 / 0.043533 (-0.010507) | 0.251909 / 0.255139 (-0.003230) | 0.275511 / 0.283200 (-0.007689) | 0.018399 / 0.141683 (-0.123284) | 1.160744 / 1.452155 (-0.291411) | 1.188265 / 1.492716 (-0.304452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097719 / 0.018006 (0.079712) | 0.304389 / 0.000490 (0.303899) | 0.000217 / 0.000200 (0.000017) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022964 / 0.037411 (-0.014447) | 0.076897 / 0.014526 (0.062372) | 0.088930 / 0.176557 (-0.087626) | 0.128926 / 0.737135 (-0.608209) | 0.091049 / 0.296338 (-0.205290) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285670 / 0.215209 (0.070461) | 2.806071 / 2.077655 (0.728416) | 1.527161 / 1.504120 (0.023041) | 1.410291 / 1.541195 (-0.130903) | 1.427071 / 1.468490 (-0.041419) | 0.705527 / 4.584777 (-3.879250) | 0.926915 / 3.745712 (-2.818797) | 2.893078 / 5.269862 (-2.376784) | 1.907113 / 4.565676 (-2.658564) | 0.077326 / 0.424275 (-0.346949) | 0.005182 / 0.007607 (-0.002425) | 0.332282 / 0.226044 (0.106237) | 3.312889 / 2.268929 (1.043960) | 1.853839 / 55.444624 (-53.590785) | 1.592013 / 6.876477 (-5.284464) | 1.620234 / 2.142072 (-0.521838) | 0.776894 / 4.805227 (-4.028333) | 0.132411 / 6.500664 (-6.368253) | 0.041430 / 0.075469 (-0.034039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.003468 / 1.841788 (-0.838320) | 12.472251 / 8.074308 (4.397943) | 10.603243 / 10.191392 (0.411851) | 0.132561 / 0.680424 (-0.547863) | 0.015790 / 0.534201 (-0.518411) | 0.306724 / 0.579283 (-0.272559) | 0.125812 / 0.434364 (-0.308552) | 0.343782 / 0.540337 (-0.196555) | 0.445915 / 1.386936 (-0.941021) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dfc2b1b14ab8f32730d2bc36c8016ecefbcbabd1 \"CML watermark\")\n" ]
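A minimal sketch of the version-dependent test selection discussed in the thread above (an illustrative marker, not the repository's actual test code; the pinned libraries named in the reason are taken from the comments):

```python
# Skip tests whose optional dependencies are still pinned to NumPy 1.x,
# so the rest of the suite can run under NumPy 2.0.
import numpy as np
import pytest
from packaging import version

require_numpy1 = pytest.mark.skipif(
    version.parse(np.__version__) >= version.parse("2.0.0"),
    reason="faiss-cpu / tensorflow builds in CI still require numpy<2.0",
)

@require_numpy1
def test_tf_formatting():
    import tensorflow as tf  # imported lazily so collection works under NumPy 2.0
    ...
```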
Unblock NumPy 2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6991/reactions" }
PR_kwDODunzps5zPoQs
{ "diff_url": "https://github.com/huggingface/datasets/pull/6991.diff", "html_url": "https://github.com/huggingface/datasets/pull/6991", "merged_at": "2024-07-12T12:04:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/6991.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6991" }
2024-06-22T09:19:53Z
https://api.github.com/repos/huggingface/datasets/issues/6991/comments
Fixes https://github.com/huggingface/datasets/issues/6980
{ "avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4", "events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}", "followers_url": "https://api.github.com/users/NeilGirdhar/followers", "following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}", "gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NeilGirdhar", "id": 730137, "login": "NeilGirdhar", "node_id": "MDQ6VXNlcjczMDEzNw==", "organizations_url": "https://api.github.com/users/NeilGirdhar/orgs", "received_events_url": "https://api.github.com/users/NeilGirdhar/received_events", "repos_url": "https://api.github.com/users/NeilGirdhar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions", "type": "User", "url": "https://api.github.com/users/NeilGirdhar" }
https://api.github.com/repos/huggingface/datasets/issues/6991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6991/timeline
closed
false
6,991
null
2024-07-12T12:04:53Z
null
true
2,366,660,785
https://api.github.com/repos/huggingface/datasets/issues/6990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6990/events
[]
null
2024-06-25T16:19:19Z
[]
https://github.com/huggingface/datasets/issues/6990
CONTRIBUTOR
completed
null
null
[ "ah yes good catch ! feel free to open a PR with your suggested fix" ]
Problematic rank after calling `split_dataset_by_node` twice
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6990/reactions" }
I_kwDODunzps6NEGCx
null
2024-06-21T14:25:26Z
https://api.github.com/repos/huggingface/datasets/issues/6990/comments
### Describe the bug I'm trying to split an `IterableDataset` with `split_dataset_by_node`. But when splitting an already split dataset, the resulting `rank` is greater than `world_size`. ### Steps to reproduce the bug Here is the minimal code for reproduction: ```py >>> from datasets import load_dataset >>> from datasets.distributed import split_dataset_by_node >>> dataset = load_dataset('fla-hub/slimpajama-test', split='train', streaming=True) >>> dataset = split_dataset_by_node(dataset, 1, 32) >>> dataset._distributed DistributedConfig(rank=1, world_size=32) >>> dataset = split_dataset_by_node(dataset, 1, 15) >>> dataset._distributed DistributedConfig(rank=481, world_size=480) ``` As you can see, the second call yields rank 481 > 480, which is problematic. ### Expected behavior I think this error comes from this line @lhoestq https://github.com/huggingface/datasets/blob/a6ccf944e42c1a84de81bf326accab9999b86c90/src/datasets/iterable_dataset.py#L2943-L2944 We may need to obtain the rank first. Then the above code gives ```py >>> dataset._distributed DistributedConfig(rank=16, world_size=480) ``` ### Environment info datasets==2.20.0
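The fix suggested in the report can be sketched as follows (a hypothetical standalone helper mirroring the two linked lines, not the library's actual code): the combined rank must be derived from the new world size before the world size itself is scaled.

```python
def combine_split(old_rank: int, old_world_size: int, rank: int, world_size: int):
    # Derive the combined rank from the *new* world_size first; deriving it
    # from the already-multiplied world size is what produced rank 481 > 480.
    combined_rank = old_rank * world_size + rank        # 1 * 15 + 1 = 16
    combined_world_size = old_world_size * world_size   # 32 * 15 = 480
    return combined_rank, combined_world_size
```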
{ "avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4", "events_url": "https://api.github.com/users/yzhangcs/events{/privacy}", "followers_url": "https://api.github.com/users/yzhangcs/followers", "following_url": "https://api.github.com/users/yzhangcs/following{/other_user}", "gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yzhangcs", "id": 18402347, "login": "yzhangcs", "node_id": "MDQ6VXNlcjE4NDAyMzQ3", "organizations_url": "https://api.github.com/users/yzhangcs/orgs", "received_events_url": "https://api.github.com/users/yzhangcs/received_events", "repos_url": "https://api.github.com/users/yzhangcs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions", "type": "User", "url": "https://api.github.com/users/yzhangcs" }
https://api.github.com/repos/huggingface/datasets/issues/6990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6990/timeline
closed
false
6,990
null
2024-06-25T16:19:19Z
null
false
2,365,556,449
https://api.github.com/repos/huggingface/datasets/issues/6989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6989/events
[]
null
2024-06-21T02:12:55Z
[]
https://github.com/huggingface/datasets/issues/6989
NONE
null
null
null
[]
cache in nfs error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6989/reactions" }
I_kwDODunzps6M_4bh
null
2024-06-21T02:09:22Z
https://api.github.com/repos/huggingface/datasets/issues/6989/comments
### Describe the bug - When reading a dataset, a cache is generated in the ~/.cache/huggingface/datasets directory - When using .map and .filter operations, a runtime cache is generated in the /tmp/hf_datasets-* directory - The default is to use the path of tempfile.tempdir - If I change this path to an NFS disk, an error is reported, but the program continues to run - https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L257 ``` Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs000000038330a012000030b4' Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs0000000400064d4a000030e5' ``` ### Steps to reproduce the bug ``` import os import time import tempfile from datasets import load_dataset def add_column(sample): # print(type(sample)) # time.sleep(0.1) sample['__ds__stats__'] = {'data': 123} return sample def filt_column(sample): # print(type(sample)) if len(sample['content']) > 10: return True else: return False if __name__ == '__main__': input_dir = '/mnt/temp/CN/small' # some json dataset dataset = load_dataset('json', data_dir=input_dir) temp_dir = '/media/release/release/temp/temp' # an NFS folder os.makedirs(temp_dir, exist_ok=True) # change the huggingface-datasets runtime cache to NFS (default is /tmp) tempfile.tempdir = temp_dir aa = dataset.map(add_column, num_proc=64) aa = aa.filter(filt_column, num_proc=64) print(aa) ``` ### Expected behavior no error occurs ### Environment info datasets==2.18.0 ubuntu 20.04
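A minimal sketch of the workaround implied by the report above: keep the runtime temp files on a local disk instead of NFS (all paths are placeholders):

```python
import os
import tempfile

local_tmp = "/local-scratch/hf_tmp"  # a local filesystem, not NFS
os.makedirs(local_tmp, exist_ok=True)
tempfile.tempdir = local_tmp  # datasets derives its runtime temp dir from this path

from datasets import load_dataset

dataset = load_dataset("json", data_dir="/mnt/temp/CN/small")
dataset = dataset.map(lambda s: {**s, "__ds__stats__": {"data": 123}}, num_proc=8)
```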
{ "avatar_url": "https://avatars.githubusercontent.com/u/66729924?v=4", "events_url": "https://api.github.com/users/simplew2011/events{/privacy}", "followers_url": "https://api.github.com/users/simplew2011/followers", "following_url": "https://api.github.com/users/simplew2011/following{/other_user}", "gists_url": "https://api.github.com/users/simplew2011/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simplew2011", "id": 66729924, "login": "simplew2011", "node_id": "MDQ6VXNlcjY2NzI5OTI0", "organizations_url": "https://api.github.com/users/simplew2011/orgs", "received_events_url": "https://api.github.com/users/simplew2011/received_events", "repos_url": "https://api.github.com/users/simplew2011/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simplew2011/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simplew2011/subscriptions", "type": "User", "url": "https://api.github.com/users/simplew2011" }
https://api.github.com/repos/huggingface/datasets/issues/6989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6989/timeline
open
false
6,989
null
null
null
false
2,364,129,918
https://api.github.com/repos/huggingface/datasets/issues/6988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6988/events
[]
null
2024-06-21T16:04:58Z
[]
https://github.com/huggingface/datasets/pull/6988
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6988). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "`Dataset` objects are not made to be subclassed, so I don't think going in that direction is a good idea. In particular there is absolutely no test to make sure it works well, and nothing in the internal has been made to anticipate this use case.\r\n\r\nI'd suggest to use a separate function to push changes to the Dataset card, and call it after `push_to_hub()`. This way people can also use a similar logic with other tools that `datasets`. You can also use composition instead of subclassing.", "Would you consider an alternative where a Dataset instance carries a dataset card template which can be updated?\n\nI don't want to burden my users with having to call another method after `push_to_hub` themselves. If you're not a fan of the template approach above either, then I'll likely subclass `push_to_hub` to once again download the just-uploaded-but-empty dataset card, update it, and reupload it. It'll just be a bit more requests than necessary, but not a big deal overall.\n\n- Tom Aarsen ", "Actually I find the idea of overriding `_create_dataset_card` better than implementing a templating logic. My main concern is that if we go in that direction we better make sure that subclasses of `Dataset` are working well. \r\n\r\nWell if it's been working fine on your side why not, but make sure you test correctly features that could not work because of subclassing (e.g. I'm pretty sure `map()` won't return your subclass of `Dataset`). Or at least the ones that matter for your lib.\r\n\r\nIf it sounds good to you I'm fine with merging your addition to let you override the dataset card.", "> e.g. I'm pretty sure map() won't return your subclass of Dataset\r\n\r\nI understand that there's limitations such as this one. The subclass doesn't have to be robust - I'd just like some simple automatic dataset card generation options directly after generating the dataset. This can be removed if the user does additional steps before pushing the model, e.g. mapping, filtering, saving to disk and uploading the loaded dataset, etc.\r\n\r\n> If it sounds good to you I'm fine with merging your addition to let you override the dataset card.\r\n\r\nThat would be quite useful for me! I appreciate it.\r\n\r\nI'm not very sure what the test failures are caused by, I believe the only change in behaviour is that\r\n```python\r\n DatasetInfosDict({config_name: info_to_dump}).to_dataset_card_data(dataset_card_data)\r\n MetadataConfigs({config_name: metadata_config_to_dump}).to_dataset_card_data(dataset_card_data)\r\n```\r\nare not called when `dataset_card` was already defined. Unless these have side-effects other than updating `dataset_card_data`, it shouldn't be any different than `main`.\r\n\r\n- Tom Aarsen", "Let's try to have this PR merged then !\r\n\r\nIMO your current implementation can be improved since you path both the dataset card data and the dataset card itself, which is redundant. 
Also, I anticipate the failures in the CI to come from your default implementation, which doesn't correspond to what the code was doing before\r\n\r\n> Unless these have side-effects other than updating dataset_card_data, it shouldn't be any different than main.\r\n\r\nIndeed, the dataset_card_data is the value of the dataset_card's attribute from a few lines before your changes, so yes it modifies the dataset_card object too." ]
[`feat`] Move dataset card creation to method for easier overriding
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6988/reactions" }
PR_kwDODunzps5zDpXX
{ "diff_url": "https://github.com/huggingface/datasets/pull/6988.diff", "html_url": "https://github.com/huggingface/datasets/pull/6988", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6988.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6988" }
2024-06-20T10:47:57Z
https://api.github.com/repos/huggingface/datasets/issues/6988/comments
Hello! ## Pull Request overview * Move dataset card creation to method for easier overriding ## Details It's common for me to fully automatically download, reformat, and upload a dataset (e.g. see https://huggingface.co/datasets?other=sentence-transformers), but one aspect that I cannot easily automate is the dataset card generation. This is because during `push_to_hub`, the dataset card is created in 3 lines of code in a much larger method. To automatically generate a dataset card, I need to either: 1. Subclass `Dataset`/`DatasetDict`, copy the entire `push_to_hub` method to override the ~3 lines used to generate the dataset card. This is not viable as the method is likely to change over time. 2. Use `push_to_hub` normally, then separately download the pushed (but empty) dataset card, update it, and reupload the modified dataset card. This works fine, but prevents me from being able to return a `Dataset` to my users which will automatically use a nice dataset card. So, in this PR I'm proposing to move the dataset card generation into another method so that it can be overridden more easily. For example, imagine the following use case: ````python import json from typing import Any, Dict, Optional from datasets import Dataset, load_dataset from datasets.info import DatasetInfosDict, DatasetInfo from datasets.utils.metadata import MetadataConfigs from huggingface_hub import DatasetCardData, DatasetCard TEMPLATE = r"""--- {dataset_card_data} --- # Dataset Card for {source_dataset_name} with mined hard negatives This dataset is a collection of {column_one}-{column_two}-negative triplets from the {source_dataset_name} dataset. See [{source_dataset_name}](https://huggingface.co/datasets/{source_dataset_id}) for additional information. This dataset can be used directly with Sentence Transformers to train embedding models. ## Mining Parameters The negative samples have been mined using the following parameters: - `range_min`: {range_min}, i.e. we skip the {range_min} most similar samples - `range_max`: {range_max}, i.e. we only look at the top {range_max} most similar samples - `margin`: {margin}, i.e. we require negative similarity + margin < positive similarity, so negative samples can't be more similar than the known true answer - `sampling_strategy`: {sampling_strategy}, i.e. whether to randomly sample from the candidate negatives or take the "top" negatives - `num_negatives`: {num_negatives}, i.e. 
we mine {num_negatives} negatives per question-answer pair ## Dataset Format - Columns: {column_one}, {column_two}, negative - Column types: str, str, str - Example: ```python {example} ``` """ class HNMDataset(Dataset): @classmethod def from_dict(cls, *args, mining_kwargs: Dict[str, Any], **kwargs) -> "HNMDataset": dataset = super().from_dict(*args, **kwargs) dataset.mining_kwargs = mining_kwargs return dataset def _create_dataset_card( self, dataset_card_data: DatasetCardData, dataset_card: Optional[DatasetCard], config_name: str, info_to_dump: DatasetInfo, metadata_config_to_dump: MetadataConfigs, ) -> DatasetCard: if dataset_card: return dataset_card DatasetInfosDict({config_name: info_to_dump}).to_dataset_card_data(dataset_card_data) MetadataConfigs({config_name: metadata_config_to_dump}).to_dataset_card_data(dataset_card_data) dataset_card_data.tags = ["sentence-transformers"] dataset_name = self.mining_kwargs["source_dataset"].info.dataset_name # Very messy, just as an example: dataset_id = list(self.mining_kwargs["source_dataset"].info.download_checksums.keys())[0].removeprefix("hf://datasets/").split("@")[0] content = TEMPLATE.format(**{ "dataset_card_data": str(dataset_card_data), "source_dataset_name": dataset_name, "source_dataset_id": dataset_id, "range_min": self.mining_kwargs["range_min"], "range_max": self.mining_kwargs["range_max"], "margin": self.mining_kwargs["margin"], "sampling_strategy": self.mining_kwargs["sampling_strategy"], "num_negatives": self.mining_kwargs["num_negatives"], "column_one": self.column_names[0], "column_two": self.column_names[1], "example": json.dumps(self[0], indent=4), }) return DatasetCard(content) source_dataset = load_dataset("sentence-transformers/gooaq", split="train[:100]") dataset = HNMDataset.from_dict({ "query": source_dataset["question"], "answer": source_dataset["answer"], # "negative": ... <- In my case, this column would be 'mined' automatically with these parameters }, mining_kwargs={ "range_min": 10, "range_max": 20, "max_score": 0.9, "margin": 0.1, "sampling_strategy": "random", "num_negatives": 3, "source_dataset": source_dataset, }) dataset.push_to_hub("tomaarsen/mining_demo", private=True) ```` In this script, I've created a subclass which stores some additional information about how the dataset was generated. It's a bit hacky (e.g. setting a `mining_kwargs` parameter in `from_dict` that wasn't created in `__init__`, but that's just a consequence of how the `from_...` methods don't accept kwargs), but it allows me to create a "hard negatives mining" function that returns a dataset which people can use locally like normal, but if they choose to upload it, then it'll automatically include some information, e.g.: https://huggingface.co/datasets/tomaarsen/mining_demo This allows others to actually find this dataset (e.g. via the `sentence-transformers` tag) and get an idea of the quality, source, etc. by looking at the model card. ## Note I'm not fixed on this solution whatsoever: I am also completely fine with other solutions, e.g. a `dataset.set_dataset_card_creator` method that allows me to provide a function without even having to subclass anything. I'm open to all ideas :) cc @albertvillanova @lhoestq cc @LysandreJik - Tom Aarsen
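Option 2 from the description above ("push, then patch the card") can be sketched like this (the repo id is a placeholder and the appended section is illustrative):

```python
from huggingface_hub import DatasetCard

repo_id = "user/mining_demo"  # placeholder
dataset.push_to_hub(repo_id, private=True)

# Download the auto-generated card, extend it, and re-upload it.
card = DatasetCard.load(repo_id)
card.data.tags = (card.data.tags or []) + ["sentence-transformers"]
card.text += "\n## Mining Parameters\n\n- `range_min`: 10\n- `range_max`: 20\n"
card.push_to_hub(repo_id)
```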
{ "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tomaarsen", "id": 37621491, "login": "tomaarsen", "node_id": "MDQ6VXNlcjM3NjIxNDkx", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "repos_url": "https://api.github.com/users/tomaarsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "type": "User", "url": "https://api.github.com/users/tomaarsen" }
https://api.github.com/repos/huggingface/datasets/issues/6988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6988/timeline
open
false
6,988
null
null
null
true
2,363,728,190
https://api.github.com/repos/huggingface/datasets/issues/6987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6987/events
[]
null
2024-06-26T19:41:55Z
[]
https://github.com/huggingface/datasets/pull/6987
MEMBER
null
false
{ "closed_at": null, "closed_issues": 3, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 5, "state": "open", "title": "3.0", "updated_at": "2024-06-28T06:51:30Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6987). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005931 / 0.011353 (-0.005422) | 0.004127 / 0.011008 (-0.006881) | 0.063854 / 0.038508 (0.025346) | 0.034687 / 0.023109 (0.011577) | 0.251397 / 0.275898 (-0.024501) | 0.280348 / 0.323480 (-0.043132) | 0.005008 / 0.007986 (-0.002977) | 0.002930 / 0.004328 (-0.001398) | 0.050703 / 0.004250 (0.046452) | 0.047109 / 0.037052 (0.010057) | 0.258525 / 0.258489 (0.000035) | 0.288759 / 0.293841 (-0.005081) | 0.030547 / 0.128546 (-0.097999) | 0.102184 / 0.075646 (0.026537) | 0.207934 / 0.419271 (-0.211338) | 0.036477 / 0.043533 (-0.007056) | 0.338160 / 0.255139 (0.083021) | 0.310735 / 0.283200 (0.027535) | 0.018637 / 0.141683 (-0.123045) | 1.228539 / 1.452155 (-0.223616) | 1.168004 / 1.492716 (-0.324713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098355 / 0.018006 (0.080348) | 0.302310 / 0.000490 (0.301820) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019607 / 0.037411 (-0.017804) | 0.063795 / 0.014526 (0.049269) | 0.075029 / 0.176557 (-0.101528) | 0.121293 / 0.737135 (-0.615842) | 0.076480 / 0.296338 (-0.219858) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285285 / 0.215209 (0.070076) | 2.747455 / 2.077655 (0.669801) | 1.454190 / 1.504120 (-0.049929) | 1.330777 / 1.541195 (-0.210418) | 1.358292 / 1.468490 (-0.110198) | 0.724991 / 4.584777 (-3.859786) | 2.374889 / 3.745712 (-1.370823) | 2.985868 / 5.269862 (-2.283994) | 1.921521 / 4.565676 (-2.644156) | 0.078589 / 0.424275 (-0.345686) | 0.005104 / 0.007607 (-0.002503) | 0.333898 / 0.226044 (0.107853) | 3.317702 / 2.268929 (1.048773) | 1.887161 / 55.444624 (-53.557463) | 1.510700 / 6.876477 (-5.365777) | 1.544175 / 2.142072 (-0.597898) | 0.804262 / 4.805227 (-4.000965) | 0.134015 / 6.500664 (-6.366649) | 0.042819 / 0.075469 (-0.032650) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012142 / 1.841788 (-0.829645) | 11.861780 / 8.074308 (3.787472) | 9.797285 / 10.191392 (-0.394107) | 0.142114 / 0.680424 (-0.538310) | 0.013984 / 0.534201 (-0.520217) | 0.302412 / 0.579283 (-0.276871) | 0.265060 / 0.434364 (-0.169304) | 0.337510 / 0.540337 (-0.202828) | 0.432197 / 1.386936 (-0.954739) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.003991 / 0.011008 (-0.007017) | 0.049874 / 0.038508 (0.011366) | 0.033771 / 0.023109 (0.010662) | 0.264789 / 0.275898 (-0.011109) | 0.287554 / 0.323480 (-0.035926) | 0.004341 / 0.007986 (-0.003644) | 0.002888 / 0.004328 (-0.001441) | 0.049383 / 0.004250 (0.045133) | 0.040757 / 0.037052 (0.003704) | 0.286067 / 0.258489 (0.027578) | 0.311105 / 0.293841 (0.017264) | 0.031482 / 0.128546 (-0.097064) | 0.012358 / 0.075646 (-0.063288) | 0.060298 / 0.419271 (-0.358973) | 0.033237 / 0.043533 (-0.010296) | 0.265804 / 0.255139 (0.010665) | 0.281273 / 0.283200 (-0.001927) | 0.017879 / 0.141683 (-0.123804) | 1.154059 / 1.452155 (-0.298096) | 1.156758 / 1.492716 (-0.335958) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.004677 / 0.018006 (-0.013329) | 0.300768 / 0.000490 (0.300278) | 0.000212 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023032 / 0.037411 (-0.014379) | 0.077498 / 0.014526 (0.062973) | 0.089134 / 0.176557 (-0.087422) | 0.129691 / 0.737135 (-0.607444) | 0.091372 / 0.296338 (-0.204967) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290823 / 0.215209 (0.075613) | 2.873159 / 2.077655 (0.795504) | 1.563361 / 1.504120 (0.059241) | 1.447048 / 1.541195 (-0.094147) | 1.490473 / 1.468490 (0.021983) | 0.715642 / 4.584777 (-3.869135) | 0.996223 / 3.745712 (-2.749489) | 2.861466 / 5.269862 (-2.408396) | 1.915581 / 4.565676 (-2.650096) | 0.077892 / 0.424275 (-0.346383) | 0.005463 / 0.007607 (-0.002144) | 0.339670 / 0.226044 (0.113626) | 3.412830 / 2.268929 (1.143902) | 1.908676 / 55.444624 (-53.535949) | 1.625358 / 6.876477 (-5.251119) | 1.769437 / 2.142072 (-0.372635) | 0.792505 / 4.805227 (-4.012722) | 0.133007 / 6.500664 (-6.367657) | 0.041305 / 0.075469 (-0.034164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986882 / 1.841788 (-0.854905) | 12.368101 / 8.074308 (4.293793) | 10.367439 / 10.191392 (0.176047) | 0.141248 / 0.680424 (-0.539176) | 0.016144 / 0.534201 (-0.518057) | 0.300962 / 0.579283 (-0.278321) | 0.126863 / 0.434364 (-0.307501) | 0.341107 / 0.540337 (-0.199230) | 0.439819 / 1.386936 (-0.947117) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b2754625d45e153bd9758af40e65e7545321fc2a \"CML watermark\")\n" ]
Remove beam
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6987/reactions" }
PR_kwDODunzps5zCRH6
{ "diff_url": "https://github.com/huggingface/datasets/pull/6987.diff", "html_url": "https://github.com/huggingface/datasets/pull/6987", "merged_at": "2024-06-26T19:35:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/6987.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6987" }
2024-06-20T07:27:14Z
https://api.github.com/repos/huggingface/datasets/issues/6987/comments
Remove beam, as part of the 3.0 release.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6987/timeline
closed
false
6,987
null
2024-06-26T19:35:42Z
null
true
2,362,584,179
https://api.github.com/repos/huggingface/datasets/issues/6986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6986/events
[]
null
2024-08-12T14:43:48Z
[]
https://github.com/huggingface/datasets/pull/6986
NONE
null
false
null
[ "@albertvillanova @KennethEnevoldsen" ]
Add large_list type support in string_to_arrow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6986/reactions" }
PR_kwDODunzps5y-Zi0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6986.diff", "html_url": "https://github.com/huggingface/datasets/pull/6986", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6986.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6986" }
2024-06-19T14:54:25Z
https://api.github.com/repos/huggingface/datasets/issues/6986/comments
Add large_list type support in string_to_arrow() and _arrow_to_datasets_dtype() in features.py. Fixes #6984. A short illustration of the Arrow type involved is sketched below.
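For context, the Arrow type in question can be inspected with the public pyarrow API alone. This is only an illustration of the dtype string the PR must parse, not of the internals of `string_to_arrow()` or `_arrow_to_datasets_dtype()`:

```python
import pyarrow as pa

t = pa.large_list(pa.int64())
print(str(t))                     # "large_list<item: int64>" -- the string that previously failed to parse
print(pa.types.is_large_list(t))  # True
print(t.value_type)               # int64 -- the inner type the converters must recover
```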
{ "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arthasking123", "id": 16257131, "login": "arthasking123", "node_id": "MDQ6VXNlcjE2MjU3MTMx", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "repos_url": "https://api.github.com/users/arthasking123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "type": "User", "url": "https://api.github.com/users/arthasking123" }
https://api.github.com/repos/huggingface/datasets/issues/6986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6986/timeline
closed
false
6,986
null
2024-08-12T14:43:47Z
null
true
2,362,378,276
https://api.github.com/repos/huggingface/datasets/issues/6985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6985/events
[]
null
2024-08-01T03:35:02Z
[]
https://github.com/huggingface/datasets/issues/6985
NONE
completed
null
null
[ "Please note that the error is raised just at import:\r\n```python\r\nimport pyarrow.parquet as pq\r\n```\r\n\r\nTherefore it must be caused by some problem with your pyarrow installation. I would recommend you uninstall and install pyarrow again.\r\n\r\nI also see that it seems you use conda to install pyarrow. Please note that pyarrow offers 3 different packages in conda-forge: https://arrow.apache.org/docs/python/install.html#using-conda\r\n```\r\nconda install -c conda-forge pyarrow\r\n```\r\n> While the pyarrow [conda-forge](https://conda-forge.org/) package is the right choice for most users, both a minimal and maximal variant of the package exist, either of which may be better for your use case. See [Differences between conda-forge packages](https://arrow.apache.org/docs/python/install.html#python-conda-differences).\r\n\r\nPlease, make sure you install the right one: I guess it is either `pyarrow` (or `pyarrow-all`).", "I have same issue, please downgrade pyarrow==15.0.2, it seem datasets library need to be fix", "It is not a problem with the `datasets` library: we support latest version of `pyarrow` and our Continuous Integration tests are using pyarrow 16.1.0 without any problem.\r\n\r\nThe error reported here is raised when importing pyarrow.parquet:\r\n```\r\n---> 29 import pyarrow.parquet as pq\r\n```\r\n```\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20\r\n 1 # Licensed to the Apache Software Foundation (ASF) under one\r\n 2 # or more contributor license agreements. See the NOTICE file\r\n 3 # distributed with this work for additional information\r\n (...)\r\n 17 \r\n 18 # flake8: noqa\r\n---> 20 from .core import *\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33\r\n 30 import pyarrow as pa\r\n 32 try:\r\n---> 33 import pyarrow._parquet as _parquet\r\n 34 except ImportError as exc:\r\n 35 raise ImportError(\r\n 36 \"The pyarrow installation is not built with support \"\r\n 37 f\"for the Parquet file format ({str(exc)})\"\r\n 38 ) from None\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet()\r\n\r\nAttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'\r\n```\r\n\r\nThis can only be explained if pyarrow was not properly installed. \r\n\r\nIf the user just installed `pyarrow-core` from conda-forge, then its parquet subpackage is not installed and cannot be imported. You can check pyarrow docs:\r\n- Differences between conda-forge packages: https://arrow.apache.org/docs/python/install.html#python-conda-differences\r\n> The `pyarrow-core` package includes the following functionality:\r\n> ...\r\n> The `pyarrow` package adds the following:\r\n> ...\r\n> Parquet (i.e., `pyarrow.parquet`)", "I'm still seeing the same issue on datasets version 2.20.0. I installed pyarrow version 17.0.0 with `pip install`. Downgrading to pyarrow==15.0.2 also did not resolve the issue.", "@RenaLu As of UTC time 07/27/2024 23:20:00, I hit the same issue and reinstalling `pyarrow==15.0.2` resolved the issue for me. You may want to check if your `pyarrow` is successfully downgraded.", "I can confirm @albertvillanova's [analysis & suggestion](https://github.com/huggingface/datasets/issues/6985#issuecomment-2188022888) - `pip uninstall pyarrow` followed by `pip install pyarrow` solved it for me. 
\r\n\r\nI suspect this is because pyarrow was initially installed as a pandas extra `pandas[...,parquet,...]`, then pip-upgrading pyarrow resulted in the issue.\r\n\r\n@RenaLu did you uninstall pyarrow between changing versions?", "After trying all the above combinations and failing, running the following in the notebook fixed the error!!\r\n`!conda install -c conda-forge -y datasets pyarrow libparquet`\r\nNote : Uninstall any existing dataset and pyarrow installations in the env before executing the above." ]
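A quick sanity check along the lines of the advice in the comments above, as a sketch only: on a consistent installation, both imports succeed and a version is printed.

```python
import pyarrow as pa

print("pyarrow", pa.__version__)

# This import is exactly where the reported AttributeError surfaces when
# pyarrow.lib and pyarrow._parquet come from mismatched builds:
import pyarrow.parquet as pq

print("parquet support OK:", pq.ParquetFile is not None)
```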
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6985/reactions" }
I_kwDODunzps6Mzwgk
null
2024-06-19T13:22:28Z
https://api.github.com/repos/huggingface/datasets/issues/6985/comments
### Describe the bug I have been struggling with this for two days, any help would be appreciated. Python 3.10 ``` from setfit import SetFitModel from huggingface_hub import login access_token_read = "cccxxxccc" # Authenticate with the Hugging Face Hub login(token=access_token_read) # Load the models from the Hugging Face Hub trainer_relv = SetFitModel.from_pretrained("snowdere/trainer_relevance") trainer_trust = SetFitModel.from_pretrained("snowdere/trainer_trust") trainer_sent = SetFitModel.from_pretrained("snowdere/trainer_sent") trainer_topic = SetFitModel.from_pretrained("snowdere/trainer_topic") ``` ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 from setfit import SetFitModel 2 from huggingface_hub import login 4 access_token_read = "ccsddsds" File /opt/conda/lib/python3.10/site-packages/setfit/__init__.py:7 4 import os 5 import warnings ----> 7 from .data import get_templated_dataset, sample_dataset 8 from .model_card import SetFitModelCardData 9 from .modeling import SetFitHead, SetFitModel File /opt/conda/lib/python3.10/site-packages/setfit/data.py:5 3 import pandas as pd 4 import torch ----> 5 from datasets import Dataset, DatasetDict, load_dataset 6 from torch.utils.data import Dataset as TorchDataset 8 from . import logging File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18 1 # ruff: noqa 2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. 3 # (...) 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 16 __version__ = "2.19.0" ---> 18 from .arrow_dataset import Dataset 19 from .arrow_reader import ReadInstruction 20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:76 73 from tqdm.contrib.concurrent import thread_map 75 from . import config ---> 76 from .arrow_reader import ArrowReader 77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 78 from .data_files import sanitize_patterns File /opt/conda/lib/python3.10/site-packages/datasets/arrow_reader.py:29 26 from typing import TYPE_CHECKING, List, Optional, Union 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 32 from .download.download_config import DownloadConfig File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20 1 # Licensed to the Apache Software Foundation (ASF) under one 2 # or more contributor license agreements. See the NOTICE file 3 # distributed with this work for additional information (...) 
17 18 # flake8: noqa ---> 20 from .core import * File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33 30 import pyarrow as pa 32 try: ---> 33 import pyarrow._parquet as _parquet 34 except ImportError as exc: 35 raise ImportError( 36 "The pyarrow installation is not built with support " 37 f"for the Parquet file format ({str(exc)})" 38 ) from None File /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet() AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' ``` setfit: 1.0.3 transformers: 4.41.2 lingua-language-detector: 2.0.2 polars: 0.20.31 lightning: None google-cloud-bigquery: 3.24.0 shapely: 2.0.4 pyarrow: 16.0.0 ### Steps to reproduce the bug I have tried all version combinations of datasets and pyarrow; they all have the same error since a few days ago. This happens across multiple scripts I have. ### Expected behavior Just run normally. ### Environment info 3.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/26666267?v=4", "events_url": "https://api.github.com/users/firmai/events{/privacy}", "followers_url": "https://api.github.com/users/firmai/followers", "following_url": "https://api.github.com/users/firmai/following{/other_user}", "gists_url": "https://api.github.com/users/firmai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/firmai", "id": 26666267, "login": "firmai", "node_id": "MDQ6VXNlcjI2NjY2MjY3", "organizations_url": "https://api.github.com/users/firmai/orgs", "received_events_url": "https://api.github.com/users/firmai/received_events", "repos_url": "https://api.github.com/users/firmai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/firmai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/firmai/subscriptions", "type": "User", "url": "https://api.github.com/users/firmai" }
https://api.github.com/repos/huggingface/datasets/issues/6985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6985/timeline
closed
false
6,985
null
2024-06-25T05:40:51Z
null
false
2,362,143,554
https://api.github.com/repos/huggingface/datasets/issues/6984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6984/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-12T14:43:46Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6984
NONE
completed
null
null
[ "Hi ! Thanks for reporting :)\r\n\r\nWe don't support `large_list` yet, though it should be added to `Sequence` IMO (maybe with a parameter `large=True` ?)" ]
Convert polars DataFrame back to datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6984/reactions" }
I_kwDODunzps6My3NC
null
2024-06-19T11:38:48Z
https://api.github.com/repos/huggingface/datasets/issues/6984/comments
### Feature request

This returns an error:

```python
from datasets import Dataset

dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```

ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.

### Motivation

When a dataset contains a Sequence data type, it is converted to the Arrow type large_list on the way to polars. However, the reverse (from large_list back to Sequence) does not work. A hedged workaround is sketched below.

### Your contribution

No
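One possible workaround until native support lands. This is illustrative only: the `shrink_large_lists` helper is not part of the `datasets` API, and it assumes the large_list columns are small enough to fit within plain list offsets.

```python
import pyarrow as pa
from datasets import Dataset

def shrink_large_lists(table: pa.Table) -> pa.Table:
    # Cast every top-level large_list<T> column back to list<T> so that
    # datasets' feature inference can handle the table again.
    fields = [
        pa.field(f.name, pa.list_(f.type.value_type)) if pa.types.is_large_list(f.type) else f
        for f in table.schema
    ]
    return table.cast(pa.schema(fields))

dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
round_tripped = Dataset(shrink_large_lists(dsdf.to_polars().to_arrow()))
print(round_tripped.features)
```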
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420" }
https://api.github.com/repos/huggingface/datasets/issues/6984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6984/timeline
closed
false
6,984
null
2024-08-12T14:43:46Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,361,806,201
https://api.github.com/repos/huggingface/datasets/issues/6983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6983/events
[]
null
2024-06-28T06:57:38Z
[]
https://github.com/huggingface/datasets/pull/6983
MEMBER
null
false
{ "closed_at": null, "closed_issues": 3, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 5, "state": "open", "title": "3.0", "updated_at": "2024-06-28T06:51:30Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6983). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005566 / 0.011353 (-0.005787) | 0.003977 / 0.011008 (-0.007031) | 0.063250 / 0.038508 (0.024742) | 0.030907 / 0.023109 (0.007798) | 0.244989 / 0.275898 (-0.030909) | 0.272139 / 0.323480 (-0.051341) | 0.004332 / 0.007986 (-0.003653) | 0.002960 / 0.004328 (-0.001368) | 0.050147 / 0.004250 (0.045896) | 0.044740 / 0.037052 (0.007688) | 0.256947 / 0.258489 (-0.001542) | 0.290372 / 0.293841 (-0.003469) | 0.030444 / 0.128546 (-0.098102) | 0.012675 / 0.075646 (-0.062971) | 0.203852 / 0.419271 (-0.215420) | 0.036977 / 0.043533 (-0.006556) | 0.244401 / 0.255139 (-0.010738) | 0.270020 / 0.283200 (-0.013179) | 0.018177 / 0.141683 (-0.123506) | 1.122189 / 1.452155 (-0.329966) | 1.176688 / 1.492716 (-0.316028) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100721 / 0.018006 (0.082715) | 0.311824 / 0.000490 (0.311335) | 0.000222 / 0.000200 (0.000022) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020039 / 0.037411 (-0.017373) | 0.062084 / 0.014526 (0.047558) | 0.074317 / 0.176557 (-0.102240) | 0.123935 / 0.737135 (-0.613200) | 0.076186 / 0.296338 (-0.220153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284827 / 0.215209 (0.069618) | 2.782727 / 2.077655 (0.705072) | 1.417624 / 1.504120 (-0.086496) | 1.294476 / 1.541195 (-0.246718) | 1.332658 / 1.468490 (-0.135832) | 0.724820 / 4.584777 (-3.859957) | 2.384546 / 3.745712 (-1.361166) | 2.866759 / 5.269862 (-2.403103) | 1.930756 / 4.565676 (-2.634921) | 0.083090 / 0.424275 (-0.341185) | 0.005566 / 0.007607 (-0.002041) | 0.340117 / 0.226044 (0.114072) | 3.342417 / 2.268929 (1.073488) | 1.807842 / 55.444624 (-53.636782) | 1.511647 / 6.876477 (-5.364830) | 1.653893 / 2.142072 (-0.488179) | 0.803983 / 4.805227 (-4.001244) | 0.136205 / 6.500664 (-6.364459) | 0.042815 / 0.075469 (-0.032654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962346 / 1.841788 (-0.879442) | 11.792239 / 8.074308 (3.717931) | 9.236256 / 10.191392 (-0.955136) | 0.143200 / 0.680424 (-0.537224) | 0.015050 / 0.534201 (-0.519151) | 0.304623 / 0.579283 (-0.274660) | 0.266417 / 0.434364 (-0.167947) | 0.341213 / 0.540337 (-0.199124) | 0.454258 / 1.386936 (-0.932678) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005917 / 0.011353 (-0.005436) | 0.004005 / 0.011008 (-0.007003) | 0.049781 / 0.038508 (0.011273) | 0.033310 / 0.023109 (0.010200) | 0.271881 / 0.275898 (-0.004017) | 0.296855 / 0.323480 (-0.026625) | 0.004479 / 0.007986 (-0.003507) | 0.002818 / 0.004328 (-0.001510) | 0.048213 / 0.004250 (0.043962) | 0.043480 / 0.037052 (0.006428) | 0.285963 / 0.258489 (0.027473) | 0.317304 / 0.293841 (0.023463) | 0.031619 / 0.128546 (-0.096928) | 0.012312 / 0.075646 (-0.063335) | 0.059904 / 0.419271 (-0.359368) | 0.033152 / 0.043533 (-0.010381) | 0.274198 / 0.255139 (0.019059) | 0.290469 / 0.283200 (0.007269) | 0.019424 / 0.141683 (-0.122258) | 1.133669 / 1.452155 (-0.318485) | 1.194427 / 1.492716 (-0.298290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.101561 / 0.018006 (0.083555) | 0.312617 / 0.000490 (0.312127) | 0.000216 / 0.000200 (0.000016) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023705 / 0.037411 (-0.013706) | 0.076781 / 0.014526 (0.062255) | 0.089922 / 0.176557 (-0.086634) | 0.129182 / 0.737135 (-0.607953) | 0.092022 / 0.296338 (-0.204317) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300977 / 0.215209 (0.085768) | 2.909088 / 2.077655 (0.831433) | 1.592821 / 1.504120 (0.088701) | 1.466627 / 1.541195 (-0.074568) | 1.497558 / 1.468490 (0.029068) | 0.720986 / 4.584777 (-3.863791) | 0.958039 / 3.745712 (-2.787673) | 3.023413 / 5.269862 (-2.246448) | 1.933245 / 4.565676 (-2.632432) | 0.080500 / 0.424275 (-0.343775) | 0.005243 / 0.007607 (-0.002364) | 0.361259 / 0.226044 (0.135215) | 3.447317 / 2.268929 (1.178389) | 1.938234 / 55.444624 (-53.506390) | 1.671563 / 6.876477 (-5.204913) | 1.674647 / 2.142072 (-0.467425) | 0.790606 / 4.805227 (-4.014621) | 0.133312 / 6.500664 (-6.367352) | 0.041241 / 0.075469 (-0.034228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996167 / 1.841788 (-0.845621) | 12.460877 / 8.074308 (4.386569) | 10.608415 / 10.191392 (0.417023) | 0.134076 / 0.680424 (-0.546348) | 0.016166 / 0.534201 (-0.518035) | 0.301218 / 0.579283 (-0.278065) | 0.128979 / 0.434364 (-0.305385) | 0.336453 / 0.540337 (-0.203884) | 0.435561 / 1.386936 (-0.951375) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70e7355b7125fb792107ef5128ee3ad15cbec26c \"CML watermark\")\n" ]
Remove metrics
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6983/reactions" }
PR_kwDODunzps5y7tK7
{ "diff_url": "https://github.com/huggingface/datasets/pull/6983.diff", "html_url": "https://github.com/huggingface/datasets/pull/6983", "merged_at": "2024-06-28T06:51:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/6983.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6983" }
2024-06-19T09:08:55Z
https://api.github.com/repos/huggingface/datasets/issues/6983/comments
Remove all metrics, as part of the 3.0 release. Note that they have been deprecated since version 2.5.0. A migration example for downstream users is sketched below.
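For downstream users, the usual migration is to the standalone `evaluate` library. A minimal sketch, assuming the metric you used (here "accuracy") exists under the same name on the evaluate hub:

```python
# Before (removed in 3.0): from datasets import load_metric
import evaluate

accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 0], references=[0, 1, 1])
print(result)  # {'accuracy': 0.666...}
```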
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6983/timeline
closed
false
6,983
null
2024-06-28T06:51:30Z
null
true
2,361,661,469
https://api.github.com/repos/huggingface/datasets/issues/6982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6982/events
[]
null
2024-07-08T06:20:16Z
[]
https://github.com/huggingface/datasets/issues/6982
NONE
completed
null
null
[ "it seems the bug will happened in all windows system, I tried it in windows8.1, 10, 11 and all of them failed. But it won't happened in the Linux(Ubuntu and Centos7) and Mac (both my virtual and physical machine). I still don't know what the problem is. May be related to the path? I cannot run the split file in my windows server which created in Linux (even I replace the path in the arrow document)....work for it for a week but still cannot fix it .....upset", "Have you properly logged in? Are you using the a valid token?\r\n\r\nNote that this dataset is gated and you must follow the right procedure to be able to access it. You can find more info in the docs: https://huggingface.co/docs/hub/datasets-gated#access-gated-datasets-as-a-user", "> Have you properly logged in? Are you using the a valid token?\r\n> \r\n> Note that this dataset is gated and you must follow the right procedure to be able to access it. You can find more info in the docs: https://huggingface.co/docs/hub/datasets-gated#access-gated-datasets-as-a-user\r\n\r\nI finally found it what happened. It is not about the logging. When I copy the dataset from its original path (C:/Users/cybes/.cache/huggingface/datasets/downloads/extracted/XXX/cv-corpus-7.0-2021-07-21) to the desktop and load each tsv in it one by one , when I load the test spilt, the following warning occurs:\r\n\"ArrowInvalid: Failed to parse string: 'Benchmark' as a scalar of type double\"\r\n\r\nThen I manually deleted them in the \"segment\", the error won't happen anymore, even I replace the original path with these revised tsv and use the previous loading method (common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\", trust_remote_code=True)). It can work properly." ]
cannot split dataset when using load_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6982/reactions" }
I_kwDODunzps6MxBgd
null
2024-06-19T08:07:16Z
https://api.github.com/repos/huggingface/datasets/issues/6982/comments
### Describe the bug

When I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it successfully downloads and extracts the dataset, but it cannot generate the arrow document. This bug happens on my server and on my laptop, as in #6906, but it does not happen in Google Colab. I have worked on it for days; even if I load the dataset from a local path, it can generate the train split and validation split, but the bug happens again on the test split.

### Steps to reproduce the bug

from datasets import load_dataset, load_metric, Audio

common_voice_train = load_dataset("mozilla-foundation/common_voice_7_0", "ja", split="train", token=selftoken, trust_remote_code=True)

### Expected behavior

```
{ "name": "ValueError", "message": "Instruction \"train\" corresponds to no data!", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[2], line 3 1 from datasets import load_dataset, load_metric, Audio ----> 3 common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\",token='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')#,trust_remote_code=True)#,streaming=True) 4 common_voice_test = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"test\",token='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')#,trust_remote_code=True)#,streaming=True) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\load.py:2626, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2622 # Build dataset for splits 2623 keep_in_memory = ( 2624 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2625 ) -> 2626 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2627 # Rename and cast features to match task schema 2628 if task is not None: 2629 # To avoid issuing the same warning twice File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1266, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1263 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1265 # Create a dataset for each of the given splits -> 1266 datasets = map_nested( 1267 partial( 1268 self._build_single_dataset, 1269 run_post_process=run_post_process, 1270 verification_mode=verification_mode, 1271 in_memory=in_memory, 1272 ), 1273 split, 1274 map_tuple=True, 1275 disable_tqdm=True, 1276 ) 1277 if isinstance(datasets, dict): 1278 datasets = DatasetDict(datasets) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\utils\\py_utils.py:484, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 482 if batched: 483 data_struct = [data_struct] --> 484 mapped = function(data_struct) 485 if batched: 486 mapped = mapped[0] File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1296, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1293 split = Split(split) 1295 # Build base dataset -> 1296 ds = self._as_dataset( 1297 split=split, 1298 in_memory=in_memory, 1299 ) 1300 if run_post_process: 1301 for resource_file_name in self._post_processing_resources(split).values(): File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1370, in DatasetBuilder._as_dataset(self, split, in_memory) 1368 if self._check_legacy_cache(): 1369 dataset_name = self.name -> 1370 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1371 name=dataset_name, 1372 instructions=split, 1373 split_infos=self.info.splits.values(), 1374 in_memory=in_memory, 1375 ) 1376 fingerprint = self._get_dataset_fingerprint(split) 1377 return Dataset(fingerprint=fingerprint, **dataset_kwargs) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\arrow_reader.py:256, in BaseReader.read(self, name, instructions, split_infos, in_memory) 254 msg = f'Instruction \"{instructions}\" corresponds to no data!' 255 #msg = f'Instruction \"{self._path}\",\"{name}\",\"{instructions}\",\"{split_infos}\" corresponds to no data!' --> 256 raise ValueError(msg) 257 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ValueError: Instruction \"train\" corresponds to no data!" }
```

### Environment info

Environment: python 3.9, windows 11 pro, VSCode + jupyter
{ "avatar_url": "https://avatars.githubusercontent.com/u/17721894?v=4", "events_url": "https://api.github.com/users/cybest0608/events{/privacy}", "followers_url": "https://api.github.com/users/cybest0608/followers", "following_url": "https://api.github.com/users/cybest0608/following{/other_user}", "gists_url": "https://api.github.com/users/cybest0608/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cybest0608", "id": 17721894, "login": "cybest0608", "node_id": "MDQ6VXNlcjE3NzIxODk0", "organizations_url": "https://api.github.com/users/cybest0608/orgs", "received_events_url": "https://api.github.com/users/cybest0608/received_events", "repos_url": "https://api.github.com/users/cybest0608/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cybest0608/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cybest0608/subscriptions", "type": "User", "url": "https://api.github.com/users/cybest0608" }
https://api.github.com/repos/huggingface/datasets/issues/6982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6982/timeline
closed
false
6,982
null
2024-07-08T06:20:16Z
null
false
2,361,520,022
https://api.github.com/repos/huggingface/datasets/issues/6981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6981/events
[]
null
2024-06-19T14:32:59Z
[]
https://github.com/huggingface/datasets/pull/6981
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6981). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005578 / 0.011353 (-0.005775) | 0.003946 / 0.011008 (-0.007062) | 0.063317 / 0.038508 (0.024808) | 0.031878 / 0.023109 (0.008769) | 0.312571 / 0.275898 (0.036673) | 0.281415 / 0.323480 (-0.042065) | 0.004139 / 0.007986 (-0.003846) | 0.002730 / 0.004328 (-0.001598) | 0.049539 / 0.004250 (0.045289) | 0.045056 / 0.037052 (0.008003) | 0.263820 / 0.258489 (0.005330) | 0.297817 / 0.293841 (0.003976) | 0.029490 / 0.128546 (-0.099056) | 0.012467 / 0.075646 (-0.063179) | 0.204607 / 0.419271 (-0.214664) | 0.036305 / 0.043533 (-0.007228) | 0.244102 / 0.255139 (-0.011037) | 0.267855 / 0.283200 (-0.015345) | 0.019794 / 0.141683 (-0.121889) | 1.130784 / 1.452155 (-0.321371) | 1.172507 / 1.492716 (-0.320209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092430 / 0.018006 (0.074424) | 0.296460 / 0.000490 (0.295970) | 0.000210 / 0.000200 (0.000010) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019467 / 0.037411 (-0.017944) | 0.062850 / 0.014526 (0.048324) | 0.074067 / 0.176557 (-0.102490) | 0.123280 / 0.737135 (-0.613856) | 0.077036 / 0.296338 (-0.219302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282687 / 0.215209 (0.067478) | 2.786715 / 2.077655 (0.709060) | 1.492028 / 1.504120 (-0.012092) | 1.373603 / 1.541195 (-0.167592) | 1.405004 / 1.468490 (-0.063486) | 0.714408 / 4.584777 (-3.870369) | 2.376785 / 3.745712 (-1.368927) | 2.916150 / 5.269862 (-2.353712) | 1.921184 / 4.565676 (-2.644493) | 0.078354 / 0.424275 (-0.345921) | 0.005236 / 0.007607 (-0.002371) | 0.334647 / 0.226044 (0.108603) | 3.262069 / 2.268929 (0.993140) | 1.858300 / 55.444624 (-53.586324) | 1.572968 / 6.876477 (-5.303509) | 1.659145 / 2.142072 (-0.482927) | 0.779546 / 4.805227 (-4.025681) | 0.132623 / 6.500664 (-6.368041) | 0.042423 / 0.075469 (-0.033046) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985516 / 1.841788 (-0.856271) | 12.001321 / 8.074308 (3.927013) | 9.927011 / 10.191392 (-0.264381) | 0.142645 / 0.680424 (-0.537779) | 0.013808 / 0.534201 (-0.520393) | 0.303422 / 0.579283 (-0.275861) | 0.262666 / 0.434364 (-0.171698) | 0.339369 / 0.540337 (-0.200969) | 0.431028 / 1.386936 (-0.955908) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005848 / 0.011353 (-0.005505) | 0.003971 / 0.011008 (-0.007037) | 0.050746 / 0.038508 (0.012238) | 0.031554 / 0.023109 (0.008445) | 0.277678 / 0.275898 (0.001780) | 0.300776 / 0.323480 (-0.022704) | 0.004428 / 0.007986 (-0.003558) | 0.002773 / 0.004328 (-0.001555) | 0.049882 / 0.004250 (0.045632) | 0.039833 / 0.037052 (0.002780) | 0.289143 / 0.258489 (0.030654) | 0.321425 / 0.293841 (0.027584) | 0.031701 / 0.128546 (-0.096845) | 0.012687 / 0.075646 (-0.062960) | 0.060650 / 0.419271 (-0.358621) | 0.033318 / 0.043533 (-0.010215) | 0.277019 / 0.255139 (0.021880) | 0.292345 / 0.283200 (0.009145) | 0.018520 / 0.141683 (-0.123163) | 1.143933 / 1.452155 (-0.308222) | 1.183913 / 1.492716 (-0.308803) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094467 / 0.018006 (0.076461) | 0.298822 / 0.000490 (0.298332) | 0.000201 / 0.000200 (0.000001) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022811 / 0.037411 (-0.014601) | 0.078084 / 0.014526 (0.063558) | 0.089079 / 0.176557 (-0.087477) | 0.130229 / 0.737135 (-0.606906) | 0.090851 / 0.296338 (-0.205487) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294981 / 0.215209 (0.079772) | 2.908294 / 2.077655 (0.830639) | 1.591281 / 1.504120 (0.087161) | 1.446032 / 1.541195 (-0.095162) | 1.469441 / 1.468490 (0.000951) | 0.726477 / 4.584777 (-3.858300) | 0.983086 / 3.745712 (-2.762626) | 2.892715 / 5.269862 (-2.377147) | 1.974092 / 4.565676 (-2.591584) | 0.079500 / 0.424275 (-0.344775) | 0.005497 / 0.007607 (-0.002110) | 0.342220 / 0.226044 (0.116176) | 3.414508 / 2.268929 (1.145579) | 1.941550 / 55.444624 (-53.503074) | 1.645268 / 6.876477 (-5.231209) | 1.805909 / 2.142072 (-0.336163) | 0.814483 / 4.805227 (-3.990744) | 0.135867 / 6.500664 (-6.364797) | 0.041718 / 0.075469 (-0.033751) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999751 / 1.841788 (-0.842036) | 12.488263 / 8.074308 (4.413954) | 10.867040 / 10.191392 (0.675648) | 0.143999 / 0.680424 (-0.536425) | 0.015496 / 0.534201 (-0.518705) | 0.302170 / 0.579283 (-0.277113) | 0.123753 / 0.434364 (-0.310611) | 0.340424 / 0.540337 (-0.199913) | 0.458339 / 1.386936 (-0.928597) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a6ccf944e42c1a84de81bf326accab9999b86c90 \"CML watermark\")\n" ]
Update docs: trust_remote_code now defaults to False
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6981/reactions" }
PR_kwDODunzps5y6tnN
{ "diff_url": "https://github.com/huggingface/datasets/pull/6981.diff", "html_url": "https://github.com/huggingface/datasets/pull/6981", "merged_at": "2024-06-19T14:26:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6981.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6981" }
2024-06-19T07:12:21Z
https://api.github.com/repos/huggingface/datasets/issues/6981/comments
Update the docs now that `trust_remote_code` defaults to `False`. The docs needed to be updated due to this PR: - #6954
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6981/timeline
closed
false
6,981
null
2024-06-19T14:26:37Z
null
true
2,360,909,930
https://api.github.com/repos/huggingface/datasets/issues/6980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6980/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-12T12:04:54Z
[]
https://github.com/huggingface/datasets/issues/6980
CONTRIBUTOR
completed
null
null
[]
Support NumPy 2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6980/reactions" }
I_kwDODunzps6MuKBq
null
2024-06-18T23:30:22Z
https://api.github.com/repos/huggingface/datasets/issues/6980/comments
### Feature request Support NumPy 2.0. ### Motivation NumPy 2 introduces support for the Array API standard, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API. Besides that, NumPy 2 provides a cleaner interface than NumPy 1. ### Tasks NumPy 2.0 has been available for testing, so that libraries can ensure compatibility, [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755). What needs to be done for HuggingFace to support NumPy 2? - [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976 - [ ] Remove [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991
{ "avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4", "events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}", "followers_url": "https://api.github.com/users/NeilGirdhar/followers", "following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}", "gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NeilGirdhar", "id": 730137, "login": "NeilGirdhar", "node_id": "MDQ6VXNlcjczMDEzNw==", "organizations_url": "https://api.github.com/users/NeilGirdhar/orgs", "received_events_url": "https://api.github.com/users/NeilGirdhar/received_events", "repos_url": "https://api.github.com/users/NeilGirdhar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions", "type": "User", "url": "https://api.github.com/users/NeilGirdhar" }
https://api.github.com/repos/huggingface/datasets/issues/6980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6980/timeline
closed
false
6,980
null
2024-07-12T12:04:53Z
null
false
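For context on the Array API goal in the issue above, here is a minimal sketch of what NumPy 2.0 enables; the toy array and the use of `__array_namespace__` are illustrative assumptions, not anything taken from the issue itself.

```python
import numpy as np

x = np.asarray([1.0, 2.0, 3.0])

# NumPy 2.0's main namespace follows the Array API standard, so library
# code can be written against the namespace of its inputs instead of
# hard-coding NumPy.
xp = x.__array_namespace__()  # the `numpy` module under NumPy 2.0
print(xp.mean(x))             # 2.0
```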
2,360,175,363
https://api.github.com/repos/huggingface/datasets/issues/6979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6979/events
[]
null
2024-06-21T17:09:32Z
[]
https://github.com/huggingface/datasets/issues/6979
NONE
completed
null
null
[ "Hello,\r\n\r\nHave you tried loading the dataset in streaming mode? [Documentation](https://huggingface.co/docs/datasets/v2.20.0/stream)\r\n\r\nThis way you wouldn't have to load it all. Also, let's be nice to Parquet, it's a really nice technology and we don't need to be mean :)", "I have downloaded part of it, just want to know how to load part of it, stream mode is not work for me since my network (in china) not stable, I don't want do it all again and again.\r\n\r\nJust curious, doesn't there a way to load part of it?", "Could you convert the IterableDataset to a Dataset after taking the first 100 rows with `.take`? This way, you would have a local copy of the first 100 rows on your system and thus won't need to download. Would that work?\r\n\r\nHere is a [SO question](https://stackoverflow.com/questions/76227219/can-i-convert-an-iterabledataset-to-dataset) detailing how to do the conversion.", "I mean, the parquet is like:\r\n\r\n00000-0143554\r\n00001-0143554\r\n00002-0143554\r\n...\r\n00100-0143554\r\n...\r\n09100-0143554\r\n\r\nI just downloaded the first 9900 part of it. \r\n\r\nI can not load with load_dataset, it throw an error says my file is not same as parquet all amount.\r\n\r\nHow could I load the only I have? \r\n\r\n( I really don't want downlaod them all, cause, I don't need all, and pulus, its huge.... )\r\n\r\nAs I said, I have donwloaded about 9999... It's not about stream... I just wnat to konw how to load offline... part....", "Hi, @lucasjinreal.\r\n\r\nI am not sure of understanding your issue. What is the error message and stack trace you get? What version of `datasets` are you using? Could you provide a reproducible example?\r\n\r\nWithout knowing all those details, I would naively say that you can load whatever number of Parquet files by using the \"parquet\" loader: https://huggingface.co/docs/datasets/loading#parquet\r\n```python\r\nds = load_dataset(\"parquet\", data_files=\"data/train-001*-of-00314.parquet\", split=\"train\")\r\n```", "@albertvillanova Not sure you have tested with this or not, but I have tried,\r\n\r\nthe only error I got is it still laodding all parquet with a progress bar maxium to the whole number 014354, and it loads my 0 - 000999 part, then throws an error.\r\n\r\nSays Numinfo is not same.\r\n\r\nI am so confused,", "Yes, my code snippet works.\n\nCould you copy-paste your code and the output? 
Otherwise we are not able to know what the issue is.", "@albertvillanova Hi, thanks for the tracing of the issue.\r\n\r\nThis is the output:\r\n\r\n```\r\nython get_llava_recap_cc3m.py\r\nGenerating train split: 3%|β–ˆβ–ˆβ–ˆβ–‹ | 101910/3199866 [00:16<08:30, 6065.67 examples/s]\r\nTraceback (most recent call last):\r\n File \"get_llava_recap_cc3m.py\", line 31, in <module>\r\n dataset = load_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\")\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1118, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/info_utils.py\", line 101, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=156885281898.75, num_examples=3199866, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=4994080770, num_examples=101910, shard_lengths=[10191, 10291, 10291, 10291, 10291, 10191, 10191, 10291, 10291, 9591], dataset_name='llava-recap-cc3m')}]\r\n```\r\n\r\nthis is my code:\r\n\r\n```\r\ndataset = load_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\")\r\n```\r\n\r\nMy situation and requirements:\r\n\r\n00314 is all, but I downlaode about 150, half of it, as you can see, i used `0000*-of-00314.` which should be at most 99 file being loaded.\r\n\r\nBut it just fail.\r\n\r\nCan u understand my issue now?\r\n\r\nIf so, then **do not** suggest me with stream, Just want to know, is there a way to load part if it...... **and please don't say you can not replicate my issue when you have downloaded them all**, my english is not good, but I think all situations and all prerequists I have addressed already.\r\n\r\n", "I see you did not use the \"parquet\" loader as I suggested in my code snippet above: https://github.com/huggingface/datasets/issues/6979#issuecomment-2182031415\r\nPlease try passing \"parquet\" instead of \"llava-recap-cc3m/\" to `load_dataset`, and the complete path to data files in `data_files`:\r\n```python\r\nload_dataset(\"parquet\", data_files=\"llava-recap-cc3m/data/train-001*-of-00314.parquet\")\r\n```", "Let me explain that you get the error because of this content within the `dataset_info` YAML tag in the `llava-recap-cc3m/README.md`:\r\n```\r\n - name: train\r\n num_bytes: 156885281898.75\r\n num_examples: 3199866\r\n```\r\n\r\nBy default, if there is that content in the README file, `load_dataset` performs a basic check to verify it the generated number of examples matches the expected one and raises a `NonMatchingSplitsSizesError` if that is not the case. 
\r\n\r\nYou can avoid this basic check by passing `verification_mode=\"no_checks\"`:\r\n```python\r\nload_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\", verification_mode=\"no_checks\")\r\n```", "And please, next time you have an issue, please fill the Bug template issue with all the necessary information: https://github.com/huggingface/datasets/issues/new?assignees=&labels=&projects=&template=bug-report.yml\r\n\r\nOtherwise it is very difficult for us to understand the underlying problem and to propose a pertinent solution.", "thank u albert!\r\n\r\nIt solved my issue!" ]
How can I load only part of the Parquet files?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6979/reactions" }
I_kwDODunzps6MrWsD
null
2024-06-18T15:44:16Z
https://api.github.com/repos/huggingface/datasets/issues/6979/comments
I have a HUGE dataset of about 14 TB, and I am unable to download all the Parquet files. I only took about 100 of them. dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet") How can I use only parts 000 - 100 out of all 00314? I searched the whole net and didn't find a solution. **This is stupid if it isn't supported, and I swear I won't use stupid Parquet any more.**
{ "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucasjinreal", "id": 21303438, "login": "lucasjinreal", "node_id": "MDQ6VXNlcjIxMzAzNDM4", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "type": "User", "url": "https://api.github.com/users/lucasjinreal" }
https://api.github.com/repos/huggingface/datasets/issues/6979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6979/timeline
closed
false
6,979
null
2024-06-21T13:32:50Z
null
false
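Putting the two suggestions from the thread above side by side, here is a minimal sketch for loading only the locally downloaded Parquet shards; the repository directory and shard pattern are the placeholders used in the thread, not a general recipe.

```python
from datasets import load_dataset

# Option 1: the generic "parquet" loader reads exactly the files you pass
# and never consults the repo's split-size metadata.
ds = load_dataset(
    "parquet",
    data_files="llava-recap-cc3m/data/train-0000*-of-00314.parquet",
    split="train",
)

# Option 2: keep the dataset's own loader, but skip the split-size check
# that raises NonMatchingSplitsSizesError on a partial download.
ds = load_dataset(
    "llava-recap-cc3m/",
    data_files="data/train-0000*-of-00314.parquet",
    verification_mode="no_checks",
)
```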
2,359,511,469
https://api.github.com/repos/huggingface/datasets/issues/6978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6978/events
[]
null
2024-06-19T06:23:24Z
[]
https://github.com/huggingface/datasets/pull/6978
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6978). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005144 / 0.011353 (-0.006209) | 0.003500 / 0.011008 (-0.007509) | 0.063670 / 0.038508 (0.025162) | 0.031793 / 0.023109 (0.008683) | 0.239611 / 0.275898 (-0.036287) | 0.276681 / 0.323480 (-0.046799) | 0.004148 / 0.007986 (-0.003838) | 0.002713 / 0.004328 (-0.001615) | 0.048832 / 0.004250 (0.044582) | 0.043066 / 0.037052 (0.006014) | 0.256835 / 0.258489 (-0.001655) | 0.292224 / 0.293841 (-0.001617) | 0.027530 / 0.128546 (-0.101017) | 0.010509 / 0.075646 (-0.065137) | 0.203370 / 0.419271 (-0.215901) | 0.035643 / 0.043533 (-0.007890) | 0.252161 / 0.255139 (-0.002978) | 0.271883 / 0.283200 (-0.011316) | 0.018658 / 0.141683 (-0.123024) | 1.081676 / 1.452155 (-0.370479) | 1.142146 / 1.492716 (-0.350571) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093484 / 0.018006 (0.075477) | 0.298607 / 0.000490 (0.298117) | 0.000220 / 0.000200 (0.000020) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019021 / 0.037411 (-0.018390) | 0.062471 / 0.014526 (0.047946) | 0.075393 / 0.176557 (-0.101163) | 0.121040 / 0.737135 (-0.616095) | 0.077613 / 0.296338 (-0.218726) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294857 / 0.215209 (0.079648) | 2.931143 / 2.077655 (0.853489) | 1.510866 / 1.504120 (0.006746) | 1.379574 / 1.541195 (-0.161621) | 1.352358 / 1.468490 (-0.116133) | 0.561670 / 4.584777 (-4.023107) | 2.378434 / 3.745712 (-1.367278) | 2.713203 / 5.269862 (-2.556658) | 1.706416 / 4.565676 (-2.859260) | 0.062355 / 0.424275 (-0.361920) | 0.004971 / 0.007607 (-0.002636) | 0.336498 / 0.226044 (0.110453) | 3.316464 / 2.268929 (1.047535) | 1.833035 / 55.444624 (-53.611589) | 1.532808 / 6.876477 (-5.343668) | 1.537323 / 2.142072 (-0.604749) | 0.639430 / 4.805227 (-4.165798) | 0.115808 / 6.500664 (-6.384856) | 0.043545 / 0.075469 (-0.031924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974428 / 1.841788 (-0.867360) | 11.368914 / 8.074308 (3.294606) | 9.754488 / 10.191392 (-0.436904) | 0.146277 / 0.680424 (-0.534146) | 0.013917 / 0.534201 (-0.520284) | 0.286809 / 0.579283 (-0.292474) | 0.267144 / 0.434364 (-0.167219) | 0.326161 / 0.540337 (-0.214177) | 0.418059 / 1.386936 (-0.968877) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005341 / 0.011353 (-0.006012) | 0.003460 / 0.011008 (-0.007548) | 0.050135 / 0.038508 (0.011627) | 0.032014 / 0.023109 (0.008905) | 0.259835 / 0.275898 (-0.016063) | 0.286275 / 0.323480 (-0.037205) | 0.004350 / 0.007986 (-0.003636) | 0.002800 / 0.004328 (-0.001529) | 0.049358 / 0.004250 (0.045107) | 0.040182 / 0.037052 (0.003130) | 0.278352 / 0.258489 (0.019863) | 0.307869 / 0.293841 (0.014028) | 0.029151 / 0.128546 (-0.099395) | 0.010091 / 0.075646 (-0.065555) | 0.058814 / 0.419271 (-0.360458) | 0.033150 / 0.043533 (-0.010383) | 0.263594 / 0.255139 (0.008455) | 0.284065 / 0.283200 (0.000866) | 0.017968 / 0.141683 (-0.123714) | 1.145605 / 1.452155 (-0.306550) | 1.196884 / 1.492716 (-0.295832) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094045 / 0.018006 (0.076039) | 0.299031 / 0.000490 (0.298541) | 0.000210 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022510 / 0.037411 (-0.014901) | 0.077478 / 0.014526 (0.062953) | 0.087746 / 0.176557 (-0.088811) | 0.129311 / 0.737135 (-0.607825) | 0.089921 / 0.296338 (-0.206418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290279 / 0.215209 (0.075070) | 2.880725 / 2.077655 (0.803070) | 1.541262 / 1.504120 (0.037142) | 1.424475 / 1.541195 (-0.116719) | 1.436397 / 1.468490 (-0.032093) | 0.578237 / 4.584777 (-4.006540) | 0.965249 / 3.745712 (-2.780463) | 2.682534 / 5.269862 (-2.587327) | 1.732859 / 4.565676 (-2.832817) | 0.065523 / 0.424275 (-0.358752) | 0.005466 / 0.007607 (-0.002141) | 0.343985 / 0.226044 (0.117940) | 3.397463 / 2.268929 (1.128534) | 1.929370 / 55.444624 (-53.515255) | 1.605135 / 6.876477 (-5.271342) | 1.753926 / 2.142072 (-0.388146) | 0.659929 / 4.805227 (-4.145298) | 0.118093 / 6.500664 (-6.382571) | 0.041252 / 0.075469 (-0.034217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009177 / 1.841788 (-0.832610) | 11.959624 / 8.074308 (3.885316) | 10.484672 / 10.191392 (0.293280) | 0.142085 / 0.680424 (-0.538339) | 0.015955 / 0.534201 (-0.518245) | 0.283649 / 0.579283 (-0.295634) | 0.125681 / 0.434364 (-0.308683) | 0.320490 / 0.540337 (-0.219847) | 0.440353 / 1.386936 (-0.946583) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e47a746bcda4b97db2467542b76d3215b3569ff0 \"CML watermark\")\n", "Maybe a patch release will be needed with this fix." ]
Fix regression for pandas < 2.0.0 in JSON loader
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6978/reactions" }
PR_kwDODunzps5yz0h6
{ "diff_url": "https://github.com/huggingface/datasets/pull/6978.diff", "html_url": "https://github.com/huggingface/datasets/pull/6978", "merged_at": "2024-06-19T05:50:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/6978.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6978" }
2024-06-18T10:26:34Z
https://api.github.com/repos/huggingface/datasets/issues/6978/comments
A regression was introduced for pandas < 2.0.0 in PR: - #6914 As described in pandas docs, the `dtype_backend` parameter was first added in pandas 2.0.0: https://pandas.pydata.org/docs/reference/api/pandas.read_json.html This PR fixes the regression by passing (or not) the `dtype_backend` parameter depending on pandas version. Maybe, in a future 3.0 `datasets` release, we could just require pandas > 2.0. Reported by: - #6977
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6978/timeline
closed
false
6,978
null
2024-06-19T05:50:18Z
null
true
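A rough sketch of the version-gated fix the PR body above describes: pass `dtype_backend` to `pandas.read_json` only when the installed pandas is new enough to accept it. The helper name and the use of `packaging` are assumptions for illustration; the actual patch in the PR may differ.

```python
import pandas as pd
from packaging import version

def read_json_compat(path_or_buf, **kwargs):
    # `dtype_backend` was first added in pandas 2.0.0, so only pass it
    # when the installed pandas supports it.
    if version.parse(pd.__version__) >= version.parse("2.0.0"):
        kwargs.setdefault("dtype_backend", "pyarrow")
    return pd.read_json(path_or_buf, **kwargs)
```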
2,359,295,045
https://api.github.com/repos/huggingface/datasets/issues/6977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6977/events
[]
null
2024-06-18T10:06:10Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6977
NONE
completed
null
null
[ "Thanks for reporting, @xiaoyaolangzhi.\r\n\r\nIndeed, we are currently requiring `pandas` >= 2.0.0.\r\n\r\nYou will need to update pandas in your local environment:\r\n```\r\npip install -U pandas\r\n``` ", "Thank you very much." ]
Error loading JSON file with v2.20.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6977/reactions" }
I_kwDODunzps6Mn_xF
null
2024-06-18T08:41:01Z
https://api.github.com/repos/huggingface/datasets/issues/6977/comments
### Describe the bug ``` load_dataset(path="json", data_files="./test.json") ``` ``` Generating train split: 0 examples [00:00, ? examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables pa_table = paj.read_json( File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1997, in _prepare_split_single for _, table in generator: File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 155, in _generate_tables df = pd.read_json(f, dtype_backend="pyarrow") File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) TypeError: read_json() got an unexpected keyword argument 'dtype_backend' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/app/t1.py", line 11, in <module> load_dataset(path=data_path, data_files="./t2.json") File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2616, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1029, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1124, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1884, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 2040, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` ``` import pandas as pd with open("./test.json", "r") as f: df = pd.read_json(f, dtype_backend="pyarrow") ``` ``` Traceback (most recent call last): File "/app/t3.py", line 3, in <module> df = pd.read_json(f, dtype_backend="pyarrow") File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) TypeError: read_json() got an unexpected keyword argument 'dtype_backend' ``` ### Steps to reproduce the bug . ### Expected behavior . ### Environment info ``` datasets 2.20.0 pandas 1.5.3 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4", "events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}", "followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers", "following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiaoyaolangzhi", "id": 15037766, "login": "xiaoyaolangzhi", "node_id": "MDQ6VXNlcjE1MDM3NzY2", "organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs", "received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events", "repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions", "type": "User", "url": "https://api.github.com/users/xiaoyaolangzhi" }
https://api.github.com/repos/huggingface/datasets/issues/6977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6977/timeline
closed
false
6,977
null
2024-06-18T10:06:09Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,357,107,203
https://api.github.com/repos/huggingface/datasets/issues/6976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6976/events
[]
null
2024-06-19T14:30:32Z
[]
https://github.com/huggingface/datasets/pull/6976
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6976). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005361 / 0.011353 (-0.005992) | 0.003983 / 0.011008 (-0.007025) | 0.062865 / 0.038508 (0.024357) | 0.029880 / 0.023109 (0.006771) | 0.261465 / 0.275898 (-0.014433) | 0.269791 / 0.323480 (-0.053689) | 0.004198 / 0.007986 (-0.003788) | 0.002942 / 0.004328 (-0.001387) | 0.049002 / 0.004250 (0.044751) | 0.043232 / 0.037052 (0.006180) | 0.328774 / 0.258489 (0.070285) | 0.297308 / 0.293841 (0.003467) | 0.030552 / 0.128546 (-0.097994) | 0.012632 / 0.075646 (-0.063015) | 0.204156 / 0.419271 (-0.215116) | 0.036014 / 0.043533 (-0.007519) | 0.241224 / 0.255139 (-0.013915) | 0.268358 / 0.283200 (-0.014842) | 0.019227 / 0.141683 (-0.122456) | 1.114515 / 1.452155 (-0.337639) | 1.147029 / 1.492716 (-0.345688) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094925 / 0.018006 (0.076919) | 0.301548 / 0.000490 (0.301059) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018875 / 0.037411 (-0.018536) | 0.062824 / 0.014526 (0.048298) | 0.075657 / 0.176557 (-0.100900) | 0.121926 / 0.737135 (-0.615209) | 0.077102 / 0.296338 (-0.219236) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286018 / 0.215209 (0.070808) | 2.832222 / 2.077655 (0.754567) | 1.462629 / 1.504120 (-0.041491) | 1.354746 / 1.541195 (-0.186449) | 1.339504 / 1.468490 (-0.128986) | 0.718381 / 4.584777 (-3.866396) | 2.401456 / 3.745712 (-1.344256) | 3.013518 / 5.269862 (-2.256343) | 1.944892 / 4.565676 (-2.620784) | 0.078793 / 0.424275 (-0.345482) | 0.005219 / 0.007607 (-0.002388) | 0.349551 / 0.226044 (0.123507) | 3.417844 / 2.268929 (1.148916) | 1.830669 / 55.444624 (-53.613956) | 1.502134 / 6.876477 (-5.374343) | 1.529242 / 2.142072 (-0.612830) | 0.793732 / 4.805227 (-4.011495) | 0.133571 / 6.500664 (-6.367093) | 0.042588 / 0.075469 (-0.032881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988167 / 1.841788 (-0.853620) | 11.926728 / 8.074308 (3.852420) | 9.806971 / 10.191392 (-0.384421) | 0.173951 / 0.680424 (-0.506473) | 0.015308 / 0.534201 (-0.518893) | 0.310768 / 0.579283 (-0.268515) | 0.268261 / 0.434364 (-0.166103) | 0.342962 / 0.540337 (-0.197375) | 0.431255 / 1.386936 (-0.955681) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005680 / 0.011353 (-0.005673) | 0.004231 / 0.011008 (-0.006778) | 0.051009 / 0.038508 (0.012501) | 0.031431 / 0.023109 (0.008322) | 0.268582 / 0.275898 (-0.007316) | 0.287942 / 0.323480 (-0.035538) | 0.004442 / 0.007986 (-0.003543) | 0.002818 / 0.004328 (-0.001511) | 0.050241 / 0.004250 (0.045991) | 0.039933 / 0.037052 (0.002881) | 0.285814 / 0.258489 (0.027325) | 0.316082 / 0.293841 (0.022241) | 0.032416 / 0.128546 (-0.096130) | 0.012398 / 0.075646 (-0.063248) | 0.060779 / 0.419271 (-0.358493) | 0.033706 / 0.043533 (-0.009827) | 0.273915 / 0.255139 (0.018776) | 0.289752 / 0.283200 (0.006553) | 0.017859 / 0.141683 (-0.123824) | 1.150224 / 1.452155 (-0.301930) | 1.197467 / 1.492716 (-0.295250) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093810 / 0.018006 (0.075803) | 0.302529 / 0.000490 (0.302039) | 0.000221 / 0.000200 (0.000021) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022903 / 0.037411 (-0.014508) | 0.077445 / 0.014526 (0.062919) | 0.089335 / 0.176557 (-0.087222) | 0.130848 / 0.737135 (-0.606287) | 0.091106 / 0.296338 (-0.205232) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294194 / 0.215209 (0.078985) | 2.886983 / 2.077655 (0.809328) | 1.557768 / 1.504120 (0.053648) | 1.424467 / 1.541195 (-0.116727) | 1.440625 / 1.468490 (-0.027865) | 0.724793 / 4.584777 (-3.859984) | 0.985216 / 3.745712 (-2.760496) | 2.856826 / 5.269862 (-2.413036) | 1.911638 / 4.565676 (-2.654039) | 0.080350 / 0.424275 (-0.343925) | 0.005616 / 0.007607 (-0.001991) | 0.348713 / 0.226044 (0.122668) | 3.414764 / 2.268929 (1.145835) | 1.925056 / 55.444624 (-53.519568) | 1.635752 / 6.876477 (-5.240725) | 1.761117 / 2.142072 (-0.380955) | 0.808309 / 4.805227 (-3.996918) | 0.136893 / 6.500664 (-6.363771) | 0.042116 / 0.075469 (-0.033354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004740 / 1.841788 (-0.837048) | 12.495859 / 8.074308 (4.421550) | 10.681233 / 10.191392 (0.489841) | 0.133320 / 0.680424 (-0.547104) | 0.015943 / 0.534201 (-0.518258) | 0.304869 / 0.579283 (-0.274414) | 0.128616 / 0.434364 (-0.305748) | 0.345930 / 0.540337 (-0.194407) | 0.457434 / 1.386936 (-0.929502) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84d9dea52098c9403efb43d5b542dd6d45000bec \"CML watermark\")\n" ]
Ensure compatibility with numpy 2.0.0
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6976/reactions" }
PR_kwDODunzps5yrmNP
{ "diff_url": "https://github.com/huggingface/datasets/pull/6976.diff", "html_url": "https://github.com/huggingface/datasets/pull/6976", "merged_at": "2024-06-19T14:04:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6976.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6976" }
2024-06-17T11:29:22Z
https://api.github.com/repos/huggingface/datasets/issues/6976/comments
Following the migration guide, `copy=False` is no longer required and will result in an error: https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword. The following fix should resolve the issue. The error was found during testing on the MTEB repository, e.g. [here](https://github.com/embeddings-benchmark/mteb/pull/938)
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
https://api.github.com/repos/huggingface/datasets/issues/6976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6976/timeline
closed
false
6,976
null
2024-06-19T14:04:34Z
null
true
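To illustrate the migration-guide change this PR follows, here is a minimal before/after sketch; the toy input is an assumption.

```python
import numpy as np

x = [1, 2, 3]

# Before (NumPy 1.x): copy=False meant "avoid a copy if possible".
# Under NumPy 2.0 this instead raises if a copy turns out to be required:
# arr = np.array(x, copy=False)

# After: np.asarray copies only when necessary, on both NumPy 1.x and 2.0.
arr = np.asarray(x)
```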
2,357,003,959
https://api.github.com/repos/huggingface/datasets/issues/6975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6975/events
[]
null
2024-06-17T12:49:53Z
[]
https://github.com/huggingface/datasets/pull/6975
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6975). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005168 / 0.011353 (-0.006185) | 0.003720 / 0.011008 (-0.007288) | 0.063347 / 0.038508 (0.024839) | 0.031474 / 0.023109 (0.008364) | 0.243233 / 0.275898 (-0.032665) | 0.276695 / 0.323480 (-0.046785) | 0.004109 / 0.007986 (-0.003877) | 0.002689 / 0.004328 (-0.001639) | 0.049522 / 0.004250 (0.045271) | 0.043477 / 0.037052 (0.006425) | 0.258578 / 0.258489 (0.000088) | 0.288134 / 0.293841 (-0.005707) | 0.027836 / 0.128546 (-0.100710) | 0.010677 / 0.075646 (-0.064969) | 0.206412 / 0.419271 (-0.212860) | 0.036204 / 0.043533 (-0.007329) | 0.250588 / 0.255139 (-0.004551) | 0.272354 / 0.283200 (-0.010846) | 0.018359 / 0.141683 (-0.123324) | 1.118867 / 1.452155 (-0.333288) | 1.157318 / 1.492716 (-0.335399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092927 / 0.018006 (0.074921) | 0.298252 / 0.000490 (0.297762) | 0.000228 / 0.000200 (0.000028) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018824 / 0.037411 (-0.018588) | 0.069304 / 0.014526 (0.054778) | 0.075094 / 0.176557 (-0.101462) | 0.122546 / 0.737135 (-0.614590) | 0.076453 / 0.296338 (-0.219885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287131 / 0.215209 (0.071922) | 2.838945 / 2.077655 (0.761291) | 1.473578 / 1.504120 (-0.030542) | 1.351214 / 1.541195 (-0.189981) | 1.354924 / 1.468490 (-0.113566) | 0.577092 / 4.584777 (-4.007685) | 2.348072 / 3.745712 (-1.397640) | 2.762130 / 5.269862 (-2.507732) | 1.725195 / 4.565676 (-2.840482) | 0.063596 / 0.424275 (-0.360679) | 0.004921 / 0.007607 (-0.002686) | 0.335422 / 0.226044 (0.109377) | 3.340398 / 2.268929 (1.071469) | 1.789390 / 55.444624 (-53.655234) | 1.516247 / 6.876477 (-5.360229) | 1.529653 / 2.142072 (-0.612420) | 0.643547 / 4.805227 (-4.161680) | 0.116491 / 6.500664 (-6.384173) | 0.042404 / 0.075469 (-0.033065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959839 / 1.841788 (-0.881948) | 11.269778 / 8.074308 (3.195470) | 9.574898 / 10.191392 (-0.616494) | 0.128979 / 0.680424 (-0.551444) | 0.013901 / 0.534201 (-0.520300) | 0.280778 / 0.579283 (-0.298505) | 0.256511 / 0.434364 (-0.177853) | 0.319361 / 0.540337 (-0.220977) | 0.411803 / 1.386936 (-0.975133) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005453 / 0.011353 (-0.005899) | 0.003478 / 0.011008 (-0.007530) | 0.050055 / 0.038508 (0.011547) | 0.031415 / 0.023109 (0.008306) | 0.275057 / 0.275898 (-0.000841) | 0.296690 / 0.323480 (-0.026789) | 0.004253 / 0.007986 (-0.003732) | 0.002777 / 0.004328 (-0.001551) | 0.049553 / 0.004250 (0.045303) | 0.039843 / 0.037052 (0.002791) | 0.286938 / 0.258489 (0.028449) | 0.318579 / 0.293841 (0.024738) | 0.029773 / 0.128546 (-0.098774) | 0.010404 / 0.075646 (-0.065242) | 0.057915 / 0.419271 (-0.361356) | 0.033486 / 0.043533 (-0.010047) | 0.273293 / 0.255139 (0.018154) | 0.293155 / 0.283200 (0.009955) | 0.017843 / 0.141683 (-0.123839) | 1.131130 / 1.452155 (-0.321024) | 1.167412 / 1.492716 (-0.325304) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092553 / 0.018006 (0.074547) | 0.298888 / 0.000490 (0.298399) | 0.000201 / 0.000200 (0.000001) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022646 / 0.037411 (-0.014765) | 0.076921 / 0.014526 (0.062395) | 0.089238 / 0.176557 (-0.087318) | 0.128793 / 0.737135 (-0.608342) | 0.089190 / 0.296338 (-0.207148) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292552 / 0.215209 (0.077343) | 2.884277 / 2.077655 (0.806622) | 1.568798 / 1.504120 (0.064678) | 1.441819 / 1.541195 (-0.099375) | 1.435766 / 1.468490 (-0.032724) | 0.572435 / 4.584777 (-4.012342) | 0.957387 / 3.745712 (-2.788326) | 2.650843 / 5.269862 (-2.619019) | 1.727424 / 4.565676 (-2.838252) | 0.063470 / 0.424275 (-0.360805) | 0.005314 / 0.007607 (-0.002293) | 0.345881 / 0.226044 (0.119836) | 3.395463 / 2.268929 (1.126535) | 1.921340 / 55.444624 (-53.523285) | 1.621563 / 6.876477 (-5.254914) | 1.742561 / 2.142072 (-0.399512) | 0.639948 / 4.805227 (-4.165279) | 0.116091 / 6.500664 (-6.384573) | 0.041218 / 0.075469 (-0.034251) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991506 / 1.841788 (-0.850281) | 11.897462 / 8.074308 (3.823154) | 10.083008 / 10.191392 (-0.108384) | 0.140626 / 0.680424 (-0.539798) | 0.015454 / 0.534201 (-0.518747) | 0.283856 / 0.579283 (-0.295427) | 0.125935 / 0.434364 (-0.308429) | 0.323884 / 0.540337 (-0.216454) | 0.438348 / 1.386936 (-0.948588) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e59582adc7fcb53a86a8ca8eda7e04a4e7b25bd2 \"CML watermark\")\n" ]
Set temporary numpy upper version < 2.0.0 to fix CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6975/reactions" }
PR_kwDODunzps5yrPct
{ "diff_url": "https://github.com/huggingface/datasets/pull/6975.diff", "html_url": "https://github.com/huggingface/datasets/pull/6975", "merged_at": "2024-06-17T12:43:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/6975.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6975" }
2024-06-17T10:36:54Z
https://api.github.com/repos/huggingface/datasets/issues/6975/comments
Set temporary numpy upper version < 2.0.0 to fix CI. See: https://github.com/huggingface/datasets/actions/runs/9546031216/job/26308072017

```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
```
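For context, a minimal sketch of the kind of pin this PR describes; the variable name and the neighboring entry are illustrative assumptions, not copied from the actual `setup.py` diff — only the `<2.0.0` upper bound reflects the change:

```python
# Hypothetical excerpt of a dependency list; only the "<2.0.0" upper bound
# mirrors what this PR adds, everything else is illustrative.
REQUIRED_PKGS = [
    "numpy>=1.17,<2.0.0",  # temporary pin until compiled deps support NumPy 2
    "pyarrow>=15.0.0",
]
```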
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6975/timeline
closed
false
6,975
null
2024-06-17T12:43:56Z
null
true
2,355,517,362
https://api.github.com/repos/huggingface/datasets/issues/6973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6973/events
[]
null
2024-07-01T11:25:40Z
[]
https://github.com/huggingface/datasets/issues/6973
NONE
completed
null
null
[ "add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704", "Closing this issue because it was a reported and fixed in transformers." ]
IndexError during training with Squad dataset and T5-small model
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6973/reactions" }
I_kwDODunzps6MZley
null
2024-06-16T07:53:54Z
https://api.github.com/repos/huggingface/datasets/issues/6973/comments
### Describe the bug

I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.

### Steps to reproduce the bug

1. Install the required libraries: `!pip install transformers datasets`
2. Run the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding

# Load a small, publicly available dataset
from datasets import load_dataset
dataset = load_dataset("squad", split="train[:100]")  # Use a small subset for testing

# Load a pre-trained model and tokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Define a basic data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

# Create a trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=data_collator,
)

# Train the model
trainer.train()
```

### Expected behavior

Training should complete without errors; instead it fails with the traceback below.

```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
[<ipython-input-23-f13a4b23c001>](https://localhost:8080/#) in <cell line: 34>()
     32
     33 # Train the model
---> 34 trainer.train()

10 frames
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
    427     if isinstance(key, int):
    428         if (key < 0 and key + size < 0) or (key >= size):
--> 429             raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
    430         return
    431     elif isinstance(key, slice):

IndexError: Invalid key: 42 is out of bounds for size 0
```

### Environment info

- transformers version: 4.41.2
- datasets version: 1.18.4
- Python version: 3.10.12
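For reference, a hedged sketch of the workaround pointed to in the comments above: keep the raw dataset columns so the `Trainer` does not drop everything and end up indexing an empty table. Only `remove_unused_columns=False` comes from the linked comment; the other arguments simply mirror the reproduction.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    remove_unused_columns=False,  # keep columns the model forward() does not accept
)
```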
{ "avatar_url": "https://avatars.githubusercontent.com/u/151521233?v=4", "events_url": "https://api.github.com/users/ramtunguturi36/events{/privacy}", "followers_url": "https://api.github.com/users/ramtunguturi36/followers", "following_url": "https://api.github.com/users/ramtunguturi36/following{/other_user}", "gists_url": "https://api.github.com/users/ramtunguturi36/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ramtunguturi36", "id": 151521233, "login": "ramtunguturi36", "node_id": "U_kgDOCQgH0Q", "organizations_url": "https://api.github.com/users/ramtunguturi36/orgs", "received_events_url": "https://api.github.com/users/ramtunguturi36/received_events", "repos_url": "https://api.github.com/users/ramtunguturi36/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ramtunguturi36/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ramtunguturi36/subscriptions", "type": "User", "url": "https://api.github.com/users/ramtunguturi36" }
https://api.github.com/repos/huggingface/datasets/issues/6973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6973/timeline
closed
false
6,973
null
2024-07-01T11:25:40Z
null
false
2,353,531,912
https://api.github.com/repos/huggingface/datasets/issues/6972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6972/events
[]
null
2024-06-14T15:43:43Z
[]
https://github.com/huggingface/datasets/pull/6972
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6972). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005195 / 0.011353 (-0.006157) | 0.003734 / 0.011008 (-0.007275) | 0.063087 / 0.038508 (0.024579) | 0.031467 / 0.023109 (0.008358) | 0.245183 / 0.275898 (-0.030715) | 0.280071 / 0.323480 (-0.043409) | 0.003205 / 0.007986 (-0.004780) | 0.003311 / 0.004328 (-0.001018) | 0.049967 / 0.004250 (0.045717) | 0.044927 / 0.037052 (0.007875) | 0.262244 / 0.258489 (0.003755) | 0.284549 / 0.293841 (-0.009292) | 0.027595 / 0.128546 (-0.100952) | 0.010521 / 0.075646 (-0.065126) | 0.206928 / 0.419271 (-0.212343) | 0.036179 / 0.043533 (-0.007354) | 0.254256 / 0.255139 (-0.000883) | 0.272733 / 0.283200 (-0.010467) | 0.020456 / 0.141683 (-0.121226) | 1.118527 / 1.452155 (-0.333628) | 1.152741 / 1.492716 (-0.339975) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096642 / 0.018006 (0.078636) | 0.306981 / 0.000490 (0.306491) | 0.000220 / 0.000200 (0.000020) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019031 / 0.037411 (-0.018380) | 0.063960 / 0.014526 (0.049435) | 0.074428 / 0.176557 (-0.102129) | 0.121226 / 0.737135 (-0.615909) | 0.077111 / 0.296338 (-0.219228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279830 / 0.215209 (0.064621) | 2.748243 / 2.077655 (0.670588) | 1.481554 / 1.504120 (-0.022566) | 1.355015 / 1.541195 (-0.186180) | 1.379655 / 1.468490 (-0.088835) | 0.560378 / 4.584777 (-4.024399) | 2.407241 / 3.745712 (-1.338471) | 2.837090 / 5.269862 (-2.432771) | 1.767084 / 4.565676 (-2.798593) | 0.063517 / 0.424275 (-0.360758) | 0.005024 / 0.007607 (-0.002584) | 0.334845 / 0.226044 (0.108800) | 3.290712 / 2.268929 (1.021783) | 1.836923 / 55.444624 (-53.607702) | 1.543671 / 6.876477 (-5.332806) | 1.582319 / 2.142072 (-0.559754) | 0.637689 / 4.805227 (-4.167538) | 0.119515 / 6.500664 (-6.381149) | 0.042191 / 0.075469 (-0.033278) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980018 / 1.841788 (-0.861770) | 11.620211 / 8.074308 (3.545903) | 9.697799 / 10.191392 (-0.493593) | 0.131733 / 0.680424 (-0.548691) | 0.014007 / 0.534201 (-0.520193) | 0.286046 / 0.579283 (-0.293237) | 0.264776 / 0.434364 (-0.169588) | 0.325041 / 0.540337 (-0.215296) | 0.452740 / 1.386936 (-0.934196) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005603 / 0.011353 (-0.005750) | 0.003810 / 0.011008 (-0.007199) | 0.050773 / 0.038508 (0.012265) | 0.032601 / 0.023109 (0.009492) | 0.268035 / 0.275898 (-0.007863) | 0.292614 / 0.323480 (-0.030866) | 0.005076 / 0.007986 (-0.002910) | 0.004487 / 0.004328 (0.000159) | 0.049988 / 0.004250 (0.045737) | 0.040258 / 0.037052 (0.003205) | 0.284145 / 0.258489 (0.025656) | 0.318291 / 0.293841 (0.024450) | 0.029672 / 0.128546 (-0.098875) | 0.010534 / 0.075646 (-0.065113) | 0.059020 / 0.419271 (-0.360252) | 0.033451 / 0.043533 (-0.010082) | 0.270220 / 0.255139 (0.015081) | 0.290500 / 0.283200 (0.007300) | 0.017123 / 0.141683 (-0.124560) | 1.130870 / 1.452155 (-0.321285) | 1.160038 / 1.492716 (-0.332678) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097045 / 0.018006 (0.079039) | 0.314573 / 0.000490 (0.314083) | 0.000203 / 0.000200 (0.000003) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022396 / 0.037411 (-0.015015) | 0.079393 / 0.014526 (0.064867) | 0.088460 / 0.176557 (-0.088097) | 0.128050 / 0.737135 (-0.609085) | 0.093070 / 0.296338 (-0.203268) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293858 / 0.215209 (0.078649) | 2.819956 / 2.077655 (0.742301) | 1.540181 / 1.504120 (0.036061) | 1.419671 / 1.541195 (-0.121524) | 1.441594 / 1.468490 (-0.026897) | 0.565200 / 4.584777 (-4.019577) | 0.963967 / 3.745712 (-2.781745) | 2.752137 / 5.269862 (-2.517725) | 1.779239 / 4.565676 (-2.786438) | 0.063787 / 0.424275 (-0.360488) | 0.005344 / 0.007607 (-0.002263) | 0.344283 / 0.226044 (0.118239) | 3.353263 / 2.268929 (1.084334) | 1.898678 / 55.444624 (-53.545947) | 1.607868 / 6.876477 (-5.268609) | 1.781938 / 2.142072 (-0.360134) | 0.652119 / 4.805227 (-4.153108) | 0.117883 / 6.500664 (-6.382781) | 0.048811 / 0.075469 (-0.026658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.013154 / 1.841788 (-0.828634) | 12.421963 / 8.074308 (4.347655) | 10.352056 / 10.191392 (0.160664) | 0.143784 / 0.680424 (-0.536640) | 0.016370 / 0.534201 (-0.517831) | 0.283668 / 0.579283 (-0.295615) | 0.127070 / 0.434364 (-0.307294) | 0.326199 / 0.540337 (-0.214138) | 0.432776 / 1.386936 (-0.954160) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5e72fb13b4824dcb27aedb807e4e28c420dec244 \"CML watermark\")\n" ]
Fix webdataset pickling
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6972/reactions" }
PR_kwDODunzps5yfa_e
{ "diff_url": "https://github.com/huggingface/datasets/pull/6972.diff", "html_url": "https://github.com/huggingface/datasets/pull/6972", "merged_at": "2024-06-14T15:37:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6972.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6972" }
2024-06-14T14:43:02Z
https://api.github.com/repos/huggingface/datasets/issues/6972/comments
...by making tracked iterables picklable. This is important to make streaming datasets compatible with multiprocessing, e.g. for parallel data loading.
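As an illustration of the general technique (a sketch under assumptions, not the PR's actual implementation), a wrapper can stay picklable by remembering how to rebuild its underlying iterator instead of pickling a live one:

```python
import pickle


class TrackedIterable:
    """Illustrative wrapper: stores a factory and its args instead of a live iterator."""

    def __init__(self, factory, *args):
        self.factory = factory
        self.args = args

    def __iter__(self):
        # Rebuild the underlying iterable lazily on each iteration.
        yield from self.factory(*self.args)

    def __reduce__(self):
        # Recreate from the factory on unpickling; live iterators are not picklable.
        return (TrackedIterable, (self.factory, *self.args))


it = TrackedIterable(range, 3)
clone = pickle.loads(pickle.dumps(it))
assert list(clone) == [0, 1, 2]
```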
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6972/timeline
closed
false
6,972
null
2024-06-14T15:37:35Z
null
true
2,351,830,856
https://api.github.com/repos/huggingface/datasets/issues/6971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6971/events
[]
null
2024-06-14T14:03:34Z
[]
https://github.com/huggingface/datasets/pull/6971
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6971). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@HuggingFaceDocBuilderDev There is no doc for this change. Call a human.", "Haha it was me who triggered the CI for your PR", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005051 / 0.011353 (-0.006302) | 0.004831 / 0.011008 (-0.006178) | 0.063006 / 0.038508 (0.024498) | 0.031589 / 0.023109 (0.008480) | 0.296202 / 0.275898 (0.020304) | 0.274274 / 0.323480 (-0.049205) | 0.003199 / 0.007986 (-0.004786) | 0.002768 / 0.004328 (-0.001561) | 0.049422 / 0.004250 (0.045172) | 0.045174 / 0.037052 (0.008121) | 0.263814 / 0.258489 (0.005325) | 0.288125 / 0.293841 (-0.005716) | 0.027641 / 0.128546 (-0.100905) | 0.010439 / 0.075646 (-0.065207) | 0.203075 / 0.419271 (-0.216196) | 0.036259 / 0.043533 (-0.007274) | 0.245159 / 0.255139 (-0.009980) | 0.268897 / 0.283200 (-0.014303) | 0.019493 / 0.141683 (-0.122190) | 1.108330 / 1.452155 (-0.343824) | 1.155835 / 1.492716 (-0.336881) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096860 / 0.018006 (0.078854) | 0.309428 / 0.000490 (0.308938) | 0.000197 / 0.000200 (-0.000003) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019275 / 0.037411 (-0.018136) | 0.062623 / 0.014526 (0.048098) | 0.073871 / 0.176557 (-0.102686) | 0.120410 / 0.737135 (-0.616726) | 0.075766 / 0.296338 (-0.220572) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 
5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279876 / 0.215209 (0.064667) | 2.742429 / 2.077655 (0.664774) | 1.414368 / 1.504120 (-0.089752) | 1.293194 / 1.541195 (-0.248001) | 1.318043 / 1.468490 (-0.150447) | 0.570904 / 4.584777 (-4.013873) | 2.384386 / 3.745712 (-1.361326) | 2.757953 / 5.269862 (-2.511908) | 1.728766 / 4.565676 (-2.836910) | 0.062699 / 0.424275 (-0.361576) | 0.004951 / 0.007607 (-0.002656) | 0.332222 / 0.226044 (0.106177) | 3.407429 / 2.268929 (1.138500) | 1.777136 / 55.444624 (-53.667488) | 1.521269 / 6.876477 (-5.355207) | 1.544814 / 2.142072 (-0.597258) | 0.646249 / 4.805227 (-4.158978) | 0.117032 / 6.500664 (-6.383632) | 0.042274 / 0.075469 (-0.033195) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.016249 / 1.841788 (-0.825539) | 11.794003 / 8.074308 (3.719695) | 9.871925 / 10.191392 (-0.319467) | 0.133694 / 0.680424 (-0.546730) | 0.014904 / 0.534201 (-0.519297) | 0.287453 / 0.579283 (-0.291831) | 0.271802 / 0.434364 (-0.162561) | 0.324711 / 0.540337 (-0.215626) | 0.411812 / 1.386936 (-0.975124) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005376 / 0.011353 (-0.005977) | 0.003631 / 0.011008 (-0.007377) | 0.050154 / 0.038508 (0.011646) | 0.033665 / 0.023109 (0.010556) | 0.279062 / 0.275898 (0.003164) | 0.298899 / 0.323480 (-0.024581) | 0.004388 / 0.007986 (-0.003598) | 0.002810 / 0.004328 (-0.001518) | 0.049032 / 0.004250 (0.044781) | 0.040531 / 0.037052 (0.003478) | 0.287220 / 0.258489 (0.028731) | 0.319060 / 0.293841 (0.025219) | 0.029473 / 0.128546 (-0.099073) | 0.010317 / 0.075646 (-0.065329) | 0.058483 / 0.419271 (-0.360789) | 0.033359 / 0.043533 (-0.010174) | 0.276404 / 0.255139 (0.021265) | 0.295013 / 0.283200 (0.011813) | 0.019372 / 0.141683 (-0.122311) | 1.172624 / 1.452155 (-0.279531) | 1.176815 / 1.492716 (-0.315902) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097347 / 0.018006 (0.079341) | 0.306959 / 0.000490 (0.306469) | 0.000200 / 0.000200 (-0.000000) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022776 / 0.037411 (-0.014635) | 0.077865 / 0.014526 (0.063340) | 0.088806 / 0.176557 (-0.087751) | 0.130448 / 0.737135 (-0.606687) | 0.090973 / 0.296338 (-0.205365) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301168 / 0.215209 (0.085959) | 2.957634 / 2.077655 (0.879979) | 1.556999 / 1.504120 (0.052879) | 1.413940 / 1.541195 (-0.127255) | 1.427970 / 1.468490 (-0.040520) | 0.587653 / 4.584777 (-3.997124) | 0.951295 / 3.745712 (-2.794417) | 2.691004 / 5.269862 (-2.578858) | 1.755826 / 4.565676 (-2.809851) | 0.064883 / 0.424275 (-0.359392) | 0.005379 / 0.007607 (-0.002228) | 0.353790 / 0.226044 (0.127745) | 3.457747 / 2.268929 (1.188818) | 1.891884 / 55.444624 (-53.552740) | 1.616619 / 6.876477 (-5.259858) | 1.736167 / 2.142072 (-0.405906) | 0.669257 / 4.805227 (-4.135970) | 0.119620 / 6.500664 (-6.381044) | 0.041390 / 0.075469 (-0.034080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008851 / 1.841788 (-0.832937) | 13.151216 / 8.074308 (5.076908) | 10.398371 / 10.191392 (0.206979) | 0.143420 / 0.680424 (-0.537004) | 0.015759 / 0.534201 (-0.518442) | 0.293068 / 0.579283 (-0.286215) | 0.131449 / 0.434364 (-0.302914) | 0.334715 / 0.540337 (-0.205623) | 0.445824 / 1.386936 (-0.941112) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#087671dcaf817c906a8649404c07b0440e2732ea \"CML watermark\")\n" ]
packaging: Remove useless dependencies
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6971/reactions" }
PR_kwDODunzps5yZoc3
{ "diff_url": "https://github.com/huggingface/datasets/pull/6971.diff", "html_url": "https://github.com/huggingface/datasets/pull/6971", "merged_at": "2024-06-14T13:57:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/6971.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6971" }
2024-06-13T18:43:43Z
https://api.github.com/repos/huggingface/datasets/issues/6971/comments
Revert changes in #6396 and #6404. CVE-2023-47248 has been fixed since PyArrow v14.0.1. Meanwhile, the Python requirements require `pyarrow>=15.0.0`.
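A hedged reconstruction of the kind of guard being reverted (an illustrative sketch, not the actual diff): with `pyarrow>=15.0.0` required, the version check below can never be true, so the hotfix import can be deleted outright.

```python
from packaging import version

import pyarrow

if version.parse(pyarrow.__version__) < version.parse("14.0.1"):
    # Side-effect import that patched CVE-2023-47248 on older PyArrow releases.
    import pyarrow_hotfix  # noqa: F401
```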
{ "avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4", "events_url": "https://api.github.com/users/daskol/events{/privacy}", "followers_url": "https://api.github.com/users/daskol/followers", "following_url": "https://api.github.com/users/daskol/following{/other_user}", "gists_url": "https://api.github.com/users/daskol/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/daskol", "id": 9336514, "login": "daskol", "node_id": "MDQ6VXNlcjkzMzY1MTQ=", "organizations_url": "https://api.github.com/users/daskol/orgs", "received_events_url": "https://api.github.com/users/daskol/received_events", "repos_url": "https://api.github.com/users/daskol/repos", "site_admin": false, "starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daskol/subscriptions", "type": "User", "url": "https://api.github.com/users/daskol" }
https://api.github.com/repos/huggingface/datasets/issues/6971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6971/timeline
closed
false
6,971
null
2024-06-14T13:57:24Z
null
true
2,351,380,029
https://api.github.com/repos/huggingface/datasets/issues/6970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6970/events
[]
null
2024-06-13T15:06:18Z
[]
https://github.com/huggingface/datasets/pull/6970
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6970). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005450 / 0.011353 (-0.005902) | 0.003911 / 0.011008 (-0.007098) | 0.063467 / 0.038508 (0.024959) | 0.031029 / 0.023109 (0.007920) | 0.247916 / 0.275898 (-0.027982) | 0.274737 / 0.323480 (-0.048743) | 0.003255 / 0.007986 (-0.004731) | 0.002842 / 0.004328 (-0.001487) | 0.049617 / 0.004250 (0.045366) | 0.046689 / 0.037052 (0.009637) | 0.255152 / 0.258489 (-0.003337) | 0.288630 / 0.293841 (-0.005211) | 0.028174 / 0.128546 (-0.100372) | 0.010773 / 0.075646 (-0.064873) | 0.202119 / 0.419271 (-0.217153) | 0.035914 / 0.043533 (-0.007619) | 0.248197 / 0.255139 (-0.006942) | 0.273508 / 0.283200 (-0.009691) | 0.020626 / 0.141683 (-0.121057) | 1.125668 / 1.452155 (-0.326487) | 1.156678 / 1.492716 (-0.336038) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098294 / 0.018006 (0.080288) | 0.306661 / 0.000490 (0.306172) | 0.000227 / 0.000200 (0.000027) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019118 / 0.037411 (-0.018293) | 0.063086 / 0.014526 (0.048560) | 0.077735 / 0.176557 (-0.098822) | 0.123159 / 0.737135 (-0.613976) | 0.077228 / 0.296338 (-0.219111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280031 / 0.215209 (0.064822) | 2.762524 / 2.077655 (0.684870) | 1.444571 / 1.504120 (-0.059549) | 1.330590 / 1.541195 (-0.210604) | 1.371937 / 1.468490 (-0.096553) | 0.563847 / 4.584777 (-4.020930) | 2.369908 / 3.745712 (-1.375804) | 2.827441 / 5.269862 (-2.442420) | 1.749864 / 4.565676 (-2.815812) | 0.063996 / 0.424275 (-0.360279) | 0.005060 / 0.007607 (-0.002547) | 0.326067 / 0.226044 (0.100023) | 3.270170 / 2.268929 (1.001242) | 1.785164 / 55.444624 (-53.659460) | 1.560432 / 6.876477 (-5.316045) | 1.587005 / 2.142072 (-0.555068) | 0.645714 / 4.805227 (-4.159513) | 0.119975 / 6.500664 (-6.380689) | 0.043962 / 0.075469 (-0.031507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979003 / 1.841788 (-0.862785) | 11.988701 / 8.074308 (3.914393) | 9.788564 / 10.191392 (-0.402828) | 0.142644 / 0.680424 (-0.537780) | 0.014924 / 0.534201 (-0.519277) | 0.285942 / 0.579283 (-0.293341) | 0.264086 / 0.434364 (-0.170278) | 0.343360 / 0.540337 (-0.196977) | 0.413467 / 1.386936 (-0.973469) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005818 / 0.011353 (-0.005535) | 0.003726 / 0.011008 (-0.007283) | 0.050936 / 0.038508 (0.012428) | 0.032000 / 0.023109 (0.008890) | 0.273282 / 0.275898 (-0.002616) | 0.293889 / 0.323480 (-0.029591) | 0.004287 / 0.007986 (-0.003699) | 0.002797 / 0.004328 (-0.001531) | 0.049088 / 0.004250 (0.044838) | 0.040235 / 0.037052 (0.003183) | 0.280240 / 0.258489 (0.021751) | 0.315749 / 0.293841 (0.021908) | 0.029986 / 0.128546 (-0.098560) | 0.010440 / 0.075646 (-0.065206) | 0.058935 / 0.419271 (-0.360336) | 0.033198 / 0.043533 (-0.010335) | 0.274321 / 0.255139 (0.019182) | 0.288039 / 0.283200 (0.004840) | 0.018865 / 0.141683 (-0.122818) | 1.114915 / 1.452155 (-0.337240) | 1.180548 / 1.492716 (-0.312169) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095028 / 0.018006 (0.077022) | 0.304797 / 0.000490 (0.304307) | 0.000221 / 0.000200 (0.000021) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022556 / 0.037411 (-0.014855) | 0.076839 / 0.014526 (0.062313) | 0.090255 / 0.176557 (-0.086302) | 0.128748 / 0.737135 (-0.608387) | 0.091718 / 0.296338 (-0.204621) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296061 / 0.215209 (0.080852) | 2.851376 / 2.077655 (0.773722) | 1.548084 / 1.504120 (0.043964) | 1.428589 / 1.541195 (-0.112606) | 1.467244 / 1.468490 (-0.001246) | 0.583533 / 4.584777 (-4.001244) | 0.967436 / 3.745712 (-2.778277) | 2.774775 / 5.269862 (-2.495087) | 1.800435 / 4.565676 (-2.765242) | 0.063998 / 0.424275 (-0.360277) | 0.005420 / 0.007607 (-0.002187) | 0.346353 / 0.226044 (0.120308) | 3.383885 / 2.268929 (1.114956) | 1.902914 / 55.444624 (-53.541710) | 1.599545 / 6.876477 (-5.276932) | 1.772754 / 2.142072 (-0.369318) | 0.651804 / 4.805227 (-4.153423) | 0.120380 / 6.500664 (-6.380284) | 0.043311 / 0.075469 (-0.032159) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004414 / 1.841788 (-0.837374) | 12.356077 / 8.074308 (4.281769) | 10.513420 / 10.191392 (0.322028) | 0.132419 / 0.680424 (-0.548005) | 0.015470 / 0.534201 (-0.518731) | 0.284883 / 0.579283 (-0.294400) | 0.130763 / 0.434364 (-0.303601) | 0.320068 / 0.540337 (-0.220270) | 0.430284 / 1.386936 (-0.956652) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#574791e0a0cf57ba761f679a054b9e89e4a3ee22 \"CML watermark\")\n" ]
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6970/reactions" }
PR_kwDODunzps5yYF37
{ "diff_url": "https://github.com/huggingface/datasets/pull/6970.diff", "html_url": "https://github.com/huggingface/datasets/pull/6970", "merged_at": "2024-06-13T14:59:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/6970.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6970" }
2024-06-13T14:59:45Z
https://api.github.com/repos/huggingface/datasets/issues/6970/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6970/timeline
closed
false
6,970
null
2024-06-13T14:59:56Z
null
true
2,351,351,436
https://api.github.com/repos/huggingface/datasets/issues/6969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6969/events
[]
null
2024-06-13T15:04:39Z
[]
https://github.com/huggingface/datasets/pull/6969
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6969). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005414 / 0.011353 (-0.005939) | 0.003936 / 0.011008 (-0.007073) | 0.064129 / 0.038508 (0.025621) | 0.032985 / 0.023109 (0.009875) | 0.244051 / 0.275898 (-0.031847) | 0.273500 / 0.323480 (-0.049980) | 0.003227 / 0.007986 (-0.004759) | 0.002858 / 0.004328 (-0.001470) | 0.049212 / 0.004250 (0.044962) | 0.046432 / 0.037052 (0.009380) | 0.249543 / 0.258489 (-0.008946) | 0.297339 / 0.293841 (0.003498) | 0.027880 / 0.128546 (-0.100666) | 0.010582 / 0.075646 (-0.065065) | 0.202345 / 0.419271 (-0.216927) | 0.036402 / 0.043533 (-0.007131) | 0.253157 / 0.255139 (-0.001982) | 0.283355 / 0.283200 (0.000155) | 0.021907 / 0.141683 (-0.119776) | 1.174431 / 1.452155 (-0.277723) | 1.172103 / 1.492716 (-0.320613) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097942 / 0.018006 (0.079936) | 0.307114 / 0.000490 (0.306624) | 0.000230 / 0.000200 (0.000030) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019149 / 0.037411 (-0.018262) | 0.064283 / 0.014526 (0.049758) | 0.075643 / 0.176557 (-0.100913) | 0.122531 / 0.737135 (-0.614604) | 0.077360 / 0.296338 (-0.218978) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291790 / 0.215209 (0.076581) | 2.869234 / 2.077655 (0.791580) | 1.550266 / 1.504120 (0.046146) | 1.392392 / 1.541195 (-0.148802) | 1.375700 / 1.468490 (-0.092790) | 0.574963 / 4.584777 (-4.009814) | 2.444746 / 3.745712 (-1.300966) | 2.920602 / 5.269862 (-2.349259) | 1.812720 / 4.565676 (-2.752957) | 0.064811 / 0.424275 (-0.359464) | 0.005163 / 0.007607 (-0.002444) | 0.341306 / 0.226044 (0.115261) | 3.443177 / 2.268929 (1.174249) | 1.843510 / 55.444624 (-53.601115) | 1.534023 / 6.876477 (-5.342454) | 1.603575 / 2.142072 (-0.538498) | 0.656923 / 4.805227 (-4.148304) | 0.120338 / 6.500664 (-6.380326) | 0.042958 / 0.075469 (-0.032511) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975993 / 1.841788 (-0.865795) | 11.942335 / 8.074308 (3.868027) | 9.964277 / 10.191392 (-0.227115) | 0.131247 / 0.680424 (-0.549176) | 0.014166 / 0.534201 (-0.520035) | 0.283994 / 0.579283 (-0.295290) | 0.267516 / 0.434364 (-0.166848) | 0.328363 / 0.540337 (-0.211974) | 0.412204 / 1.386936 (-0.974732) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005867 / 0.011353 (-0.005486) | 0.003860 / 0.011008 (-0.007148) | 0.050247 / 0.038508 (0.011739) | 0.033819 / 0.023109 (0.010710) | 0.264840 / 0.275898 (-0.011058) | 0.291253 / 0.323480 (-0.032227) | 0.004481 / 0.007986 (-0.003504) | 0.002880 / 0.004328 (-0.001449) | 0.048528 / 0.004250 (0.044278) | 0.041720 / 0.037052 (0.004667) | 0.280467 / 0.258489 (0.021978) | 0.315244 / 0.293841 (0.021404) | 0.030569 / 0.128546 (-0.097977) | 0.010494 / 0.075646 (-0.065152) | 0.058652 / 0.419271 (-0.360620) | 0.034181 / 0.043533 (-0.009352) | 0.266466 / 0.255139 (0.011327) | 0.292038 / 0.283200 (0.008838) | 0.018501 / 0.141683 (-0.123182) | 1.115965 / 1.452155 (-0.336189) | 1.162753 / 1.492716 (-0.329963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.101301 / 0.018006 (0.083295) | 0.296812 / 0.000490 (0.296322) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023662 / 0.037411 (-0.013749) | 0.080678 / 0.014526 (0.066153) | 0.089867 / 0.176557 (-0.086689) | 0.130803 / 0.737135 (-0.606332) | 0.091479 / 0.296338 (-0.204860) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286028 / 0.215209 (0.070819) | 2.780072 / 2.077655 (0.702418) | 1.520146 / 1.504120 (0.016026) | 1.372952 / 1.541195 (-0.168243) | 1.428734 / 1.468490 (-0.039756) | 0.571484 / 4.584777 (-4.013293) | 0.969643 / 3.745712 (-2.776069) | 2.788157 / 5.269862 (-2.481705) | 1.841166 / 4.565676 (-2.724511) | 0.063311 / 0.424275 (-0.360964) | 0.005320 / 0.007607 (-0.002287) | 0.333341 / 0.226044 (0.107296) | 3.295141 / 2.268929 (1.026213) | 1.865537 / 55.444624 (-53.579088) | 1.584655 / 6.876477 (-5.291821) | 1.747417 / 2.142072 (-0.394655) | 0.634549 / 4.805227 (-4.170678) | 0.116373 / 6.500664 (-6.384291) | 0.041567 / 0.075469 (-0.033902) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.023086 / 1.841788 (-0.818702) | 13.091905 / 8.074308 (5.017597) | 10.572164 / 10.191392 (0.380772) | 0.142208 / 0.680424 (-0.538216) | 0.015692 / 0.534201 (-0.518509) | 0.284309 / 0.579283 (-0.294974) | 0.126467 / 0.434364 (-0.307897) | 0.322719 / 0.540337 (-0.217618) | 0.439952 / 1.386936 (-0.946985) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98fdc9e78e6d057ca66e58a37f49d6618aab8130 \"CML watermark\")\n" ]
Release: 2.20.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6969/reactions" }
PR_kwDODunzps5yX_nC
{ "diff_url": "https://github.com/huggingface/datasets/pull/6969.diff", "html_url": "https://github.com/huggingface/datasets/pull/6969", "merged_at": "2024-06-13T14:55:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/6969.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6969" }
2024-06-13T14:48:20Z
https://api.github.com/repos/huggingface/datasets/issues/6969/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6969/timeline
closed
false
6,969
null
2024-06-13T14:55:53Z
null
true
2,351,331,417
https://api.github.com/repos/huggingface/datasets/issues/6968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6968/events
[]
null
2024-06-13T17:31:37Z
[]
https://github.com/huggingface/datasets/pull/6968
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6968). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Oops, sorry for the style issue. Fixed in https://github.com/huggingface/datasets/pull/6968/commits/a4e2b28fa647b28190ae2615d7271e6ac63c8499.\r\n\r\nRegarding docs, I can't find mentions of `HF_DATASETS_OFFLINE` anywhere else in `datasets`/`hub-docs`. Once this is merged and released, I'm planning to update some `transformers` docs that briefly mention it.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005173 / 0.011353 (-0.006180) | 0.003485 / 0.011008 (-0.007524) | 0.063867 / 0.038508 (0.025359) | 0.031338 / 0.023109 (0.008229) | 0.242093 / 0.275898 (-0.033805) | 0.266606 / 0.323480 (-0.056874) | 0.003069 / 0.007986 (-0.004916) | 0.003307 / 0.004328 (-0.001022) | 0.051059 / 0.004250 (0.046808) | 0.044396 / 0.037052 (0.007344) | 0.254896 / 0.258489 (-0.003593) | 0.282835 / 0.293841 (-0.011006) | 0.027548 / 0.128546 (-0.100998) | 0.010520 / 0.075646 (-0.065126) | 0.201701 / 0.419271 (-0.217570) | 0.035613 / 0.043533 (-0.007920) | 0.240955 / 0.255139 (-0.014184) | 0.271902 / 0.283200 (-0.011298) | 0.019826 / 0.141683 (-0.121857) | 1.116994 / 1.452155 (-0.335161) | 1.162886 / 1.492716 (-0.329831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093683 / 0.018006 (0.075677) | 0.297970 / 0.000490 (0.297480) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018952 / 0.037411 (-0.018459) | 0.062710 / 0.014526 (0.048184) | 0.073641 / 0.176557 (-0.102916) | 0.121200 / 0.737135 (-0.615935) | 0.075723 / 0.296338 (-0.220616) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | 
read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286056 / 0.215209 (0.070847) | 2.811424 / 2.077655 (0.733770) | 1.448045 / 1.504120 (-0.056075) | 1.338309 / 1.541195 (-0.202885) | 1.328371 / 1.468490 (-0.140119) | 0.557282 / 4.584777 (-4.027495) | 2.362235 / 3.745712 (-1.383477) | 2.732108 / 5.269862 (-2.537754) | 1.730911 / 4.565676 (-2.834765) | 0.061689 / 0.424275 (-0.362586) | 0.004947 / 0.007607 (-0.002660) | 0.346700 / 0.226044 (0.120656) | 3.355989 / 2.268929 (1.087060) | 1.828078 / 55.444624 (-53.616546) | 1.511531 / 6.876477 (-5.364946) | 1.535897 / 2.142072 (-0.606175) | 0.630276 / 4.805227 (-4.174951) | 0.115808 / 6.500664 (-6.384857) | 0.042199 / 0.075469 (-0.033270) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969203 / 1.841788 (-0.872584) | 11.282997 / 8.074308 (3.208689) | 9.538914 / 10.191392 (-0.652478) | 0.140072 / 0.680424 (-0.540352) | 0.014021 / 0.534201 (-0.520180) | 0.283784 / 0.579283 (-0.295499) | 0.255973 / 0.434364 (-0.178391) | 0.320284 / 0.540337 (-0.220053) | 0.412689 / 1.386936 (-0.974247) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005201 / 0.011353 (-0.006152) | 0.003312 / 0.011008 (-0.007697) | 0.050044 / 0.038508 (0.011536) | 0.033610 / 0.023109 (0.010501) | 0.266429 / 0.275898 (-0.009469) | 0.287782 / 0.323480 (-0.035698) | 0.004316 / 0.007986 (-0.003670) | 0.002696 / 0.004328 (-0.001633) | 0.049667 / 0.004250 (0.045417) | 0.040244 / 0.037052 (0.003192) | 0.278870 / 0.258489 (0.020381) | 0.311415 / 0.293841 (0.017574) | 0.029150 / 0.128546 (-0.099396) | 0.010046 / 0.075646 (-0.065600) | 0.058527 / 0.419271 (-0.360744) | 0.032871 / 0.043533 (-0.010662) | 0.266582 / 
0.255139 (0.011443) | 0.286157 / 0.283200 (0.002957) | 0.017197 / 0.141683 (-0.124486) | 1.120944 / 1.452155 (-0.331211) | 1.161111 / 1.492716 (-0.331606) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092679 / 0.018006 (0.074672) | 0.299195 / 0.000490 (0.298705) | 0.000204 / 0.000200 (0.000004) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022212 / 0.037411 (-0.015199) | 0.076734 / 0.014526 (0.062208) | 0.088326 / 0.176557 (-0.088230) | 0.128209 / 0.737135 (-0.608926) | 0.088807 / 0.296338 (-0.207531) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291782 / 0.215209 (0.076573) | 2.882990 / 2.077655 (0.805335) | 1.601638 / 1.504120 (0.097518) | 1.457560 / 1.541195 (-0.083635) | 1.470517 / 1.468490 (0.002027) | 0.565738 / 4.584777 (-4.019039) | 0.949235 / 3.745712 (-2.796478) | 2.661927 / 5.269862 (-2.607934) | 1.722178 / 4.565676 (-2.843498) | 0.063680 / 0.424275 (-0.360595) | 0.005339 / 0.007607 (-0.002268) | 0.344280 / 0.226044 (0.118235) | 3.432998 / 2.268929 (1.164070) | 1.985516 / 55.444624 (-53.459108) | 1.651826 / 6.876477 (-5.224651) | 1.764541 / 2.142072 (-0.377531) | 0.640219 / 4.805227 (-4.165008) | 0.116541 / 6.500664 (-6.384124) | 0.041237 / 0.075469 (-0.034232) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.013927 / 1.841788 (-0.827861) | 11.876661 / 8.074308 (3.802353) | 10.264144 / 10.191392 (0.072752) | 0.131151 / 0.680424 (-0.549273) | 0.015774 / 0.534201 (-0.518427) | 0.284948 / 0.579283 (-0.294335) | 0.125924 / 0.434364 (-0.308439) | 0.319845 / 0.540337 (-0.220493) | 0.431978 / 1.386936 (-0.954958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#68f67741ffde68c98d0a2f59ac4d8e3a7bc03065 \"CML watermark\")\n" ]
Use `HF_HUB_OFFLINE` instead of `HF_DATASETS_OFFLINE`
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6968/reactions" }
PR_kwDODunzps5yX7Qr
{ "diff_url": "https://github.com/huggingface/datasets/pull/6968.diff", "html_url": "https://github.com/huggingface/datasets/pull/6968", "merged_at": "2024-06-13T17:25:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6968.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6968" }
2024-06-13T14:39:40Z
https://api.github.com/repos/huggingface/datasets/issues/6968/comments
To use `datasets` offline, one can use the `HF_DATASETS_OFFLINE` environment variable. This PR makes `HF_HUB_OFFLINE` the recommended environment variable for offline training. The goal is to be more consistent with the rest of the HF ecosystem and to have a single config value to set. The changes are backward-compatible, meaning that: - the `HF_DATASETS_OFFLINE` environment variable is still taken into account, though not documented - `datasets.config.HF_DATASETS_OFFLINE` still exists, though it is not used anymore (in favor of `datasets.config.HF_HUB_OFFLINE`) **Note:** it might break things in downstream libraries if they were monkeypatching `datasets.config.HF_DATASETS_OFFLINE` in their CI tests (for instance). Not much of a problem IMO.
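As a minimal sketch of the offline setup described above (the dataset name is only an illustration, not part of the PR), setting `HF_HUB_OFFLINE` before importing `datasets` makes the library rely on its local cache:

```python
import os

# Enable offline mode for the whole HF ecosystem; this must happen
# before importing `datasets` so the config picks it up at import time.
os.environ["HF_HUB_OFFLINE"] = "1"

from datasets import load_dataset

# With offline mode on, this reads from the local cache only and
# raises an error if the dataset was never downloaded.
ds = load_dataset("imdb", split="train")
```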
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
https://api.github.com/repos/huggingface/datasets/issues/6968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6968/timeline
closed
false
6,968
null
2024-06-13T17:25:37Z
null
true
2,349,146,398
https://api.github.com/repos/huggingface/datasets/issues/6967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6967/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-06-12T16:04:04Z
[]
https://github.com/huggingface/datasets/issues/6967
NONE
null
null
null
[]
Method to load Laion400m
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6967/reactions" }
I_kwDODunzps6MBSEe
null
2024-06-12T16:04:04Z
https://api.github.com/repos/huggingface/datasets/issues/6967/comments
### Feature request Large datasets like Laion400m are provided as embeddings. The methods provided by load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99 ### Motivation Trial and experimentation are the key pivot of HF. It would be great if HF could load embedding files seamlessly. ### Your contribution I can write the loader with some help.
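A hedged sketch of one possible workaround for a single downloaded shard: the file name pattern `img_emb_0.npy` comes from the issue description, and the `(n_samples, dim)` shape and `img_emb` column name are assumptions for illustration.

```python
import numpy as np
from datasets import Dataset

# Load one embedding shard; the file name follows the pattern in the issue.
emb = np.load("img_emb_0.npy")  # assumed shape: (n_samples, dim)

# Wrap the embeddings in a Dataset; each row holds one embedding vector.
ds = Dataset.from_dict({"img_emb": emb.tolist()})
print(ds)
```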
{ "avatar_url": "https://avatars.githubusercontent.com/u/6862868?v=4", "events_url": "https://api.github.com/users/humanely/events{/privacy}", "followers_url": "https://api.github.com/users/humanely/followers", "following_url": "https://api.github.com/users/humanely/following{/other_user}", "gists_url": "https://api.github.com/users/humanely/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/humanely", "id": 6862868, "login": "humanely", "node_id": "MDQ6VXNlcjY4NjI4Njg=", "organizations_url": "https://api.github.com/users/humanely/orgs", "received_events_url": "https://api.github.com/users/humanely/received_events", "repos_url": "https://api.github.com/users/humanely/repos", "site_admin": false, "starred_url": "https://api.github.com/users/humanely/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/humanely/subscriptions", "type": "User", "url": "https://api.github.com/users/humanely" }
https://api.github.com/repos/huggingface/datasets/issues/6967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6967/timeline
open
false
6,967
null
null
null
false
2,348,934,466
https://api.github.com/repos/huggingface/datasets/issues/6966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6966/events
[]
null
2024-06-19T14:16:21Z
[]
https://github.com/huggingface/datasets/pull/6966
CONTRIBUTOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005326 / 0.011353 (-0.006027) | 0.003448 / 0.011008 (-0.007560) | 0.062516 / 0.038508 (0.024008) | 0.030222 / 0.023109 (0.007113) | 0.237006 / 0.275898 (-0.038892) | 0.258224 / 0.323480 (-0.065256) | 0.003191 / 0.007986 (-0.004795) | 0.002768 / 0.004328 (-0.001560) | 0.048754 / 0.004250 (0.044504) | 0.043694 / 0.037052 (0.006641) | 0.248832 / 0.258489 (-0.009657) | 0.272217 / 0.293841 (-0.021624) | 0.029684 / 0.128546 (-0.098862) | 0.011997 / 0.075646 (-0.063650) | 0.204047 / 0.419271 (-0.215225) | 0.035944 / 0.043533 (-0.007589) | 0.242094 / 0.255139 (-0.013045) | 0.258897 / 0.283200 (-0.024303) | 0.019228 / 0.141683 (-0.122455) | 1.110193 / 1.452155 (-0.341961) | 1.166780 / 1.492716 (-0.325937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097162 / 0.018006 (0.079156) | 0.303148 / 0.000490 (0.302659) | 0.000229 / 0.000200 (0.000029) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019981 / 0.037411 (-0.017431) | 0.062669 / 0.014526 (0.048144) | 0.074801 / 0.176557 (-0.101756) | 0.120509 / 0.737135 (-0.616626) | 0.075957 / 0.296338 (-0.220382) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279527 / 0.215209 (0.064318) | 2.722749 / 2.077655 (0.645094) | 1.441770 / 1.504120 (-0.062350) | 1.312172 / 1.541195 (-0.229023) | 1.329418 
/ 1.468490 (-0.139072) | 0.723939 / 4.584777 (-3.860838) | 2.359146 / 3.745712 (-1.386566) | 2.963445 / 5.269862 (-2.306416) | 1.881974 / 4.565676 (-2.683702) | 0.078189 / 0.424275 (-0.346086) | 0.005249 / 0.007607 (-0.002358) | 0.334508 / 0.226044 (0.108463) | 3.271961 / 2.268929 (1.003032) | 1.817365 / 55.444624 (-53.627259) | 1.522755 / 6.876477 (-5.353721) | 1.514203 / 2.142072 (-0.627870) | 0.803486 / 4.805227 (-4.001741) | 0.134189 / 6.500664 (-6.366475) | 0.042761 / 0.075469 (-0.032708) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971126 / 1.841788 (-0.870662) | 11.367159 / 8.074308 (3.292851) | 9.520174 / 10.191392 (-0.671218) | 0.142705 / 0.680424 (-0.537719) | 0.014586 / 0.534201 (-0.519615) | 0.300869 / 0.579283 (-0.278414) | 0.263161 / 0.434364 (-0.171203) | 0.336403 / 0.540337 (-0.203935) | 0.436088 / 1.386936 (-0.950848) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005800 / 0.011353 (-0.005553) | 0.003906 / 0.011008 (-0.007103) | 0.050197 / 0.038508 (0.011689) | 0.031348 / 0.023109 (0.008238) | 0.265636 / 0.275898 (-0.010262) | 0.286550 / 0.323480 (-0.036930) | 0.004502 / 0.007986 (-0.003484) | 0.002828 / 0.004328 (-0.001501) | 0.049668 / 0.004250 (0.045417) | 0.039552 / 0.037052 (0.002499) | 0.279091 / 0.258489 (0.020602) | 0.309987 / 0.293841 (0.016146) | 0.032104 / 0.128546 (-0.096442) | 0.011989 / 0.075646 (-0.063657) | 0.059875 / 0.419271 (-0.359397) | 0.033446 / 0.043533 (-0.010087) | 0.265256 / 0.255139 (0.010117) | 0.285649 / 0.283200 (0.002449) | 0.018330 / 0.141683 (-0.123353) | 1.140073 / 1.452155 (-0.312081) | 1.194538 / 1.492716 (-0.298178) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093692 / 0.018006 (0.075685) | 0.301422 / 0.000490 (0.300932) | 0.000216 / 0.000200 (0.000016) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022844 / 0.037411 (-0.014568) | 0.077129 / 0.014526 (0.062603) | 0.087948 / 0.176557 (-0.088608) | 0.129905 / 0.737135 (-0.607230) | 0.089872 / 0.296338 (-0.206466) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293135 / 0.215209 (0.077926) | 2.880280 / 2.077655 (0.802626) | 1.554250 / 1.504120 (0.050130) | 1.428005 / 1.541195 (-0.113190) | 1.520863 / 1.468490 (0.052373) | 0.759903 / 4.584777 (-3.824874) | 0.959674 / 3.745712 (-2.786038) | 2.848914 / 5.269862 (-2.420948) | 1.900355 / 4.565676 (-2.665322) | 0.079434 / 0.424275 (-0.344841) | 0.005487 / 0.007607 (-0.002121) | 0.344837 / 0.226044 (0.118793) | 3.401730 / 2.268929 (1.132802) | 1.887526 / 55.444624 (-53.557098) | 1.596821 / 6.876477 (-5.279655) | 1.732190 / 2.142072 (-0.409882) | 0.800929 / 4.805227 (-4.004299) | 0.132763 / 6.500664 (-6.367901) | 0.041185 / 0.075469 (-0.034284) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.994396 / 1.841788 (-0.847391) | 12.488692 / 8.074308 (4.414384) | 10.365952 / 10.191392 (0.174560) | 0.142951 / 0.680424 (-0.537472) | 0.015448 / 0.534201 (-0.518753) | 0.305577 / 0.579283 (-0.273706) | 0.126897 / 0.434364 (-0.307467) | 0.340784 / 0.540337 (-0.199554) | 0.461955 / 1.386936 (-0.924981) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1d65718438ac4bc401468e57d5358e69012ed0c8 \"CML watermark\")\n" ]
Remove underlines between badges
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6966/reactions" }
PR_kwDODunzps5yPwL4
{ "diff_url": "https://github.com/huggingface/datasets/pull/6966.diff", "html_url": "https://github.com/huggingface/datasets/pull/6966", "merged_at": "2024-06-19T14:10:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/6966.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6966" }
2024-06-12T14:32:11Z
https://api.github.com/repos/huggingface/datasets/issues/6966/comments
## Before: <img width="935" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/93666e72-059b-4180-9e1d-ff176a3d9dac"> ## After: <img width="956" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/75df7c3e-f473-44f0-a872-eeecf6a85fe2">
{ "avatar_url": "https://avatars.githubusercontent.com/u/35881688?v=4", "events_url": "https://api.github.com/users/novialriptide/events{/privacy}", "followers_url": "https://api.github.com/users/novialriptide/followers", "following_url": "https://api.github.com/users/novialriptide/following{/other_user}", "gists_url": "https://api.github.com/users/novialriptide/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/novialriptide", "id": 35881688, "login": "novialriptide", "node_id": "MDQ6VXNlcjM1ODgxNjg4", "organizations_url": "https://api.github.com/users/novialriptide/orgs", "received_events_url": "https://api.github.com/users/novialriptide/received_events", "repos_url": "https://api.github.com/users/novialriptide/repos", "site_admin": false, "starred_url": "https://api.github.com/users/novialriptide/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/novialriptide/subscriptions", "type": "User", "url": "https://api.github.com/users/novialriptide" }
https://api.github.com/repos/huggingface/datasets/issues/6966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6966/timeline
closed
false
6,966
null
2024-06-19T14:10:11Z
null
true
2,348,653,895
https://api.github.com/repos/huggingface/datasets/issues/6965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6965/events
[]
null
2024-06-24T15:22:21Z
[]
https://github.com/huggingface/datasets/pull/6965
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6965). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005879 / 0.011353 (-0.005474) | 0.004144 / 0.011008 (-0.006865) | 0.063327 / 0.038508 (0.024819) | 0.032577 / 0.023109 (0.009468) | 0.242936 / 0.275898 (-0.032962) | 0.269882 / 0.323480 (-0.053598) | 0.003339 / 0.007986 (-0.004647) | 0.002901 / 0.004328 (-0.001428) | 0.049163 / 0.004250 (0.044912) | 0.047072 / 0.037052 (0.010019) | 0.261120 / 0.258489 (0.002631) | 0.287857 / 0.293841 (-0.005984) | 0.029688 / 0.128546 (-0.098858) | 0.012702 / 0.075646 (-0.062944) | 0.204040 / 0.419271 (-0.215231) | 0.036012 / 0.043533 (-0.007521) | 0.244210 / 0.255139 (-0.010929) | 0.267600 / 0.283200 (-0.015599) | 0.019627 / 0.141683 (-0.122056) | 1.103770 / 1.452155 (-0.348385) | 1.197710 / 1.492716 (-0.295006) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101683 / 0.018006 (0.083677) | 0.311825 / 0.000490 (0.311335) | 0.000236 / 0.000200 (0.000036) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019642 / 0.037411 (-0.017769) | 0.061618 / 0.014526 (0.047092) | 0.075237 / 0.176557 (-0.101320) | 0.122250 / 0.737135 (-0.614886) | 0.076087 / 0.296338 (-0.220251) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285120 / 0.215209 (0.069911) | 2.811527 / 2.077655 (0.733872) | 1.457961 / 1.504120 (-0.046159) | 1.333819 / 1.541195 (-0.207376) | 1.387863 / 1.468490 (-0.080627) | 0.730828 / 4.584777 (-3.853949) | 2.417224 / 3.745712 (-1.328488) | 2.994842 / 5.269862 (-2.275020) | 1.922079 / 4.565676 (-2.643598) | 0.087486 / 0.424275 (-0.336789) | 0.005211 / 0.007607 (-0.002396) | 0.335585 / 0.226044 (0.109541) | 3.297664 / 2.268929 (1.028735) | 1.809391 / 55.444624 (-53.635233) | 1.501646 / 6.876477 (-5.374831) | 1.567573 / 2.142072 (-0.574500) | 0.800816 / 4.805227 (-4.004411) | 0.134204 / 6.500664 (-6.366460) | 0.043156 / 0.075469 (-0.032313) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982955 / 1.841788 (-0.858833) | 12.256850 / 8.074308 (4.182542) | 9.821500 / 10.191392 (-0.369892) | 0.143739 / 0.680424 (-0.536685) | 0.014425 / 0.534201 (-0.519776) | 0.302718 / 0.579283 (-0.276565) | 0.267746 / 0.434364 (-0.166618) | 0.340036 / 0.540337 (-0.200301) | 0.436211 / 1.386936 (-0.950725) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006136 / 0.011353 (-0.005217) | 0.004125 / 0.011008 (-0.006883) | 0.050341 / 0.038508 (0.011833) | 0.034547 / 0.023109 (0.011438) | 0.270237 / 0.275898 (-0.005661) | 0.294503 / 0.323480 (-0.028977) | 0.004528 / 0.007986 (-0.003458) | 0.003103 / 0.004328 (-0.001225) | 0.048817 / 0.004250 (0.044566) | 0.041301 / 0.037052 (0.004249) | 0.279461 / 0.258489 (0.020972) | 0.319376 / 0.293841 (0.025535) | 0.032733 / 0.128546 (-0.095813) | 0.012426 / 0.075646 (-0.063221) | 0.060543 / 0.419271 (-0.358729) | 0.034015 / 0.043533 (-0.009518) | 0.267387 / 0.255139 (0.012248) | 0.288590 / 0.283200 (0.005390) | 0.019697 / 0.141683 (-0.121986) | 1.145994 / 1.452155 (-0.306161) | 1.198122 / 1.492716 (-0.294595) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099091 / 0.018006 (0.081085) | 0.313767 / 0.000490 (0.313277) | 0.000220 / 0.000200 (0.000020) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023219 / 0.037411 (-0.014192) | 0.083609 / 0.014526 (0.069084) | 0.089529 / 0.176557 (-0.087028) | 0.131025 / 0.737135 (-0.606110) | 0.091947 / 0.296338 (-0.204391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283711 / 0.215209 (0.068502) | 2.811702 / 2.077655 (0.734047) | 1.577720 / 1.504120 (0.073600) | 1.415700 / 1.541195 (-0.125495) | 1.436097 / 1.468490 (-0.032393) | 0.732090 / 4.584777 (-3.852687) | 0.990552 / 3.745712 (-2.755160) | 2.887319 / 5.269862 (-2.382543) | 1.923707 / 4.565676 (-2.641969) | 0.079361 / 0.424275 (-0.344915) | 0.005520 / 0.007607 (-0.002087) | 0.336684 / 0.226044 (0.110639) | 3.325342 / 2.268929 (1.056413) | 1.911853 / 55.444624 (-53.532771) | 1.621450 / 6.876477 (-5.255027) | 1.807964 / 2.142072 (-0.334109) | 0.813958 / 4.805227 (-3.991269) | 0.137564 / 6.500664 (-6.363100) | 0.043151 / 0.075469 (-0.032318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002775 / 1.841788 (-0.839013) | 12.526367 / 8.074308 (4.452058) | 10.426992 / 10.191392 (0.235600) | 0.134902 / 0.680424 (-0.545522) | 0.016726 / 0.534201 (-0.517475) | 0.303549 / 0.579283 (-0.275734) | 0.129334 / 0.434364 (-0.305030) | 0.339254 / 0.540337 (-0.201084) | 0.456845 / 1.386936 (-0.930091) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5464b32ce03739431235c13f314732201abcfac \"CML watermark\")\n" ]
Improve skip take shuffling and distributed
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6965/reactions" }
PR_kwDODunzps5yOyNG
{ "diff_url": "https://github.com/huggingface/datasets/pull/6965.diff", "html_url": "https://github.com/huggingface/datasets/pull/6965", "merged_at": "2024-06-24T15:16:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/6965.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6965" }
2024-06-12T12:30:27Z
https://api.github.com/repos/huggingface/datasets/issues/6965/comments
Set the right behavior of skip/take depending on whether it's called before or after shuffle/split_by_node.
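A short sketch of the ordering distinction this PR pins down, on a streaming dataset (the dataset name, sizes, and seed are placeholders, not taken from the PR):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train", streaming=True)

# take() before shuffle(): the first 1000 examples are selected first
# and then shuffled, so every epoch iterates over the same 1000 examples.
fixed_head = ds.take(1000).shuffle(seed=42, buffer_size=100)

# shuffle() before take(): the stream is shuffled first, so which 1000
# examples you get can change when the shuffling changes across epochs.
moving_sample = ds.shuffle(seed=42, buffer_size=100).take(1000)
```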
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6965/timeline
closed
false
6,965
null
2024-06-24T15:16:16Z
null
true
2,344,973,229
https://api.github.com/repos/huggingface/datasets/issues/6964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6964/events
[]
null
2024-06-14T15:04:49Z
[]
https://github.com/huggingface/datasets/pull/6964
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6964). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005735 / 0.011353 (-0.005618) | 0.003746 / 0.011008 (-0.007263) | 0.063115 / 0.038508 (0.024606) | 0.033557 / 0.023109 (0.010447) | 0.247599 / 0.275898 (-0.028299) | 0.275310 / 0.323480 (-0.048170) | 0.004203 / 0.007986 (-0.003783) | 0.002770 / 0.004328 (-0.001558) | 0.050951 / 0.004250 (0.046700) | 0.046609 / 0.037052 (0.009557) | 0.256237 / 0.258489 (-0.002252) | 0.292050 / 0.293841 (-0.001791) | 0.027991 / 0.128546 (-0.100556) | 0.010367 / 0.075646 (-0.065279) | 0.202295 / 0.419271 (-0.216977) | 0.037287 / 0.043533 (-0.006246) | 0.250330 / 0.255139 (-0.004809) | 0.281250 / 0.283200 (-0.001950) | 0.018832 / 0.141683 (-0.122851) | 1.117303 / 1.452155 (-0.334852) | 1.141593 / 1.492716 (-0.351123) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097318 / 0.018006 (0.079312) | 0.304853 / 0.000490 (0.304364) | 0.000220 / 0.000200 (0.000020) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020353 / 0.037411 (-0.017058) | 0.065497 / 0.014526 (0.050971) | 0.076205 / 0.176557 (-0.100351) | 0.122471 / 0.737135 (-0.614665) | 0.079522 / 0.296338 (-0.216816) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282604 / 0.215209 (0.067395) | 2.743198 / 2.077655 (0.665543) | 1.480436 / 1.504120 (-0.023684) | 1.373935 / 1.541195 (-0.167260) | 1.388901 / 1.468490 (-0.079589) | 0.571961 / 4.584777 (-4.012816) | 2.431790 / 3.745712 (-1.313922) | 2.942126 / 5.269862 (-2.327736) | 1.857361 / 4.565676 (-2.708316) | 0.063535 / 0.424275 (-0.360740) | 0.005039 / 0.007607 (-0.002568) | 0.331726 / 0.226044 (0.105682) | 3.282504 / 2.268929 (1.013576) | 1.852303 / 55.444624 (-53.592321) | 1.506665 / 6.876477 (-5.369812) | 1.577524 / 2.142072 (-0.564548) | 0.646267 / 4.805227 (-4.158960) | 0.118706 / 6.500664 (-6.381958) | 0.043437 / 0.075469 (-0.032033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978073 / 1.841788 (-0.863714) | 12.028575 / 8.074308 (3.954267) | 10.066303 / 10.191392 (-0.125090) | 0.131763 / 0.680424 (-0.548661) | 0.016479 / 0.534201 (-0.517722) | 0.286012 / 0.579283 (-0.293271) | 0.266824 / 0.434364 (-0.167540) | 0.328452 / 0.540337 (-0.211885) | 0.414562 / 1.386936 (-0.972374) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005943 / 0.011353 (-0.005409) | 0.003992 / 0.011008 (-0.007016) | 0.051159 / 0.038508 (0.012651) | 0.033805 / 0.023109 (0.010695) | 0.268425 / 0.275898 (-0.007474) | 0.295662 / 0.323480 (-0.027818) | 0.004473 / 0.007986 (-0.003512) | 0.002910 / 0.004328 (-0.001418) | 0.048595 / 0.004250 (0.044345) | 0.043724 / 0.037052 (0.006671) | 0.280552 / 0.258489 (0.022063) | 0.319052 / 0.293841 (0.025211) | 0.031269 / 0.128546 (-0.097278) | 0.010976 / 0.075646 (-0.064671) | 0.060128 / 0.419271 (-0.359144) | 0.034198 / 0.043533 (-0.009335) | 0.269664 / 0.255139 (0.014525) | 0.292249 / 0.283200 (0.009049) | 0.019950 / 0.141683 (-0.121733) | 1.143073 / 1.452155 (-0.309082) | 1.188553 / 1.492716 (-0.304164) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095188 / 0.018006 (0.077182) | 0.300207 / 0.000490 (0.299717) | 0.000205 / 0.000200 (0.000005) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023610 / 0.037411 (-0.013802) | 0.082868 / 0.014526 (0.068342) | 0.089059 / 0.176557 (-0.087498) | 0.131735 / 0.737135 (-0.605401) | 0.091467 / 0.296338 (-0.204872) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302497 / 0.215209 (0.087287) | 2.985794 / 2.077655 (0.908140) | 1.590783 / 1.504120 (0.086663) | 1.468819 / 1.541195 (-0.072375) | 1.503115 / 1.468490 (0.034625) | 0.575109 / 4.584777 (-4.009668) | 0.972370 / 3.745712 (-2.773342) | 2.727976 / 5.269862 (-2.541886) | 1.793438 / 4.565676 (-2.772238) | 0.068840 / 0.424275 (-0.355435) | 0.005440 / 0.007607 (-0.002167) | 0.351843 / 0.226044 (0.125799) | 3.523108 / 2.268929 (1.254180) | 1.928576 / 55.444624 (-53.516049) | 1.627939 / 6.876477 (-5.248538) | 1.837618 / 2.142072 (-0.304454) | 0.669351 / 4.805227 (-4.135876) | 0.121822 / 6.500664 (-6.378842) | 0.042056 / 0.075469 (-0.033413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020081 / 1.841788 (-0.821707) | 13.417448 / 8.074308 (5.343140) | 10.974516 / 10.191392 (0.783124) | 0.135240 / 0.680424 (-0.545184) | 0.017581 / 0.534201 (-0.516620) | 0.289080 / 0.579283 (-0.290203) | 0.127679 / 0.434364 (-0.306685) | 0.331818 / 0.540337 (-0.208520) | 0.453143 / 1.386936 (-0.933793) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef2fb358433678b322d275c0bdee3239fa6485b2 \"CML watermark\")\n" ]
Fix resuming arrow format
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6964/reactions" }
PR_kwDODunzps5yCNGa
{ "diff_url": "https://github.com/huggingface/datasets/pull/6964.diff", "html_url": "https://github.com/huggingface/datasets/pull/6964", "merged_at": "2024-06-14T14:58:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6964.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6964" }
2024-06-10T22:40:33Z
https://api.github.com/repos/huggingface/datasets/issues/6964/comments
following https://github.com/huggingface/datasets/pull/6658
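A hedged sketch of the checkpoint-and-resume pattern this fix concerns, using the stateful iterable-dataset API; the toy data and shard count are assumptions for illustration.

```python
from datasets import Dataset

# Build a small iterable dataset to illustrate resuming mid-stream.
ds = Dataset.from_dict({"a": list(range(6))}).to_iterable_dataset(num_shards=3)

state_dict = None
for idx, example in enumerate(ds):
    if idx == 2:
        state_dict = ds.state_dict()  # save alongside a training checkpoint
        break

# Later: restore the saved position and continue from the next example.
ds.load_state_dict(state_dict)
for example in ds:
    print(example)
```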
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6964/timeline
closed
false
6,964
null
2024-06-14T14:58:37Z
null
true
2,344,269,477
https://api.github.com/repos/huggingface/datasets/issues/6963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6963/events
[]
null
2024-06-28T09:53:11Z
[]
https://github.com/huggingface/datasets/pull/6963
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "ci failures are r-unrelated to this PR, merging", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005532 / 0.011353 (-0.005821) | 0.004018 / 0.011008 (-0.006991) | 0.064685 / 0.038508 (0.026177) | 0.031303 / 0.023109 (0.008194) | 0.254670 / 0.275898 (-0.021228) | 0.271357 / 0.323480 (-0.052123) | 0.003372 / 0.007986 (-0.004614) | 0.004153 / 0.004328 (-0.000175) | 0.050381 / 0.004250 (0.046131) | 0.046837 / 0.037052 (0.009784) | 0.253166 / 0.258489 (-0.005323) | 0.294257 / 0.293841 (0.000416) | 0.029746 / 0.128546 (-0.098800) | 0.012519 / 0.075646 (-0.063127) | 0.208822 / 0.419271 (-0.210449) | 0.036925 / 0.043533 (-0.006608) | 0.247636 / 0.255139 (-0.007503) | 0.269102 / 0.283200 (-0.014097) | 0.019021 / 0.141683 (-0.122662) | 1.138825 / 1.452155 (-0.313330) | 1.203301 / 1.492716 (-0.289415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095950 / 0.018006 (0.077944) | 0.303347 / 0.000490 (0.302857) | 0.000221 / 0.000200 (0.000022) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019014 / 0.037411 (-0.018397) | 0.062220 / 0.014526 (0.047694) | 0.074811 / 0.176557 (-0.101745) | 0.122917 / 0.737135 (-0.614218) | 0.075765 / 0.296338 (-0.220574) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288359 / 0.215209 (0.073150) | 2.849491 / 2.077655 (0.771837) | 1.479448 / 1.504120 (-0.024672) | 1.350560 / 1.541195 (-0.190635) | 1.366079 / 1.468490 (-0.102411) | 0.733609 / 4.584777 (-3.851168) | 2.416014 / 3.745712 (-1.329698) | 2.954834 / 5.269862 (-2.315028) | 1.985703 / 4.565676 (-2.579974) | 0.080589 / 0.424275 (-0.343686) | 0.005581 / 0.007607 (-0.002026) | 0.343706 / 0.226044 (0.117661) | 3.416257 / 2.268929 (1.147329) | 1.865937 / 55.444624 (-53.578687) | 1.545911 / 6.876477 (-5.330566) | 1.711004 / 2.142072 (-0.431069) | 0.821231 / 4.805227 (-3.983996) | 0.138865 / 6.500664 (-6.361799) | 0.046466 / 0.075469 (-0.029003) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965632 / 1.841788 (-0.876155) | 11.812101 / 8.074308 (3.737792) | 9.399156 / 10.191392 (-0.792236) | 0.143325 / 0.680424 (-0.537099) | 0.014824 / 0.534201 (-0.519377) | 0.306143 / 0.579283 (-0.273140) | 0.264063 / 0.434364 (-0.170301) | 0.347820 / 0.540337 (-0.192517) | 0.476818 / 1.386936 (-0.910118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005978 / 0.011353 (-0.005375) | 0.004482 / 0.011008 (-0.006526) | 0.053788 / 0.038508 (0.015280) | 0.033963 / 0.023109 (0.010853) | 0.267258 / 0.275898 (-0.008640) | 0.290916 / 0.323480 (-0.032563) | 0.004485 / 0.007986 (-0.003500) | 0.002876 / 0.004328 (-0.001453) | 0.048637 / 0.004250 (0.044386) | 0.042050 / 0.037052 (0.004997) | 0.278607 / 0.258489 (0.020118) | 0.315411 / 0.293841 (0.021570) | 0.032059 / 0.128546 (-0.096487) | 0.012851 / 0.075646 (-0.062795) | 0.061672 / 0.419271 (-0.357600) | 0.034545 / 0.043533 (-0.008988) | 0.262068 / 0.255139 (0.006929) | 0.291197 / 0.283200 (0.007997) | 0.019092 / 0.141683 (-0.122591) | 1.108690 / 1.452155 (-0.343464) | 1.161025 / 1.492716 (-0.331691) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | 
get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096775 / 0.018006 (0.078768) | 0.306825 / 0.000490 (0.306335) | 0.000210 / 0.000200 (0.000010) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023160 / 0.037411 (-0.014251) | 0.078794 / 0.014526 (0.064268) | 0.088954 / 0.176557 (-0.087602) | 0.129488 / 0.737135 (-0.607648) | 0.091239 / 0.296338 (-0.205099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292911 / 0.215209 (0.077702) | 2.910802 / 2.077655 (0.833148) | 1.569310 / 1.504120 (0.065191) | 1.433807 / 1.541195 (-0.107388) | 1.478619 / 1.468490 (0.010129) | 0.720982 / 4.584777 (-3.863795) | 0.972104 / 3.745712 (-2.773608) | 3.026941 / 5.269862 (-2.242921) | 1.919170 / 4.565676 (-2.646506) | 0.079292 / 0.424275 (-0.344983) | 0.005227 / 0.007607 (-0.002380) | 0.345363 / 0.226044 (0.119319) | 3.416149 / 2.268929 (1.147221) | 1.938377 / 55.444624 (-53.506248) | 1.626037 / 6.876477 (-5.250440) | 1.644405 / 2.142072 (-0.497668) | 0.802485 / 4.805227 (-4.002742) | 0.135114 / 6.500664 (-6.365550) | 0.042015 / 0.075469 (-0.033454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014812 / 1.841788 (-0.826976) | 12.583844 / 8.074308 (4.509536) | 10.522495 / 10.191392 (0.331103) | 0.143336 / 0.680424 (-0.537088) | 0.015843 / 0.534201 (-0.518357) | 0.306556 / 0.579283 (-0.272727) | 0.129654 / 0.434364 (-0.304710) | 0.340442 / 0.540337 (-0.199896) | 0.445220 / 1.386936 (-0.941716) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5cab892dcd26fb51938634e13e300c6611ab66e0 \"CML watermark\")\n" ]
[Streaming] retry on requests errors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6963/reactions" }
PR_kwDODunzps5x_yu-
{ "diff_url": "https://github.com/huggingface/datasets/pull/6963.diff", "html_url": "https://github.com/huggingface/datasets/pull/6963", "merged_at": "2024-06-28T09:46:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/6963.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6963" }
2024-06-10T15:51:56Z
https://api.github.com/repos/huggingface/datasets/issues/6963/comments
reported in https://discuss.huggingface.co/t/speeding-up-streaming-of-large-datasets-fineweb/90714/6 when training with a streaming dataloader cc @Wauplin it looks like the retries from `hfh` are not always enough. In this PR I let `datasets` do additional retries (that users can configure in `datasets.config`) since I couldn't find an easy way to increase the max_retries for `hfh` users in general.
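For context, a minimal usage sketch of the configurable retries this PR describes. The attribute names `STREAMING_READ_MAX_RETRIES` and `STREAMING_READ_RETRY_INTERVAL` are assumptions inferred from the PR description, as is the example repo id; check `datasets.config` for the actual knobs.

```python
# Sketch with assumed attribute names: raise the retry budget before streaming.
import datasets

datasets.config.STREAMING_READ_MAX_RETRIES = 20    # max retries per read (assumed name)
datasets.config.STREAMING_READ_RETRY_INTERVAL = 5  # seconds between retries (assumed name)

# "HuggingFaceFW/fineweb" stands in for any large dataset streamed during training.
ds = datasets.load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
for example in ds:
    pass  # training step would go here
```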
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6963/timeline
closed
false
6,963
null
2024-06-28T09:46:52Z
null
true
2,343,394,378
https://api.github.com/repos/huggingface/datasets/issues/6962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6962/events
[]
null
2024-06-11T08:31:52Z
[]
https://github.com/huggingface/datasets/pull/6962
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6962). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005520 / 0.011353 (-0.005833) | 0.003989 / 0.011008 (-0.007019) | 0.064786 / 0.038508 (0.026278) | 0.031075 / 0.023109 (0.007966) | 0.241619 / 0.275898 (-0.034279) | 0.275341 / 0.323480 (-0.048139) | 0.003139 / 0.007986 (-0.004847) | 0.002820 / 0.004328 (-0.001508) | 0.049766 / 0.004250 (0.045515) | 0.045047 / 0.037052 (0.007995) | 0.251906 / 0.258489 (-0.006583) | 0.285889 / 0.293841 (-0.007952) | 0.028297 / 0.128546 (-0.100249) | 0.010683 / 0.075646 (-0.064963) | 0.206467 / 0.419271 (-0.212805) | 0.036267 / 0.043533 (-0.007266) | 0.250720 / 0.255139 (-0.004419) | 0.268565 / 0.283200 (-0.014635) | 0.020394 / 0.141683 (-0.121289) | 1.114283 / 1.452155 (-0.337872) | 1.163884 / 1.492716 (-0.328833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.112698 / 0.018006 (0.094692) | 0.302740 / 0.000490 (0.302251) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019337 / 0.037411 (-0.018075) | 0.062854 / 0.014526 (0.048328) | 0.077088 / 0.176557 (-0.099468) | 0.120926 / 0.737135 (-0.616209) | 0.075594 / 0.296338 (-0.220744) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290787 / 0.215209 (0.075578) | 2.867894 / 2.077655 (0.790239) | 1.490043 / 1.504120 (-0.014076) | 1.356383 / 1.541195 (-0.184812) | 1.400229 / 1.468490 (-0.068261) | 0.582076 / 4.584777 (-4.002701) | 2.398270 / 3.745712 (-1.347442) | 2.856459 / 5.269862 (-2.413403) | 1.815545 / 4.565676 (-2.750131) | 0.063259 / 0.424275 (-0.361016) | 0.005056 / 0.007607 (-0.002551) | 0.347699 / 0.226044 (0.121655) | 3.466511 / 2.268929 (1.197582) | 1.862096 / 55.444624 (-53.582528) | 1.532324 / 6.876477 (-5.344152) | 1.599411 / 2.142072 (-0.542661) | 0.657350 / 4.805227 (-4.147878) | 0.118981 / 6.500664 (-6.381683) | 0.042224 / 0.075469 (-0.033245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965649 / 1.841788 (-0.876139) | 11.896501 / 8.074308 (3.822193) | 9.873923 / 10.191392 (-0.317469) | 0.141165 / 0.680424 (-0.539258) | 0.013885 / 0.534201 (-0.520316) | 0.291464 / 0.579283 (-0.287819) | 0.273153 / 0.434364 (-0.161211) | 0.324395 / 0.540337 (-0.215942) | 0.422040 / 1.386936 (-0.964897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005640 / 0.011353 (-0.005713) | 0.004035 / 0.011008 (-0.006973) | 0.050831 / 0.038508 (0.012323) | 0.032841 / 0.023109 (0.009732) | 0.272226 / 0.275898 (-0.003672) | 0.297880 / 0.323480 (-0.025599) | 0.004397 / 0.007986 (-0.003588) | 0.002762 / 0.004328 (-0.001566) | 0.049887 / 0.004250 (0.045637) | 0.040372 / 0.037052 (0.003320) | 0.286337 / 0.258489 (0.027848) | 0.320015 / 0.293841 (0.026174) | 0.029992 / 0.128546 (-0.098554) | 0.010781 / 0.075646 (-0.064865) | 0.059391 / 0.419271 (-0.359880) | 0.034410 / 0.043533 (-0.009123) | 0.273024 / 0.255139 (0.017885) | 0.288953 / 0.283200 (0.005754) | 0.018072 / 0.141683 (-0.123611) | 1.125742 / 1.452155 (-0.326413) | 1.175233 / 1.492716 (-0.317483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093470 / 0.018006 (0.075463) | 0.313248 / 0.000490 (0.312758) | 0.000324 / 0.000200 (0.000124) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023529 / 0.037411 (-0.013882) | 0.077305 / 0.014526 (0.062779) | 0.088916 / 0.176557 (-0.087640) | 0.128792 / 0.737135 (-0.608344) | 0.090141 / 0.296338 (-0.206197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291110 / 0.215209 (0.075901) | 2.848118 / 2.077655 (0.770464) | 1.581664 / 1.504120 (0.077544) | 1.446390 / 1.541195 (-0.094804) | 1.452594 / 1.468490 (-0.015896) | 0.571213 / 4.584777 (-4.013564) | 0.976382 / 3.745712 (-2.769330) | 2.756192 / 5.269862 (-2.513670) | 1.770274 / 4.565676 (-2.795403) | 0.064513 / 0.424275 (-0.359763) | 0.005334 / 0.007607 (-0.002273) | 0.347380 / 0.226044 (0.121335) | 3.424800 / 2.268929 (1.155871) | 1.942374 / 55.444624 (-53.502250) | 1.636069 / 6.876477 (-5.240407) | 1.795327 / 2.142072 (-0.346745) | 0.658942 / 4.805227 (-4.146285) | 0.119542 / 6.500664 (-6.381123) | 0.041826 / 0.075469 (-0.033643) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007230 / 1.841788 (-0.834558) | 12.293084 / 8.074308 (4.218776) | 10.618104 / 10.191392 (0.426712) | 0.133691 / 0.680424 (-0.546733) | 0.015725 / 0.534201 (-0.518476) | 0.288860 / 0.579283 (-0.290423) | 0.130546 / 0.434364 (-0.303818) | 0.327279 / 0.540337 (-0.213059) | 0.428768 / 1.386936 (-0.958168) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#af3acfdfcf76bb980dbac871540e30c2cade0cf9 \"CML watermark\")\n" ]
fix(ci): remove unnecessary permissions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6962/reactions" }
PR_kwDODunzps5x8yHt
{ "diff_url": "https://github.com/huggingface/datasets/pull/6962.diff", "html_url": "https://github.com/huggingface/datasets/pull/6962", "merged_at": "2024-06-11T08:25:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/6962.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6962" }
2024-06-10T09:28:02Z
https://api.github.com/repos/huggingface/datasets/issues/6962/comments
### What does this PR do? Remove unnecessary permissions granted to the actions workflow. Sorry for the mishap.
{ "avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4", "events_url": "https://api.github.com/users/McPatate/events{/privacy}", "followers_url": "https://api.github.com/users/McPatate/followers", "following_url": "https://api.github.com/users/McPatate/following{/other_user}", "gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/McPatate", "id": 9112841, "login": "McPatate", "node_id": "MDQ6VXNlcjkxMTI4NDE=", "organizations_url": "https://api.github.com/users/McPatate/orgs", "received_events_url": "https://api.github.com/users/McPatate/received_events", "repos_url": "https://api.github.com/users/McPatate/repos", "site_admin": false, "starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/McPatate/subscriptions", "type": "User", "url": "https://api.github.com/users/McPatate" }
https://api.github.com/repos/huggingface/datasets/issues/6962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6962/timeline
closed
false
6,962
null
2024-06-11T08:25:47Z
null
true
2,342,022,418
https://api.github.com/repos/huggingface/datasets/issues/6961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6961/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-06-13T16:05:00Z
[]
https://github.com/huggingface/datasets/issues/6961
NONE
null
null
null
[ "We're unlikely to add more features/support for datasets with python loading scripts, which include datasets with manual download. Sorry for the inconvenience" ]
Manual downloads should count as downloads
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6961/reactions" }
I_kwDODunzps6LmG0S
null
2024-06-09T04:52:06Z
https://api.github.com/repos/huggingface/datasets/issues/6961/comments
### Feature request I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats ### Motivation This would ensure that downloads are accurately reported to end users. ### Your contribution N/A
{ "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/umarbutler", "id": 8473183, "login": "umarbutler", "node_id": "MDQ6VXNlcjg0NzMxODM=", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "repos_url": "https://api.github.com/users/umarbutler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "type": "User", "url": "https://api.github.com/users/umarbutler" }
https://api.github.com/repos/huggingface/datasets/issues/6961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6961/timeline
open
false
6,961
null
null
null
false
2,340,791,685
https://api.github.com/repos/huggingface/datasets/issues/6960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6960/events
[]
null
2024-06-08T14:58:27Z
[]
https://github.com/huggingface/datasets/pull/6960
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6960). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Yes!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005007 / 0.011353 (-0.006346) | 0.003603 / 0.011008 (-0.007405) | 0.062719 / 0.038508 (0.024211) | 0.029327 / 0.023109 (0.006217) | 0.250360 / 0.275898 (-0.025538) | 0.265095 / 0.323480 (-0.058385) | 0.004205 / 0.007986 (-0.003781) | 0.002713 / 0.004328 (-0.001616) | 0.049209 / 0.004250 (0.044958) | 0.045162 / 0.037052 (0.008110) | 0.260439 / 0.258489 (0.001950) | 0.287778 / 0.293841 (-0.006063) | 0.027458 / 0.128546 (-0.101088) | 0.010169 / 0.075646 (-0.065477) | 0.199487 / 0.419271 (-0.219784) | 0.036584 / 0.043533 (-0.006949) | 0.254523 / 0.255139 (-0.000616) | 0.269902 / 0.283200 (-0.013298) | 0.017138 / 0.141683 (-0.124545) | 1.099285 / 1.452155 (-0.352869) | 1.150878 / 1.492716 (-0.341839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092868 / 0.018006 (0.074862) | 0.300421 / 0.000490 (0.299932) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018810 / 0.037411 (-0.018601) | 0.062341 / 0.014526 (0.047815) | 0.074779 / 0.176557 (-0.101777) | 0.120641 / 0.737135 (-0.616494) | 0.075020 / 0.296338 (-0.221318) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277782 / 0.215209 (0.062573) | 2.716427 / 2.077655 (0.638772) | 1.434204 / 1.504120 (-0.069916) | 1.335990 / 1.541195 (-0.205205) | 1.336636 / 1.468490 (-0.131854) | 0.557562 / 4.584777 (-4.027215) | 2.323517 / 3.745712 (-1.422196) | 2.647937 / 5.269862 (-2.621925) | 1.728735 / 4.565676 (-2.836941) | 0.061888 / 0.424275 (-0.362387) | 0.004981 / 0.007607 (-0.002627) | 0.329429 / 0.226044 (0.103385) | 3.324708 / 2.268929 (1.055779) | 1.832641 / 55.444624 (-53.611983) | 1.514386 / 6.876477 (-5.362091) | 1.656912 / 2.142072 (-0.485160) | 0.630706 / 4.805227 (-4.174521) | 0.116250 / 6.500664 (-6.384414) | 0.042598 / 0.075469 (-0.032871) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969217 / 1.841788 (-0.872570) | 11.232580 / 8.074308 (3.158272) | 9.541306 / 10.191392 (-0.650086) | 0.139544 / 0.680424 (-0.540880) | 0.014441 / 0.534201 (-0.519760) | 0.285834 / 0.579283 (-0.293449) | 0.261950 / 0.434364 (-0.172414) | 0.325449 / 0.540337 (-0.214889) | 0.415501 / 1.386936 (-0.971435) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005422 / 0.011353 (-0.005931) | 0.003528 / 0.011008 (-0.007480) | 0.049582 / 0.038508 (0.011074) | 0.032683 / 0.023109 (0.009574) | 0.277309 / 0.275898 (0.001411) | 0.298598 / 0.323480 (-0.024882) | 0.004325 / 0.007986 (-0.003661) | 0.002741 / 0.004328 (-0.001588) | 0.047933 / 0.004250 (0.043683) | 0.040778 / 0.037052 (0.003726) | 0.287492 / 0.258489 (0.029003) | 0.311408 / 0.293841 (0.017567) | 0.029482 / 0.128546 (-0.099064) | 0.010630 / 0.075646 (-0.065016) | 0.057745 / 0.419271 (-0.361526) | 0.033501 / 0.043533 (-0.010031) | 0.279880 / 0.255139 (0.024741) | 0.297421 / 0.283200 (0.014221) | 0.017907 / 0.141683 (-0.123776) | 1.152221 / 1.452155 (-0.299934) | 1.189332 / 1.492716 (-0.303385) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094464 / 0.018006 (0.076457) | 0.300769 / 0.000490 (0.300279) | 0.000196 / 0.000200 (-0.000004) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022232 / 0.037411 (-0.015179) | 0.076626 / 0.014526 (0.062100) | 0.087807 / 0.176557 (-0.088750) | 0.128847 / 0.737135 (-0.608288) | 0.092135 / 0.296338 (-0.204203) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299013 / 0.215209 (0.083804) | 2.929788 / 2.077655 (0.852133) | 1.614185 / 1.504120 (0.110065) | 1.486720 / 1.541195 (-0.054475) | 1.492473 / 1.468490 (0.023983) | 0.563699 / 4.584777 (-4.021078) | 0.928820 / 3.745712 (-2.816892) | 2.597271 / 5.269862 (-2.672590) | 1.716534 / 4.565676 (-2.849142) | 0.062568 / 0.424275 (-0.361707) | 0.005168 / 0.007607 (-0.002439) | 0.353781 / 0.226044 (0.127737) | 3.493732 / 2.268929 (1.224803) | 2.018343 / 55.444624 (-53.426282) | 1.694516 / 6.876477 (-5.181961) | 1.796950 / 2.142072 (-0.345123) | 0.634846 / 4.805227 (-4.170382) | 0.115230 / 6.500664 (-6.385434) | 0.040816 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986212 / 1.841788 (-0.855575) | 11.954392 / 8.074308 (3.880084) | 10.299670 / 10.191392 (0.108278) | 0.128358 / 0.680424 (-0.552066) | 0.016313 / 0.534201 (-0.517888) | 0.289621 / 0.579283 (-0.289662) | 0.124708 / 0.434364 (-0.309656) | 0.325269 / 0.540337 (-0.215068) | 0.415133 / 1.386936 (-0.971803) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#97513be330114a8aa07e5199ec252ac662aeb76d \"CML watermark\")\n" ]
feat(ci): add trufflehog secrets detection
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6960/reactions" }
PR_kwDODunzps5x0R3T
{ "diff_url": "https://github.com/huggingface/datasets/pull/6960.diff", "html_url": "https://github.com/huggingface/datasets/pull/6960", "merged_at": "2024-06-08T14:52:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/6960.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6960" }
2024-06-07T16:18:23Z
https://api.github.com/repos/huggingface/datasets/issues/6960/comments
### What does this PR do? Adding a GH action to scan for leaked secrets on each commit.
{ "avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4", "events_url": "https://api.github.com/users/McPatate/events{/privacy}", "followers_url": "https://api.github.com/users/McPatate/followers", "following_url": "https://api.github.com/users/McPatate/following{/other_user}", "gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/McPatate", "id": 9112841, "login": "McPatate", "node_id": "MDQ6VXNlcjkxMTI4NDE=", "organizations_url": "https://api.github.com/users/McPatate/orgs", "received_events_url": "https://api.github.com/users/McPatate/received_events", "repos_url": "https://api.github.com/users/McPatate/repos", "site_admin": false, "starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/McPatate/subscriptions", "type": "User", "url": "https://api.github.com/users/McPatate" }
https://api.github.com/repos/huggingface/datasets/issues/6960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6960/timeline
closed
false
6,960
null
2024-06-08T14:52:18Z
null
true
2,340,229,908
https://api.github.com/repos/huggingface/datasets/issues/6959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6959/events
[]
null
2024-06-10T07:33:53Z
[]
https://github.com/huggingface/datasets/pull/6959
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6959). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Test should be fixed by https://github.com/huggingface/datasets/pull/6959/commits/ef8f7cee79ffb070d9b5190f21128fc523b3d3ee (tested locally). Let's see what CI says :crossed_fingers: ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005678 / 0.011353 (-0.005675) | 0.004119 / 0.011008 (-0.006889) | 0.063901 / 0.038508 (0.025393) | 0.032071 / 0.023109 (0.008961) | 0.243182 / 0.275898 (-0.032716) | 0.280709 / 0.323480 (-0.042770) | 0.004195 / 0.007986 (-0.003791) | 0.002810 / 0.004328 (-0.001518) | 0.048722 / 0.004250 (0.044472) | 0.049381 / 0.037052 (0.012328) | 0.257816 / 0.258489 (-0.000673) | 0.288460 / 0.293841 (-0.005381) | 0.028518 / 0.128546 (-0.100029) | 0.010775 / 0.075646 (-0.064871) | 0.203149 / 0.419271 (-0.216122) | 0.038792 / 0.043533 (-0.004741) | 0.248502 / 0.255139 (-0.006637) | 0.268251 / 0.283200 (-0.014949) | 0.019536 / 0.141683 (-0.122147) | 1.133935 / 1.452155 (-0.318220) | 1.182855 / 1.492716 (-0.309862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097531 / 0.018006 (0.079525) | 0.303612 / 0.000490 (0.303122) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019670 / 0.037411 (-0.017741) | 0.063439 / 0.014526 (0.048913) | 0.075119 / 0.176557 (-0.101438) | 0.122419 / 0.737135 (-0.614717) | 0.076965 / 0.296338 (-0.219374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286780 / 0.215209 (0.071571) | 2.811860 / 2.077655 (0.734206) | 1.485165 / 1.504120 (-0.018954) | 1.373296 / 1.541195 (-0.167898) | 1.412700 / 1.468490 (-0.055790) | 0.566442 / 4.584777 (-4.018335) | 2.382616 / 3.745712 (-1.363096) | 2.677214 / 5.269862 (-2.592647) | 1.760073 / 4.565676 (-2.805603) | 0.062673 / 0.424275 (-0.361602) | 0.005050 / 0.007607 (-0.002557) | 0.341701 / 0.226044 (0.115657) | 3.321182 / 2.268929 (1.052253) | 1.811715 / 55.444624 (-53.632909) | 1.554986 / 6.876477 (-5.321491) | 1.727448 / 2.142072 (-0.414624) | 0.642193 / 4.805227 (-4.163034) | 0.117878 / 6.500664 (-6.382786) | 0.042814 / 0.075469 (-0.032655) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985894 / 1.841788 (-0.855894) | 12.195975 / 8.074308 (4.121667) | 9.890180 / 10.191392 (-0.301212) | 0.142638 / 0.680424 (-0.537786) | 0.015207 / 0.534201 (-0.518994) | 0.283140 / 0.579283 (-0.296143) | 0.266016 / 0.434364 (-0.168348) | 0.325518 / 0.540337 (-0.214820) | 0.418994 / 1.386936 (-0.967942) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005978 / 0.011353 (-0.005374) | 0.003915 / 0.011008 (-0.007093) | 0.051592 / 0.038508 (0.013084) | 0.033338 / 0.023109 (0.010229) | 0.267925 / 0.275898 (-0.007973) | 0.296011 / 0.323480 (-0.027469) | 0.004503 / 0.007986 (-0.003483) | 0.002854 / 0.004328 (-0.001475) | 0.049958 / 0.004250 (0.045707) | 0.041708 / 0.037052 (0.004656) | 0.287185 / 0.258489 (0.028696) | 0.322715 / 0.293841 (0.028874) | 0.030088 / 0.128546 (-0.098458) | 0.010709 / 0.075646 (-0.064938) | 0.059736 / 0.419271 (-0.359536) | 0.034294 / 0.043533 (-0.009239) | 0.264316 / 0.255139 (0.009177) | 0.285471 / 0.283200 (0.002272) | 0.019197 / 0.141683 (-0.122486) | 1.135571 / 1.452155 (-0.316583) | 1.190019 / 1.492716 (-0.302698) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099251 / 0.018006 (0.081245) | 0.305357 / 0.000490 (0.304867) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023206 / 0.037411 (-0.014205) | 0.077835 / 0.014526 (0.063310) | 0.090242 / 0.176557 (-0.086315) | 0.131208 / 0.737135 (-0.605928) | 0.091726 / 0.296338 (-0.204612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292487 / 0.215209 (0.077278) | 2.837044 / 2.077655 (0.759389) | 1.553155 / 1.504120 (0.049035) | 1.433645 / 1.541195 (-0.107550) | 1.476702 / 1.468490 (0.008212) | 0.561926 / 4.584777 (-4.022851) | 0.954630 / 3.745712 (-2.791082) | 2.752286 / 5.269862 (-2.517575) | 1.782746 / 4.565676 (-2.782931) | 0.062984 / 0.424275 (-0.361291) | 0.005056 / 0.007607 (-0.002551) | 0.341700 / 0.226044 (0.115656) | 3.343726 / 2.268929 (1.074798) | 1.953390 / 55.444624 (-53.491234) | 1.616989 / 6.876477 (-5.259488) | 1.785104 / 2.142072 (-0.356969) | 0.643465 / 4.805227 (-4.161763) | 0.115905 / 6.500664 (-6.384759) | 0.041678 / 0.075469 (-0.033791) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000237 / 1.841788 (-0.841550) | 12.633517 / 8.074308 (4.559208) | 10.553485 / 10.191392 (0.362092) | 0.143188 / 0.680424 (-0.537236) | 0.016020 / 0.534201 (-0.518181) | 0.286739 / 0.579283 (-0.292544) | 0.128488 / 0.434364 (-0.305876) | 0.321932 / 0.540337 (-0.218405) | 0.418635 / 1.386936 (-0.968301) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9510252f03fded02b8cc87ca6dfa3195d17594ba \"CML watermark\")\n" ]
Better error handling in `dataset_module_factory`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6959/reactions" }
PR_kwDODunzps5xyVt6
{ "diff_url": "https://github.com/huggingface/datasets/pull/6959.diff", "html_url": "https://github.com/huggingface/datasets/pull/6959", "merged_at": "2024-06-10T07:27:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/6959.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6959" }
2024-06-07T11:24:15Z
https://api.github.com/repos/huggingface/datasets/issues/6959/comments
cc @cakiki who reported it on [slack](https://huggingface.slack.com/archives/C039P47V1L5/p1717754405578539) (private link) This PR updates how errors are handled in `dataset_module_factory` when the `dataset_info` cannot be accessed: 1. Use multiple `except ... as e` instead of using `isinstance(e, ...)` 2. Always raise `DatasetNotFoundError` with `from e` so that the initial error is explicitly logged in the stacktrace. 3. Differentiate `RepoNotFoundError` / `GatedRepoError` / `RevisionNotFoundError` cases
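A minimal sketch of the pattern this description lays out: one `except` clause per hub error, each re-raised as `DatasetNotFoundError` with `from e` so the original error stays in the stacktrace. The function name and messages below are placeholders, not the PR's literal diff:

```python
# Illustrative sketch of the described error-handling pattern.
from huggingface_hub import HfApi
from huggingface_hub.utils import GatedRepoError, RepoNotFoundError, RevisionNotFoundError

from datasets.exceptions import DatasetNotFoundError


def _dataset_info_or_raise(path, revision=None):
    api = HfApi()
    try:
        return api.dataset_info(path, revision=revision)
    except GatedRepoError as e:
        # The repo exists but the user lacks access: say so explicitly.
        raise DatasetNotFoundError(f"Dataset '{path}' is a gated dataset on the Hub.") from e
    except RevisionNotFoundError as e:
        raise DatasetNotFoundError(f"Revision '{revision}' doesn't exist for dataset '{path}' on the Hub.") from e
    except RepoNotFoundError as e:
        raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub.") from e
```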
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
https://api.github.com/repos/huggingface/datasets/issues/6959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6959/timeline
closed
false
6,959
null
2024-06-10T07:27:43Z
null
true
2,337,476,383
https://api.github.com/repos/huggingface/datasets/issues/6958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6958/events
[]
null
2024-07-01T11:27:46Z
[]
https://github.com/huggingface/datasets/issues/6958
NONE
completed
null
null
[ "I can load public dataset, but for my private dataset it fails", "https://huggingface.co/docs/datasets/upload_dataset", "I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.\r\n\r\n![image](https://github.com/huggingface/datasets/assets/39621324/4aceef59-0c65-4161-9665-676d25d73225)\r\n\r\nIt just works fine.", "It seems that everything is in a mass huh....\r\n\r\n![image](https://github.com/huggingface/datasets/assets/39621324/fb2fe12c-4f0a-4bf6-9656-63ba50347b10)\r\n", "https://huggingface.co/datasets/rajpurkar/squad/blob/main/squad.py fails again", "https://github.com/huggingface/datasets/blob/main/templates/new_dataset_script.py#L81 can not use this, too complex. I just need a def to load my file to a dict", "I am facing the same issue. Did you find a fix?", "You should authenticate to be able to access private or gated repos: https://huggingface.co/docs/hub/datasets-gated#access-gated-datasets-as-a-user" ]
My Private Dataset doesn't exist on the Hub or cannot be accessed
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6958/reactions" }
I_kwDODunzps6LUw8f
null
2024-06-06T06:52:19Z
https://api.github.com/repos/huggingface/datasets/issues/6958/comments
### Describe the bug ``` File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg) datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed >>> dataset = load_dataset("xxxx", token=True) 404 error 404 Client Error. (Request ID: Root=xxxx) Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2593, in load_dataset builder_instance = load_dataset_builder( File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2265, in load_dataset_builder dataset_module = dataset_module_factory( File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1910, in dataset_module_factory raise e1 from None File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg) datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed ``` ### Steps to reproduce the bug 123 ### Expected behavior 123 ### Environment info 123
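Per the last comment above, the resolution is to authenticate before loading the private or gated dataset. A minimal sketch, with a placeholder repo id and token:

```python
# Authenticate first, then load the private/gated dataset.
# "username/my-private-dataset" and "hf_xxx" are placeholders.
from huggingface_hub import login

from datasets import load_dataset

login(token="hf_xxx")  # or run `huggingface-cli login` once in a shell
ds = load_dataset("username/my-private-dataset")

# Equivalent: pass the token directly to load_dataset.
# ds = load_dataset("username/my-private-dataset", token="hf_xxx")
```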
{ "avatar_url": "https://avatars.githubusercontent.com/u/39621324?v=4", "events_url": "https://api.github.com/users/wangguan1995/events{/privacy}", "followers_url": "https://api.github.com/users/wangguan1995/followers", "following_url": "https://api.github.com/users/wangguan1995/following{/other_user}", "gists_url": "https://api.github.com/users/wangguan1995/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wangguan1995", "id": 39621324, "login": "wangguan1995", "node_id": "MDQ6VXNlcjM5NjIxMzI0", "organizations_url": "https://api.github.com/users/wangguan1995/orgs", "received_events_url": "https://api.github.com/users/wangguan1995/received_events", "repos_url": "https://api.github.com/users/wangguan1995/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wangguan1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangguan1995/subscriptions", "type": "User", "url": "https://api.github.com/users/wangguan1995" }
https://api.github.com/repos/huggingface/datasets/issues/6958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6958/timeline
closed
false
6,958
null
2024-07-01T11:27:46Z
null
false
2,335,559,400
https://api.github.com/repos/huggingface/datasets/issues/6957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6957/events
[]
null
2024-06-05T13:01:07Z
[]
https://github.com/huggingface/datasets/pull/6957
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6957). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005371 / 0.011353 (-0.005982) | 0.003834 / 0.011008 (-0.007174) | 0.063032 / 0.038508 (0.024524) | 0.031623 / 0.023109 (0.008514) | 0.250008 / 0.275898 (-0.025890) | 0.273998 / 0.323480 (-0.049482) | 0.004114 / 0.007986 (-0.003871) | 0.002821 / 0.004328 (-0.001508) | 0.049470 / 0.004250 (0.045220) | 0.046586 / 0.037052 (0.009534) | 0.276807 / 0.258489 (0.018318) | 0.288607 / 0.293841 (-0.005234) | 0.027427 / 0.128546 (-0.101119) | 0.010634 / 0.075646 (-0.065012) | 0.202451 / 0.419271 (-0.216821) | 0.036346 / 0.043533 (-0.007187) | 0.250426 / 0.255139 (-0.004713) | 0.274104 / 0.283200 (-0.009096) | 0.018461 / 0.141683 (-0.123222) | 1.120326 / 1.452155 (-0.331829) | 1.157635 / 1.492716 (-0.335081) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102287 / 0.018006 (0.084281) | 0.313145 / 0.000490 (0.312655) | 0.000255 / 0.000200 (0.000055) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019494 / 0.037411 (-0.017917) | 0.063252 / 0.014526 (0.048727) | 0.075318 / 0.176557 (-0.101239) | 0.122194 / 0.737135 (-0.614942) | 0.076837 / 0.296338 (-0.219501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284098 / 0.215209 (0.068889) | 2.822301 / 2.077655 (0.744647) | 1.490185 / 1.504120 (-0.013935) | 1.366723 / 1.541195 (-0.174472) | 1.398832 / 1.468490 (-0.069658) | 0.563661 / 4.584777 (-4.021116) | 2.385129 / 3.745712 (-1.360583) | 2.689823 / 5.269862 (-2.580039) | 1.731271 / 4.565676 (-2.834405) | 0.063351 / 0.424275 (-0.360924) | 0.004974 / 0.007607 (-0.002633) | 0.332163 / 0.226044 (0.106119) | 3.314906 / 2.268929 (1.045977) | 1.811331 / 55.444624 (-53.633294) | 1.513357 / 6.876477 (-5.363120) | 1.718454 / 2.142072 (-0.423618) | 0.639663 / 4.805227 (-4.165564) | 0.120377 / 6.500664 (-6.380287) | 0.043254 / 0.075469 (-0.032215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978534 / 1.841788 (-0.863253) | 11.622313 / 8.074308 (3.548005) | 9.608732 / 10.191392 (-0.582660) | 0.131339 / 0.680424 (-0.549085) | 0.015226 / 0.534201 (-0.518975) | 0.287317 / 0.579283 (-0.291966) | 0.266647 / 0.434364 (-0.167717) | 0.324243 / 0.540337 (-0.216094) | 0.442025 / 1.386936 (-0.944911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005673 / 0.011353 (-0.005680) | 0.003722 / 0.011008 (-0.007286) | 0.049483 / 0.038508 (0.010975) | 0.033308 / 0.023109 (0.010199) | 0.261912 / 0.275898 (-0.013986) | 0.291151 / 0.323480 (-0.032329) | 0.004389 / 0.007986 (-0.003596) | 0.002762 / 0.004328 (-0.001567) | 0.048970 / 0.004250 (0.044719) | 0.041509 / 0.037052 (0.004457) | 0.273288 / 0.258489 (0.014798) | 0.308351 / 0.293841 (0.014510) | 0.029958 / 0.128546 (-0.098589) | 0.010500 / 0.075646 (-0.065146) | 0.058253 / 0.419271 (-0.361019) | 0.033820 / 0.043533 (-0.009713) | 0.261089 / 0.255139 (0.005950) | 0.282179 / 0.283200 (-0.001021) | 0.018543 / 0.141683 (-0.123140) | 1.121303 / 1.452155 (-0.330852) | 1.166141 / 1.492716 (-0.326575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099209 / 0.018006 (0.081203) | 0.316920 / 0.000490 (0.316430) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023339 / 0.037411 (-0.014072) | 0.077127 / 0.014526 (0.062602) | 0.088160 / 0.176557 (-0.088396) | 0.129449 / 0.737135 (-0.607686) | 0.093159 / 0.296338 (-0.203180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281262 / 0.215209 (0.066053) | 2.797504 / 2.077655 (0.719850) | 1.513354 / 1.504120 (0.009234) | 1.383034 / 1.541195 (-0.158161) | 1.395202 / 1.468490 (-0.073288) | 0.563180 / 4.584777 (-4.021597) | 0.979330 / 3.745712 (-2.766383) | 2.674008 / 5.269862 (-2.595853) | 1.762174 / 4.565676 (-2.803502) | 0.062333 / 0.424275 (-0.361942) | 0.004991 / 0.007607 (-0.002616) | 0.336043 / 0.226044 (0.109999) | 3.313500 / 2.268929 (1.044571) | 1.848083 / 55.444624 (-53.596541) | 1.554723 / 6.876477 (-5.321754) | 1.743485 / 2.142072 (-0.398587) | 0.657117 / 4.805227 (-4.148111) | 0.115736 / 6.500664 (-6.384928) | 0.040527 / 0.075469 (-0.034942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005876 / 1.841788 (-0.835911) | 12.525895 / 8.074308 (4.451587) | 10.492961 / 10.191392 (0.301569) | 0.143443 / 0.680424 (-0.536981) | 0.016652 / 0.534201 (-0.517548) | 0.288236 / 0.579283 (-0.291047) | 0.131401 / 0.434364 (-0.302963) | 0.322885 / 0.540337 (-0.217452) | 0.416048 / 1.386936 (-0.970888) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6548e0e282aeeda7bfb18beafbc65ebecd780c63 \"CML watermark\")\n" ]
Fix typos in docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6957/reactions" }
PR_kwDODunzps5xiTwJ
{ "diff_url": "https://github.com/huggingface/datasets/pull/6957.diff", "html_url": "https://github.com/huggingface/datasets/pull/6957", "merged_at": "2024-06-05T12:43:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/6957.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6957" }
2024-06-05T10:46:47Z
https://api.github.com/repos/huggingface/datasets/issues/6957/comments
Fix typos in docs introduced by: - #6956 Typos: - `comparisions` => `comparisons` - two consecutive sentences both ending in colon - split one sentence into two Sorry, I did not have time to review that PR. CC: @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6957/timeline
closed
false
6,957
null
2024-06-05T12:43:26Z
null
true
2,333,940,021
https://api.github.com/repos/huggingface/datasets/issues/6956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6956/events
[]
null
2024-06-04T16:46:34Z
[]
https://github.com/huggingface/datasets/pull/6956
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005348 / 0.011353 (-0.006005) | 0.003785 / 0.011008 (-0.007223) | 0.061674 / 0.038508 (0.023166) | 0.032127 / 0.023109 (0.009017) | 0.247095 / 0.275898 (-0.028803) | 0.276466 / 0.323480 (-0.047014) | 0.004197 / 0.007986 (-0.003789) | 0.002734 / 0.004328 (-0.001594) | 0.049604 / 0.004250 (0.045354) | 0.048553 / 0.037052 (0.011500) | 0.253230 / 0.258489 (-0.005259) | 0.286954 / 0.293841 (-0.006887) | 0.028181 / 0.128546 (-0.100365) | 0.010602 / 0.075646 (-0.065044) | 0.200719 / 0.419271 (-0.218552) | 0.037278 / 0.043533 (-0.006254) | 0.251565 / 0.255139 (-0.003574) | 0.269026 / 0.283200 (-0.014174) | 0.017632 / 0.141683 (-0.124050) | 1.136216 / 1.452155 (-0.315939) | 1.181158 / 1.492716 (-0.311559) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004892 / 0.018006 (-0.013114) | 0.312921 / 0.000490 (0.312431) | 0.000247 / 0.000200 (0.000047) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019303 / 0.037411 (-0.018108) | 0.062699 / 0.014526 (0.048174) | 0.075227 / 0.176557 (-0.101329) | 0.122919 / 0.737135 (-0.614217) | 0.076506 / 0.296338 (-0.219833) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277299 / 0.215209 (0.062090) | 2.754771 / 2.077655 (0.677116) | 1.457164 / 1.504120 (-0.046956) | 1.318878 / 1.541195 (-0.222317) | 1.374245 / 1.468490 (-0.094245) | 0.566253 / 4.584777 (-4.018524) | 2.352589 / 3.745712 (-1.393123) | 2.764263 / 5.269862 (-2.505599) | 1.843141 / 4.565676 (-2.722535) | 0.063996 / 0.424275 (-0.360279) | 0.005045 / 0.007607 (-0.002562) | 0.336703 / 0.226044 (0.110658) | 3.342538 / 2.268929 (1.073609) | 1.836664 / 55.444624 (-53.607960) | 1.528901 / 6.876477 (-5.347576) | 1.769562 / 2.142072 (-0.372511) | 0.674192 / 4.805227 (-4.131035) | 0.122421 / 6.500664 (-6.378243) | 0.043714 / 0.075469 (-0.031756) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989432 / 1.841788 (-0.852356) | 12.178341 / 8.074308 (4.104033) | 9.730838 / 10.191392 (-0.460554) | 0.146751 / 0.680424 (-0.533673) | 0.014720 / 0.534201 (-0.519481) | 0.285821 / 0.579283 (-0.293462) | 0.266474 / 0.434364 (-0.167889) | 0.327886 / 0.540337 (-0.212451) | 0.455672 / 1.386936 (-0.931264) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005691 / 0.011353 (-0.005662) | 0.004089 / 0.011008 (-0.006919) | 0.049878 / 0.038508 (0.011370) | 0.033578 / 0.023109 (0.010469) | 0.268295 / 0.275898 (-0.007603) | 0.288918 / 0.323480 (-0.034561) | 0.005092 / 0.007986 (-0.002894) | 0.002916 / 0.004328 (-0.001412) | 0.049489 / 0.004250 (0.045239) | 0.042495 / 0.037052 (0.005442) | 0.276253 / 0.258489 (0.017764) | 0.313321 / 0.293841 (0.019480) | 0.029386 / 0.128546 (-0.099160) | 0.010926 / 0.075646 (-0.064720) | 0.071747 / 0.419271 (-0.347525) | 0.033642 / 0.043533 (-0.009891) | 0.264950 / 0.255139 (0.009811) | 0.282962 / 0.283200 (-0.000238) | 0.018878 / 0.141683 (-0.122805) | 1.170685 / 1.452155 (-0.281470) | 1.198321 / 1.492716 (-0.294396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100422 / 0.018006 (0.082415) | 0.311750 / 0.000490 (0.311260) | 0.000235 / 0.000200 (0.000035) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023093 / 0.037411 (-0.014318) | 0.076934 / 0.014526 (0.062408) | 0.088959 / 0.176557 (-0.087598) | 0.129511 / 0.737135 (-0.607624) | 0.090151 / 0.296338 (-0.206187) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301646 / 0.215209 (0.086437) | 2.961780 / 2.077655 (0.884126) | 1.656051 / 1.504120 (0.151931) | 1.533154 / 1.541195 (-0.008041) | 1.585152 / 1.468490 (0.116662) | 0.582157 / 4.584777 (-4.002620) | 0.954881 / 3.745712 (-2.790831) | 2.813174 / 5.269862 (-2.456688) | 1.842840 / 4.565676 (-2.722837) | 0.065598 / 0.424275 (-0.358677) | 0.005306 / 0.007607 (-0.002301) | 0.359610 / 0.226044 (0.133565) | 3.575320 / 2.268929 (1.306391) | 2.015327 / 55.444624 (-53.429297) | 1.734086 / 6.876477 (-5.142391) | 1.919081 / 2.142072 (-0.222991) | 0.671178 / 4.805227 (-4.134049) | 0.120109 / 6.500664 (-6.380555) | 0.042353 / 0.075469 (-0.033116) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011726 / 1.841788 (-0.830062) | 13.007806 / 8.074308 (4.933498) | 10.632486 / 10.191392 (0.441094) | 0.148535 / 0.680424 (-0.531889) | 0.015988 / 0.534201 (-0.518213) | 0.290023 / 0.579283 (-0.289260) | 0.130685 / 0.434364 (-0.303679) | 0.322912 / 0.540337 (-0.217425) | 0.420596 / 1.386936 (-0.966340) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#336512dcba4fdb4c349d5ecb632b6ced80e038d5 \"CML watermark\")\n" ]
update docs on N-dim arrays
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6956/reactions" }
PR_kwDODunzps5xcwXz
{ "diff_url": "https://github.com/huggingface/datasets/pull/6956.diff", "html_url": "https://github.com/huggingface/datasets/pull/6956", "merged_at": "2024-06-04T16:40:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/6956.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6956" }
2024-06-04T16:32:19Z
https://api.github.com/repos/huggingface/datasets/issues/6956/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6956/timeline
closed
false
6,956
null
2024-06-04T16:40:27Z
null
true
2,333,802,815
https://api.github.com/repos/huggingface/datasets/issues/6955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6955/events
[]
null
2024-06-05T10:18:56Z
[]
https://github.com/huggingface/datasets/pull/6955
CONTRIBUTOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005507 / 0.011353 (-0.005845) | 0.003757 / 0.011008 (-0.007251) | 0.063274 / 0.038508 (0.024766) | 0.029720 / 0.023109 (0.006610) | 0.247974 / 0.275898 (-0.027924) | 0.272283 / 0.323480 (-0.051197) | 0.004186 / 0.007986 (-0.003799) | 0.002820 / 0.004328 (-0.001508) | 0.049070 / 0.004250 (0.044820) | 0.050026 / 0.037052 (0.012973) | 0.256501 / 0.258489 (-0.001988) | 0.297082 / 0.293841 (0.003241) | 0.028549 / 0.128546 (-0.099997) | 0.010361 / 0.075646 (-0.065285) | 0.213202 / 0.419271 (-0.206070) | 0.038117 / 0.043533 (-0.005416) | 0.258878 / 0.255139 (0.003739) | 0.282980 / 0.283200 (-0.000220) | 0.018911 / 0.141683 (-0.122772) | 1.118857 / 1.452155 (-0.333298) | 1.157763 / 1.492716 (-0.334953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004499 / 0.018006 (-0.013507) | 0.310445 / 0.000490 (0.309956) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019275 / 0.037411 (-0.018137) | 0.063257 / 0.014526 (0.048731) | 0.075833 / 0.176557 (-0.100724) | 0.122323 / 0.737135 (-0.614812) | 0.079046 / 0.296338 (-0.217292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292811 / 0.215209 (0.077602) | 2.903501 / 2.077655 (0.825846) | 1.592434 / 1.504120 (0.088314) | 1.450833 / 1.541195 (-0.090362) | 1.481285 / 
1.468490 (0.012795) | 0.570150 / 4.584777 (-4.014627) | 2.388618 / 3.745712 (-1.357094) | 2.699322 / 5.269862 (-2.570540) | 1.781405 / 4.565676 (-2.784272) | 0.063451 / 0.424275 (-0.360824) | 0.004979 / 0.007607 (-0.002628) | 0.353346 / 0.226044 (0.127302) | 3.541217 / 2.268929 (1.272289) | 1.972335 / 55.444624 (-53.472289) | 1.634780 / 6.876477 (-5.241697) | 1.815944 / 2.142072 (-0.326128) | 0.651559 / 4.805227 (-4.153669) | 0.118398 / 6.500664 (-6.382266) | 0.041962 / 0.075469 (-0.033507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971435 / 1.841788 (-0.870352) | 11.843740 / 8.074308 (3.769431) | 9.716333 / 10.191392 (-0.475059) | 0.145923 / 0.680424 (-0.534501) | 0.015073 / 0.534201 (-0.519128) | 0.293307 / 0.579283 (-0.285976) | 0.265505 / 0.434364 (-0.168859) | 0.327578 / 0.540337 (-0.212760) | 0.436409 / 1.386936 (-0.950527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005647 / 0.011353 (-0.005706) | 0.003669 / 0.011008 (-0.007339) | 0.050234 / 0.038508 (0.011726) | 0.033033 / 0.023109 (0.009924) | 0.269303 / 0.275898 (-0.006595) | 0.282472 / 0.323480 (-0.041008) | 0.004283 / 0.007986 (-0.003703) | 0.002821 / 0.004328 (-0.001507) | 0.050887 / 0.004250 (0.046637) | 0.041618 / 0.037052 (0.004565) | 0.277628 / 0.258489 (0.019139) | 0.310539 / 0.293841 (0.016698) | 0.030036 / 0.128546 (-0.098511) | 0.010401 / 0.075646 (-0.065245) | 0.058845 / 0.419271 (-0.360427) | 0.033676 / 0.043533 (-0.009857) | 0.261148 / 0.255139 (0.006009) | 0.295232 / 0.283200 (0.012032) | 0.018603 / 0.141683 (-0.123080) | 1.132182 / 1.452155 (-0.319972) | 1.173763 / 1.492716 (-0.318953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100594 / 0.018006 (0.082588) | 0.308101 / 0.000490 (0.307611) | 0.000217 / 0.000200 (0.000017) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023040 / 0.037411 (-0.014371) | 0.080676 / 0.014526 (0.066150) | 0.094687 / 0.176557 (-0.081870) | 0.129780 / 0.737135 (-0.607356) | 0.092241 / 0.296338 (-0.204097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294799 / 0.215209 (0.079590) | 2.957570 / 2.077655 (0.879915) | 1.576795 / 1.504120 (0.072675) | 1.446869 / 1.541195 (-0.094326) | 1.463133 / 1.468490 (-0.005357) | 0.568511 / 4.584777 (-4.016266) | 1.011502 / 3.745712 (-2.734211) | 2.759571 / 5.269862 (-2.510291) | 1.771738 / 4.565676 (-2.793939) | 0.064104 / 0.424275 (-0.360171) | 0.005160 / 0.007607 (-0.002448) | 0.347554 / 0.226044 (0.121510) | 3.463905 / 2.268929 (1.194976) | 1.931843 / 55.444624 (-53.512781) | 1.622765 / 6.876477 (-5.253712) | 1.809146 / 2.142072 (-0.332926) | 0.653388 / 4.805227 (-4.151839) | 0.122703 / 6.500664 (-6.377961) | 0.041680 / 0.075469 (-0.033790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000428 / 1.841788 (-0.841359) | 12.503003 / 8.074308 (4.428695) | 10.434802 / 10.191392 (0.243410) | 0.144684 / 0.680424 (-0.535740) | 0.015988 / 0.534201 (-0.518213) | 0.287179 / 0.579283 (-0.292104) | 0.124811 / 0.434364 (-0.309553) | 0.327855 / 0.540337 (-0.212482) | 0.425144 / 1.386936 (-0.961792) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7170067f819222153fcd45682db61279bdfe673 \"CML watermark\")\n" ]
Fix small typo
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6955/reactions" }
PR_kwDODunzps5xcSYm
{ "diff_url": "https://github.com/huggingface/datasets/pull/6955.diff", "html_url": "https://github.com/huggingface/datasets/pull/6955", "merged_at": "2024-06-04T15:20:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/6955.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6955" }
2024-06-04T15:19:02Z
https://api.github.com/repos/huggingface/datasets/issues/6955/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17081356?v=4", "events_url": "https://api.github.com/users/marcenacp/events{/privacy}", "followers_url": "https://api.github.com/users/marcenacp/followers", "following_url": "https://api.github.com/users/marcenacp/following{/other_user}", "gists_url": "https://api.github.com/users/marcenacp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marcenacp", "id": 17081356, "login": "marcenacp", "node_id": "MDQ6VXNlcjE3MDgxMzU2", "organizations_url": "https://api.github.com/users/marcenacp/orgs", "received_events_url": "https://api.github.com/users/marcenacp/received_events", "repos_url": "https://api.github.com/users/marcenacp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marcenacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcenacp/subscriptions", "type": "User", "url": "https://api.github.com/users/marcenacp" }
https://api.github.com/repos/huggingface/datasets/issues/6955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6955/timeline
closed
false
6,955
null
2024-06-04T15:20:55Z
null
true
2,333,530,558
https://api.github.com/repos/huggingface/datasets/issues/6954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6954/events
[]
null
2024-06-17T16:32:24Z
[]
https://github.com/huggingface/datasets/pull/6954
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "yay! πŸŽ‰ ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004881 / 0.011353 (-0.006472) | 0.003246 / 0.011008 (-0.007762) | 0.062496 / 0.038508 (0.023988) | 0.030760 / 0.023109 (0.007651) | 0.241500 / 0.275898 (-0.034398) | 0.272073 / 0.323480 (-0.051407) | 0.004123 / 0.007986 (-0.003863) | 0.002796 / 0.004328 (-0.001533) | 0.049015 / 0.004250 (0.044764) | 0.047095 / 0.037052 (0.010043) | 0.257002 / 0.258489 (-0.001487) | 0.287602 / 0.293841 (-0.006239) | 0.027281 / 0.128546 (-0.101265) | 0.010132 / 0.075646 (-0.065514) | 0.203699 / 0.419271 (-0.215572) | 0.036553 / 0.043533 (-0.006980) | 0.246221 / 0.255139 (-0.008918) | 0.268137 / 0.283200 (-0.015062) | 0.017260 / 0.141683 (-0.124423) | 1.100677 / 1.452155 (-0.351478) | 1.148367 / 1.492716 (-0.344349) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102519 / 0.018006 (0.084513) | 0.301929 / 0.000490 (0.301439) | 0.000223 / 0.000200 (0.000023) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018590 / 0.037411 (-0.018821) | 0.061615 / 0.014526 (0.047089) | 0.074579 / 0.176557 (-0.101978) | 0.121415 / 0.737135 (-0.615720) | 0.075696 / 0.296338 (-0.220642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283842 / 0.215209 (0.068633) | 2.788321 / 2.077655 (0.710666) | 1.481376 / 1.504120 (-0.022743) | 1.356064 / 1.541195 (-0.185131) | 1.380592 / 1.468490 (-0.087898) | 0.575577 / 4.584777 (-4.009199) | 2.471858 / 3.745712 (-1.273854) | 2.760769 / 5.269862 (-2.509093) | 1.808638 / 4.565676 (-2.757038) | 0.064930 / 0.424275 (-0.359345) | 0.005056 / 0.007607 (-0.002551) | 0.337794 / 0.226044 (0.111750) | 3.359444 / 2.268929 (1.090515) | 1.829540 / 55.444624 (-53.615084) | 1.518660 / 6.876477 (-5.357817) | 1.671612 / 2.142072 (-0.470460) | 0.664286 / 4.805227 (-4.140941) | 0.119593 / 6.500664 (-6.381071) | 0.042519 / 0.075469 (-0.032950) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993152 / 1.841788 (-0.848636) | 11.733054 / 8.074308 (3.658746) | 9.746734 / 10.191392 (-0.444658) | 0.143026 / 0.680424 (-0.537398) | 0.014900 / 0.534201 (-0.519301) | 0.292243 / 0.579283 (-0.287040) | 0.261301 / 0.434364 (-0.173063) | 0.330838 / 0.540337 (-0.209500) | 0.523719 / 1.386936 (-0.863217) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.003523 / 0.011008 (-0.007485) | 0.052265 / 0.038508 (0.013757) | 0.034296 / 0.023109 (0.011187) | 0.266589 / 0.275898 (-0.009309) | 0.288441 / 0.323480 (-0.035039) | 0.004507 / 0.007986 (-0.003478) | 0.002745 / 0.004328 (-0.001583) | 0.049417 / 0.004250 (0.045167) | 0.042679 / 0.037052 (0.005627) | 0.278518 / 0.258489 (0.020029) | 0.328751 / 0.293841 (0.034911) | 0.029530 / 0.128546 (-0.099016) | 0.010373 / 0.075646 (-0.065274) | 0.058207 / 0.419271 (-0.361064) | 0.033434 / 0.043533 (-0.010099) | 0.267902 / 0.255139 (0.012763) | 0.288192 / 0.283200 (0.004993) | 0.018866 / 0.141683 (-0.122817) | 1.132734 / 1.452155 (-0.319421) | 1.172879 / 1.492716 (-0.319837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097787 / 0.018006 (0.079780) | 0.305509 / 0.000490 (0.305019) | 0.000268 / 0.000200 (0.000068) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023230 / 0.037411 (-0.014181) | 0.076637 / 0.014526 (0.062111) | 0.088386 / 0.176557 (-0.088171) | 0.131079 / 0.737135 (-0.606057) | 0.091142 / 0.296338 (-0.205197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295586 / 0.215209 (0.080377) | 2.872090 / 2.077655 (0.794435) | 1.538152 / 1.504120 (0.034032) | 1.405695 / 1.541195 (-0.135500) | 1.421058 / 1.468490 (-0.047432) | 0.561179 / 4.584777 (-4.023598) | 0.943954 / 3.745712 (-2.801758) | 2.684381 / 5.269862 (-2.585481) | 1.757457 / 4.565676 (-2.808220) | 0.062903 / 0.424275 (-0.361372) | 0.004998 / 0.007607 (-0.002610) | 0.370290 / 0.226044 (0.144245) | 3.374988 / 2.268929 (1.106059) | 1.899282 / 55.444624 (-53.545342) | 1.598787 / 6.876477 (-5.277690) | 1.735371 / 2.142072 (-0.406702) | 0.647367 / 4.805227 (-4.157860) | 0.116975 / 6.500664 (-6.383689) | 0.040811 / 0.075469 (-0.034658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996380 / 1.841788 (-0.845408) | 12.225657 / 8.074308 (4.151349) | 10.291221 / 10.191392 (0.099829) | 0.142791 / 0.680424 (-0.537633) | 0.016087 / 0.534201 (-0.518114) | 0.299978 / 0.579283 (-0.279305) | 0.149444 / 0.434364 (-0.284920) | 0.321354 / 0.540337 (-0.218984) | 0.414492 / 1.386936 (-0.972444) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a2dc287cbef5311cf1a32ad4e3685f4052db227c \"CML watermark\")\n", "@lhoestq Thanks for the PR, Is there a way to detect if `trust_remote_code=True` will be required for loading the dataset, without loading it? It would be great if you could please point me to the relevant documentation.", "You can check the presence of a python loading script in the repository.\r\n\r\nIf there is a .py file named after the repository name, then it requires trust_remote_code.", "Thanks @lhoestq for the reference." ]
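The last comment in the thread above describes a simple heuristic for detecting whether a dataset will require `trust_remote_code=True`. A minimal sketch of that check, assuming the `huggingface_hub` client is available (the repo id below is a made-up placeholder, and the function mirrors the comment's heuristic rather than any official API):

```python
from huggingface_hub import HfApi

def requires_trust_remote_code(repo_id: str) -> bool:
    """Heuristic from the comment above: a dataset needs trust_remote_code
    when its repo ships a Python loading script named after the repository."""
    files = HfApi().list_repo_files(repo_id, repo_type="dataset")
    return repo_id.split("/")[-1] + ".py" in files

# Hypothetical usage:
# requires_trust_remote_code("username/my_dataset")  # True iff my_dataset.py exists
```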
Remove default `trust_remote_code=True`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6954/reactions" }
PR_kwDODunzps5xbWtU
{ "diff_url": "https://github.com/huggingface/datasets/pull/6954.diff", "html_url": "https://github.com/huggingface/datasets/pull/6954", "merged_at": "2024-06-07T12:20:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/6954.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6954" }
2024-06-04T13:22:56Z
https://api.github.com/repos/huggingface/datasets/issues/6954/comments
TODO: - [x] fix tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6954/timeline
closed
false
6,954
null
2024-06-07T12:20:29Z
null
true
2,333,366,120
https://api.github.com/repos/huggingface/datasets/issues/6953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6953/events
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
null
2024-07-01T11:31:25Z
[]
https://github.com/huggingface/datasets/issues/6953
MEMBER
completed
null
null
[ "Canonical datasets are no longer mentioned in the docs." ]
Remove canonical datasets from docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6953/reactions" }
I_kwDODunzps6LFFdo
null
2024-06-04T12:09:03Z
https://api.github.com/repos/huggingface/datasets/issues/6953/comments
Remove canonical datasets from docs, now that we no longer have canonical datasets.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6953/timeline
closed
false
6,953
null
2024-07-01T11:31:25Z
null
false
2,333,320,411
https://api.github.com/repos/huggingface/datasets/issues/6952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6952/events
[]
null
2024-06-10T14:09:59Z
[]
https://github.com/huggingface/datasets/pull/6952
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6952). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003744 / 0.011008 (-0.007264) | 0.064089 / 0.038508 (0.025581) | 0.032409 / 0.023109 (0.009300) | 0.255886 / 0.275898 (-0.020013) | 0.276033 / 0.323480 (-0.047447) | 0.004165 / 0.007986 (-0.003821) | 0.002741 / 0.004328 (-0.001588) | 0.052145 / 0.004250 (0.047894) | 0.043863 / 0.037052 (0.006811) | 0.258844 / 0.258489 (0.000355) | 0.290108 / 0.293841 (-0.003733) | 0.027390 / 0.128546 (-0.101156) | 0.010543 / 0.075646 (-0.065103) | 0.206936 / 0.419271 (-0.212335) | 0.036778 / 0.043533 (-0.006755) | 0.254331 / 0.255139 (-0.000808) | 0.279037 / 0.283200 (-0.004163) | 0.018564 / 0.141683 (-0.123119) | 1.112765 / 1.452155 (-0.339390) | 1.160099 / 1.492716 (-0.332617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092148 / 0.018006 (0.074142) | 0.297156 / 0.000490 (0.296667) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018797 / 0.037411 (-0.018615) | 0.062992 / 0.014526 (0.048466) | 0.076361 / 0.176557 (-0.100195) | 0.121168 / 0.737135 (-0.615968) | 0.075845 / 0.296338 (-0.220494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293842 / 0.215209 (0.078633) | 2.880720 / 2.077655 (0.803065) | 1.477779 / 1.504120 (-0.026341) | 1.345136 / 1.541195 (-0.196059) | 1.352153 / 1.468490 (-0.116337) | 0.574722 / 4.584777 (-4.010055) | 2.373925 / 3.745712 (-1.371787) | 2.750704 / 5.269862 (-2.519157) | 1.725979 / 4.565676 (-2.839697) | 0.063006 / 0.424275 (-0.361269) | 0.005019 / 0.007607 (-0.002588) | 0.341228 / 0.226044 (0.115184) | 3.352576 / 2.268929 (1.083647) | 1.821363 / 55.444624 (-53.623261) | 1.529441 / 6.876477 (-5.347036) | 1.543401 / 2.142072 (-0.598671) | 0.634282 / 4.805227 (-4.170945) | 0.115565 / 6.500664 (-6.385099) | 0.042514 / 0.075469 (-0.032956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987532 / 1.841788 (-0.854255) | 11.483853 / 8.074308 (3.409545) | 9.565657 / 10.191392 (-0.625735) | 0.141247 / 0.680424 (-0.539176) | 0.015026 / 0.534201 (-0.519175) | 0.299905 / 0.579283 (-0.279378) | 0.267667 / 0.434364 (-0.166697) | 0.320661 / 0.540337 (-0.219676) | 0.427368 / 1.386936 (-0.959568) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005905) | 0.003726 / 0.011008 (-0.007283) | 0.049776 / 0.038508 (0.011268) | 0.032733 / 0.023109 (0.009624) | 0.261387 / 0.275898 (-0.014511) | 0.280087 / 0.323480 (-0.043393) | 0.004351 / 0.007986 (-0.003634) | 0.002842 / 0.004328 (-0.001487) | 0.049440 / 0.004250 (0.045190) | 0.039585 / 0.037052 (0.002533) | 0.266331 / 0.258489 (0.007842) | 0.299643 / 0.293841 (0.005802) | 0.029649 / 0.128546 (-0.098897) | 0.010381 / 0.075646 (-0.065265) | 0.058596 / 0.419271 (-0.360676) | 0.033271 / 0.043533 (-0.010262) | 0.251070 / 0.255139 (-0.004069) | 0.272850 / 0.283200 (-0.010349) | 0.016728 / 0.141683 (-0.124955) | 1.146952 / 1.452155 (-0.305202) | 1.182602 / 1.492716 (-0.310114) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.091673 / 0.018006 (0.073667) | 0.297228 / 0.000490 (0.296738) | 0.000197 / 0.000200 (-0.000003) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023174 / 0.037411 (-0.014237) | 0.078866 / 0.014526 (0.064341) | 0.088436 / 0.176557 (-0.088121) | 0.129650 / 0.737135 (-0.607485) | 0.091100 / 0.296338 (-0.205238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293882 / 0.215209 (0.078673) | 2.882667 / 2.077655 (0.805012) | 1.562949 / 1.504120 (0.058829) | 1.435104 / 1.541195 (-0.106090) | 1.450815 / 1.468490 (-0.017675) | 0.584090 / 4.584777 (-4.000687) | 0.984176 / 3.745712 (-2.761536) | 2.668740 / 5.269862 (-2.601121) | 1.766993 / 4.565676 (-2.798683) | 0.064710 / 0.424275 (-0.359565) | 0.005329 / 0.007607 (-0.002278) | 0.346008 / 0.226044 (0.119964) | 3.414576 / 2.268929 (1.145647) | 1.911388 / 55.444624 (-53.533236) | 1.660357 / 6.876477 (-5.216120) | 1.818628 / 2.142072 (-0.323444) | 0.659585 / 4.805227 (-4.145643) | 0.116980 / 6.500664 (-6.383684) | 0.041364 / 0.075469 (-0.034105) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005659 / 1.841788 (-0.836129) | 12.023761 / 8.074308 (3.949453) | 10.351086 / 10.191392 (0.159694) | 0.143261 / 0.680424 (-0.537162) | 0.016143 / 0.534201 (-0.518058) | 0.287793 / 0.579283 (-0.291490) | 0.123698 / 0.434364 (-0.310666) | 0.325241 / 0.540337 (-0.215097) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#37a603679f451826cfafd8aae00738b01dcb9d58 \"CML watermark\")\n" ]
Move info_utils errors to exceptions module
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6952/reactions" }
PR_kwDODunzps5xaosH
{ "diff_url": "https://github.com/huggingface/datasets/pull/6952.diff", "html_url": "https://github.com/huggingface/datasets/pull/6952", "merged_at": "2024-06-10T14:03:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/6952.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6952" }
2024-06-04T11:48:32Z
https://api.github.com/repos/huggingface/datasets/issues/6952/comments
Move `info_utils` errors to the `exceptions` module. Additionally, rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones).
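As a minimal sketch of the backward-compatibility pattern this description refers to (the class names below are invented for illustration and are not the actual exceptions touched by the PR):

```python
class OldVerificationError(ValueError):
    """Hypothetical former error, kept around after the rename."""

class NewVerificationError(OldVerificationError):
    """Hypothetical new error: inheriting from the former one means code
    written against the old name still catches the new exception."""

try:
    raise NewVerificationError("splits mismatch")
except OldVerificationError:
    pass  # legacy handlers are unaffected by the rename
```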
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6952/timeline
closed
false
6,952
null
2024-06-10T14:03:55Z
null
true
2,333,231,042
https://api.github.com/repos/huggingface/datasets/issues/6951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6951/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-01T11:33:10Z
[]
https://github.com/huggingface/datasets/issues/6951
NONE
not_planned
null
null
[ "@xianbaoqian ", "Feel free to open a PR in `m-a-p/COIG-CQIA` to define a default subset. Currently there is no default.\r\n\r\nYou can find some documentation at https://huggingface.co/docs/hub/datasets-manual-configuration#multiple-configurations", "@lhoestq \r\n\r\nWhilst having a default subset readily available (e.g. `all`) by the dataset author is an ideal solution, it is not always the reality.\r\n\r\nWithout the ability to fork the dataset, this can be problematic.\r\n\r\nAs far as I know, it is not possible at all to specify multiple subsets in a generalized programmatic way without hard coding subset names for a specific dataset.\r\n\r\nEven the ability to fetch subset names and loop over them would be sufficient.", "Please note that each subset can have different feature columns, thus making it impossible to load them all into a unique Dataset instance.\r\n\r\nThat is why subsets were created: to support different but related datasets to coexist in a single dataset repository.\r\n\r\nIf you would like to programmatically get the list of subset names, you can use `datasets.get_dataset_config_names`: https://huggingface.co/docs/datasets/v2.20.0/en/load_hub#configurations" ]
load_dataset() should load all subsets, if no specific subset is specified
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6951/reactions" }
I_kwDODunzps6LEkfC
null
2024-06-04T11:02:33Z
https://api.github.com/repos/huggingface/datasets/issues/6951/comments
### Feature request Currently load_dataset() forces users to specify a subset. Example `from datasets import load_dataset dataset = load_dataset("m-a-p/COIG-CQIA")` ```--------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset("m-a-p/COIG-CQIA") 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs) 582 if not config_kwargs: 583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')" --> 584 raise ValueError( 585 "Config name is missing." 586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}" ValueError: Config name is missing. Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu'] Example of usage: `load_dataset('coig-cqia', 'chinese_traditional')` ``` This means a single load_dataset() call cannot load all the subsets at once. I guess one workaround is to manually specify the subset files as shown [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy. ### Motivation Ideally, if no subset is specified, the API should just try to load all subsets. This would make it much easier to handle datasets with subsets. ### Your contribution Not sure, since I'm not familiar with the library source.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4", "events_url": "https://api.github.com/users/windmaple/events{/privacy}", "followers_url": "https://api.github.com/users/windmaple/followers", "following_url": "https://api.github.com/users/windmaple/following{/other_user}", "gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/windmaple", "id": 5577741, "login": "windmaple", "node_id": "MDQ6VXNlcjU1Nzc3NDE=", "organizations_url": "https://api.github.com/users/windmaple/orgs", "received_events_url": "https://api.github.com/users/windmaple/received_events", "repos_url": "https://api.github.com/users/windmaple/repos", "site_admin": false, "starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windmaple/subscriptions", "type": "User", "url": "https://api.github.com/users/windmaple" }
https://api.github.com/repos/huggingface/datasets/issues/6951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6951/timeline
closed
false
6,951
null
2024-07-01T11:33:10Z
null
false
2,333,005,974
https://api.github.com/repos/huggingface/datasets/issues/6950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6950/events
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
null
2024-06-25T08:05:49Z
[]
https://github.com/huggingface/datasets/issues/6950
NONE
completed
null
null
[ "Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956", "Fixed." ]
`Dataset.with_format` behaves inconsistently with documentation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6950/reactions" }
I_kwDODunzps6LDtiW
null
2024-06-04T09:18:32Z
https://api.github.com/repos/huggingface/datasets/issues/6950/comments
### Describe the bug The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation. https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays > If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists. > In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor. > A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor. But I get a single tensor by default, which is inconsistent with the description. Actually, the current behavior seems more reasonable to me, so the documentation needs to be updated. ### Steps to reproduce the bug ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': tensor([[1, 2], [3, 4]])} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy= array([[1, 2], [3, 4]])>} ``` ### Expected behavior ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': [tensor([1, 2]), tensor([3, 4])]} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.RaggedTensor [[1, 2], [3, 4]]>} ``` ### Environment info datasets==2.19.1 torch==2.1.0 tensorflow==2.13.1
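For context, a sketch of the distinction behind the subsequent docs fix (my own example, not from the report): with a fixed shape the formatted dataset returns a single tensor, as observed above, while ragged rows fall back to lists of tensors.

```python
from datasets import Dataset

fixed = Dataset.from_dict({"data": [[[1, 2], [3, 4]]]}).with_format("torch")
print(fixed[0]["data"])   # a single 2x2 tensor

ragged = Dataset.from_dict({"data": [[[1, 2], [3, 4, 5]]]}).with_format("torch")
print(ragged[0]["data"])  # a list of tensors, since the rows have unequal lengths
```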
{ "avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4", "events_url": "https://api.github.com/users/iansheng/events{/privacy}", "followers_url": "https://api.github.com/users/iansheng/followers", "following_url": "https://api.github.com/users/iansheng/following{/other_user}", "gists_url": "https://api.github.com/users/iansheng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iansheng", "id": 42494185, "login": "iansheng", "node_id": "MDQ6VXNlcjQyNDk0MTg1", "organizations_url": "https://api.github.com/users/iansheng/orgs", "received_events_url": "https://api.github.com/users/iansheng/received_events", "repos_url": "https://api.github.com/users/iansheng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iansheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iansheng/subscriptions", "type": "User", "url": "https://api.github.com/users/iansheng" }
https://api.github.com/repos/huggingface/datasets/issues/6950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6950/timeline
closed
false
6,950
null
2024-06-25T08:05:49Z
null
false
2,332,336,573
https://api.github.com/repos/huggingface/datasets/issues/6949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6949/events
[]
null
2024-07-01T11:33:46Z
[]
https://github.com/huggingface/datasets/issues/6949
NONE
completed
null
null
[ "Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ", "> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n> \r\n> Could you please share your \"train.json\" file, so that we can try to reproduce the issue you have?\r\n\r\nThank you for your reply. I can load it normally in another server. Is it possible that the disk of my server is a network disk in the LAN, so it will be downloaded from the LAN and get stuck?" ]
load_dataset error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6949/reactions" }
I_kwDODunzps6LBKG9
null
2024-06-04T01:24:45Z
https://api.github.com/repos/huggingface/datasets/issues/6949/comments
### Describe the bug

Why does the program get stuck when I use the `load_dataset` method? It stays stuck even after loading for several hours. In fact, my JSON file is only 21 MB, and I can read it in one go using open('', 'r').

### Steps to reproduce the bug

1. pip install datasets==2.19.2
2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset
3. data = load_dataset('json', data_files='train.json')

### Expected behavior

The JSON file is loaded correctly.

### Environment info

datasets==2.19.2
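Editor's note, not from the thread: one way to tell whether the hang is in the `json` builder or in file access is to bypass the builder entirely. The snippet assumes the file is a top-level JSON array of objects at the hypothetical path `train.json`.

```python
# Hedged sketch: read the file directly (the reporter says plain open() is
# fast for the 21 MB file), then build the dataset in memory.
import json

from datasets import Dataset

with open("train.json", "r", encoding="utf-8") as f:  # hypothetical path
    records = json.load(f)  # assumes a top-level JSON array of objects

ds = Dataset.from_list(records)  # bypasses the file-based 'json' loader
print(ds)
```

If this also stalls, the problem is likely the storage (e.g., the LAN network disk mentioned in the comments) rather than `datasets` itself.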
{ "avatar_url": "https://avatars.githubusercontent.com/u/27952522?v=4", "events_url": "https://api.github.com/users/lion-ops/events{/privacy}", "followers_url": "https://api.github.com/users/lion-ops/followers", "following_url": "https://api.github.com/users/lion-ops/following{/other_user}", "gists_url": "https://api.github.com/users/lion-ops/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lion-ops", "id": 27952522, "login": "lion-ops", "node_id": "MDQ6VXNlcjI3OTUyNTIy", "organizations_url": "https://api.github.com/users/lion-ops/orgs", "received_events_url": "https://api.github.com/users/lion-ops/received_events", "repos_url": "https://api.github.com/users/lion-ops/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lion-ops/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lion-ops/subscriptions", "type": "User", "url": "https://api.github.com/users/lion-ops" }
https://api.github.com/repos/huggingface/datasets/issues/6949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6949/timeline
closed
false
6,949
null
2024-07-01T11:33:46Z
null
false
2,331,758,300
https://api.github.com/repos/huggingface/datasets/issues/6948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6948/events
[]
null
2024-06-03T18:10:57Z
[]
https://github.com/huggingface/datasets/issues/6948
NONE
null
null
null
[]
to_tf_dataset: Visible devices cannot be modified after being initialized
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6948/reactions" }
I_kwDODunzps6K-87c
null
2024-06-03T18:10:57Z
https://api.github.com/repos/huggingface/datasets/issues/6948/comments
### Describe the bug

When trying to use `to_tf_dataset` with a custom data_loader `collate_fn` and parallelism, I am met with the following error, repeated once per worker in ``num_workers``:

    File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _bootstrap
      self.run()
    File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 108, in run
      self._target(*self._args, **self._kwargs)
    File "/opt/miniconda/envs/env/lib/python3.11/site-packages/datasets/utils/tf_utils.py", line 438, in worker_loop
      tf.config.set_visible_devices([], "GPU")  # Make sure workers don't try to allocate GPU memory
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/framework/config.py", line 566, in set_visible_devices
      context.context().set_visible_devices(devices, device_type)
    File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/eager/context.py", line 1737, in set_visible_devices
      raise RuntimeError(
    RuntimeError: Visible devices cannot be modified after being initialized

### Steps to reproduce the bug

1. Download a dataset using HuggingFace `load_dataset`
2. Define a function that transforms the data in some way to be used in the `collate_fn` argument
3. Provide a ``batch_size`` and ``num_workers`` value in the ``to_tf_dataset`` function
4. Either retrieve directly or use tfds benchmark to test the dataset

```python
from datasets import load_dataset
import tensorflow_datasets as tfds
from keras_cv.layers import Resizing

def data_loader(examples):
    x = Resizing(examples[0]['image'], 256, 256, crop_to_aspect_ratio=True)
    return {X[0]: x}

ds = load_dataset("logasja/FDF", split="test")
ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2)
tfds.benchmark(ds)
```

### Expected behavior

Use multiple processes to apply transformations from the `collate_fn` to the tf dataset on the CPU.

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1023-oracle-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
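Editor's sketch of two commonly suggested mitigations, neither confirmed in this thread: keep TensorFlow from initializing the GPU before the workers are spawned, or drop worker parallelism altogether. `hf_ds` stands in for the dataset from the report.

```python
# Hedged workaround sketch -- assumes no TensorFlow op has touched the GPU
# yet in this process when set_visible_devices is called.
import tensorflow as tf

# Option 1: hide GPUs before TF initializes, so the context each worker
# inherits already matches what worker_loop tries to set.
tf.config.set_visible_devices([], "GPU")

# Option 2: fall back to running collate_fn in-process, without workers.
# ds = hf_ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=0)
```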
{ "avatar_url": "https://avatars.githubusercontent.com/u/7151661?v=4", "events_url": "https://api.github.com/users/logasja/events{/privacy}", "followers_url": "https://api.github.com/users/logasja/followers", "following_url": "https://api.github.com/users/logasja/following{/other_user}", "gists_url": "https://api.github.com/users/logasja/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/logasja", "id": 7151661, "login": "logasja", "node_id": "MDQ6VXNlcjcxNTE2NjE=", "organizations_url": "https://api.github.com/users/logasja/orgs", "received_events_url": "https://api.github.com/users/logasja/received_events", "repos_url": "https://api.github.com/users/logasja/repos", "site_admin": false, "starred_url": "https://api.github.com/users/logasja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/logasja/subscriptions", "type": "User", "url": "https://api.github.com/users/logasja" }
https://api.github.com/repos/huggingface/datasets/issues/6948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6948/timeline
open
false
6,948
null
null
null
false
2,331,114,055
https://api.github.com/repos/huggingface/datasets/issues/6947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6947/events
[]
null
2024-06-25T06:21:28Z
[]
https://github.com/huggingface/datasets/issues/6947
NONE
completed
null
null
[ "same problem here", "Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\nDownloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\nGenerating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDataset({\r\n features: ['text', 'timestamp', 'url'],\r\n num_rows: 45576\r\n})\r\n```", "> Hello,\r\n> \r\n> Are you sure you are really using datasets version 2.19.2? 
We just made the patch release yesterday specifically to fix this issue:\r\n> \r\n> * [Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasetsΒ #6925](https://github.com/huggingface/datasets/pull/6925)\r\n> \r\n> I can't reproduce the error:\r\n> \r\n> ```python\r\n> In [1]: from datasets import load_dataset\r\n> \r\n> In [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\n> Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\n> Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\n> Generating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n> \r\n> In [3]: ds\r\n> Out[3]: \r\n> Dataset({\r\n> features: ['text', 'timestamp', 'url'],\r\n> num_rows: 45576\r\n> })\r\n> ```\r\nThank you for your reply,ExpectedMoreSplits was encountered in datasets version 2.12.2. 
After I updated the version, that is, datasets version 2.19.2, I encountered the FileNotFoundError problem mentioned above.", "That might be due to a corrupted cache.\r\n\r\nPlease, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n```python\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```\r\n\r\nIf the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n", "> That might be due to a corrupted cache.\r\n> \r\n> Please, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n> \r\n> ```python\r\n> ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n> ```\r\n> \r\n> If the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n\r\nThe two methods you mentioned above cannot solve this problem, but the command line interface shows Downloading readme: 41.1kB [00:00, 281kB/s], and then FileNotFoundError appears. It is worth noting that I have no problem loading other datasets with the initial method, such as wikitext datasets", "> The two methods you mentioned above cannot solve this problem, but the command line interface shows Downloading readme: 41.1kB [00:00, 281kB/s], and then FileNotFoundError appears.\r\n\r\nSame issue encountered.\r\n", "I really think the issue is caused by a corrupted cache, between versions 2.12.0 (there does not exist 2.12.2 version) and 2.19.2.\r\n\r\nAre you sure you removed all the corresponding corrupted directories within the cache?\r\n\r\nYou can easily check if the issue is caused by a corrupted cache by removing the entire cache:\r\n```shell\r\nmv ~/.cache/huggingface ~/.cache/huggingface.bak\r\n```\r\nand then reloading the dataset:\r\n```python\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```", "@albertvillanova Thanks for the reply. I tried removing the entire cache and reloading the dataset as you suggested. However, the same issue still exists. \r\n\r\nAs a test, I switched to a new platform, which (is a Windows system and) hasn't downloaded huggingface dataset before, and the dataset is loaded successfully. So I think the \"corrupted cache\" explanation makes sense. I wonder, besides `~/.cache/huggingface`, is there any other directory that may store cached files?\r\n\r\nAs a side note, I am using `datasets==2.20.0` and proxy `export HF_ENDPOINT=https://hf-mirror.com`.", "Hi @ZhangGe6,\r\n\r\nAs far as I know, that directory is the only one where the cache is saved, unless you configured another one.
You can check it:\r\n```python\r\nimport datasets.config\r\n\r\nprint(datasets.config.HF_CACHE_HOME)\r\n# ~/.cache/huggingface\r\n\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n# ~/.cache/huggingface/datasets\r\n\r\nprint(datasets.config.HF_MODULES_CACHE)\r\n# ~/.cache/huggingface/modules\r\n\r\nprint(datasets.config.DOWNLOADED_DATASETS_PATH)\r\n# ~/.cache/huggingface/datasets/downloads\r\n\r\nprint(datasets.config.EXTRACTED_DATASETS_PATH)\r\n# ~/.cache/huggingface/datasets/downloads/extracted\r\n```\r\n\r\nAdditionally, `datasets` uses `huggingface_hub`, but its cache directory should also be inside `~/.cache/huggingface`, unless you configured another one. You can check it:\r\n```python\r\nimport huggingface_hub.constants\r\n\r\nprint(huggingface_hub.constants.HF_HOME)\r\n# ~/.cache/huggingface\r\n\r\nprint(huggingface_hub.constants.HF_HUB_CACHE)\r\n# ~/.cache/huggingface/hub\r\n```", "@albertvillanova I checked the directories you listed, and find that they are the same as the ones you provided. I am going to find more clues and will update what I find here.", "I've had a similar problem, and for some reason decreasing the number of workers in the dataloader solved it", "Same issue.\r\n", "Hi folks. Finally, I find it is a network issue that causes huggingface hub unreachable (in China).\r\n\r\nTo run the following script \r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```\r\nWithout setting `export HF_ENDPOINT=https://hf-mirror.com`, I get the following error log\r\n```bash\r\nTraceback (most recent call last):\r\n File \".\\demo.py\", line 8, in <module>\r\n ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2594, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2266, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 1914, in dataset_module_factory\r\n raise e1 from None\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 1845, in dataset_module_factory\r\n raise ConnectionError(f\"Couldn't reach '{path}' on the Hub ({e.__class__.__name__})\") from e\r\nConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError)\r\n```\r\nAfter setting `export HF_ENDPOINT=https://hf-mirror.com`, I get the following error, which is exactly the same as what we are debugging in this issue\r\n```bash\r\nDownloading readme: 41.1kB [00:00, 41.1MB/s]\r\nTraceback (most recent call last):\r\n File \".\\demo.py\", line 8, in <module>\r\n ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2594, in loa builder_instance = load_dataset_builder(\r\n File \"D:\\SoftwareInstall\\Python\\lib\\site-packages\\datasets\\load.py\", line 2266, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find a dataset script at 
C:\\Users\\ZhangGe\\Desktop\\allenai\\c4\\c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454eed extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', \r\n'.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns',pm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', \r\n'.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']\r\n```\r\n\r\n**Using a proxy software that avoids the internet access restrictions imposed by China, I can download the dataset using the same script**\r\n```bash\r\nDownloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41.1k/41.1k [00:00<00:00, 312kB/s] \r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 40.7M/40.7M [00:19<00:00, 2.07MB/s] \r\nGenerating validation split: 45576 examples [00:00, 54883.48 examples/s]\r\n```\r\nSo `allenai/c4` is still unreachable even after setting `export HF_ENDPOINT=https://hf-mirror.com`.", "I have created an issue to inform the maintainers of `hf-mirror`:https://github.com/padeoe/hf-mirror-site/issues/30", "Thanks for the investigation: so finally it is an issue with the specific endpoint you are using.\r\n\r\nYou properly opened an issue in their repo, so they can fix it.\r\n\r\nI am closing this issue here." ]
FileNotFoundError: error when loading C4 dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6947/reactions" }
I_kwDODunzps6K8fpH
null
2024-06-03T13:06:33Z
https://api.github.com/repos/huggingface/datasets/issues/6947/comments
### Describe the bug

Can't load the C4 dataset. When I replace the datasets package with 2.12.2, I get `raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}`. How can I fix this?

### Steps to reproduce the bug

1. from datasets import load_dataset
2. dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')
3. This raises:

    FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']

### Expected behavior

The data is imported successfully.

### Environment info

python version 3.9
datasets version 2.19.2
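Editor's recap of the resolution discussed in the comment thread above, as a single runnable sketch. The mirror endpoint is the one from the thread and must be exported before `datasets` is imported; note the thread ultimately found the mirror itself could not serve this repo, so a working network path to the endpoint is still assumed.

```python
import os

# Must happen before `import datasets`, otherwise the default endpoint is used.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # endpoint from the thread

from datasets import load_dataset

ds = load_dataset(
    "allenai/c4",
    data_files={"validation": "en/c4-validation.00003-of-00008.json.gz"},
    split="validation",
    download_mode="force_redownload",  # also rules out a corrupted cache
)
```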
{ "avatar_url": "https://avatars.githubusercontent.com/u/62374585?v=4", "events_url": "https://api.github.com/users/W-215/events{/privacy}", "followers_url": "https://api.github.com/users/W-215/followers", "following_url": "https://api.github.com/users/W-215/following{/other_user}", "gists_url": "https://api.github.com/users/W-215/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/W-215", "id": 62374585, "login": "W-215", "node_id": "MDQ6VXNlcjYyMzc0NTg1", "organizations_url": "https://api.github.com/users/W-215/orgs", "received_events_url": "https://api.github.com/users/W-215/received_events", "repos_url": "https://api.github.com/users/W-215/repos", "site_admin": false, "starred_url": "https://api.github.com/users/W-215/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/W-215/subscriptions", "type": "User", "url": "https://api.github.com/users/W-215" }
https://api.github.com/repos/huggingface/datasets/issues/6947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6947/timeline
closed
false
6,947
null
2024-06-25T06:21:28Z
null
false
2,330,276,848
https://api.github.com/repos/huggingface/datasets/issues/6946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6946/events
[]
null
2024-06-04T10:00:08Z
[]
https://github.com/huggingface/datasets/pull/6946
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6946). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004847 / 0.011353 (-0.006506) | 0.003199 / 0.011008 (-0.007810) | 0.060677 / 0.038508 (0.022169) | 0.030544 / 0.023109 (0.007435) | 0.240870 / 0.275898 (-0.035028) | 0.261320 / 0.323480 (-0.062160) | 0.002816 / 0.007986 (-0.005170) | 0.002483 / 0.004328 (-0.001845) | 0.048527 / 0.004250 (0.044277) | 0.045496 / 0.037052 (0.008444) | 0.251296 / 0.258489 (-0.007193) | 0.285746 / 0.293841 (-0.008095) | 0.025076 / 0.128546 (-0.103470) | 0.009417 / 0.075646 (-0.066229) | 0.191361 / 0.419271 (-0.227911) | 0.033778 / 0.043533 (-0.009755) | 0.235581 / 0.255139 (-0.019558) | 0.261069 / 0.283200 (-0.022131) | 0.018255 / 0.141683 (-0.123428) | 1.098437 / 1.452155 (-0.353718) | 1.127124 / 1.492716 (-0.365592) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004479 / 0.018006 (-0.013527) | 0.283706 / 0.000490 (0.283216) | 0.000214 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018364 / 0.037411 (-0.019048) | 0.058398 / 0.014526 (0.043872) | 0.073056 / 0.176557 (-0.103501) | 0.117147 / 0.737135 (-0.619989) | 0.073683 / 0.296338 (-0.222656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.265121 / 0.215209 (0.049912) | 2.636981 / 2.077655 (0.559327) | 1.380192 / 1.504120 (-0.123928) | 1.270779 / 1.541195 (-0.270416) | 1.295729 / 1.468490 (-0.172762) | 0.523768 / 4.584777 (-4.061009) | 2.295720 / 3.745712 (-1.449992) | 2.519211 / 5.269862 (-2.750650) | 1.618712 / 4.565676 (-2.946965) | 0.058321 / 0.424275 (-0.365954) | 0.004492 / 0.007607 (-0.003115) | 0.316101 / 0.226044 (0.090057) | 3.169913 / 2.268929 (0.900984) | 1.793412 / 55.444624 (-53.651213) | 1.473784 / 6.876477 (-5.402693) | 1.565325 / 2.142072 (-0.576748) | 0.592734 / 4.805227 (-4.212493) | 0.109333 / 6.500664 (-6.391331) | 0.039063 / 0.075469 (-0.036406) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935504 / 1.841788 (-0.906284) | 10.865520 / 8.074308 (2.791212) | 9.219337 / 10.191392 (-0.972055) | 0.135284 / 0.680424 (-0.545140) | 0.013664 / 0.534201 (-0.520537) | 0.271601 / 0.579283 (-0.307682) | 0.260456 / 0.434364 (-0.173908) | 0.302931 / 0.540337 (-0.237406) | 0.414643 / 1.386936 (-0.972293) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004801 / 0.011353 (-0.006552) | 0.003092 / 0.011008 (-0.007917) | 0.046471 / 0.038508 (0.007963) | 0.031337 / 0.023109 (0.008228) | 0.258920 / 0.275898 (-0.016978) | 0.269842 / 0.323480 (-0.053638) | 0.003976 / 0.007986 (-0.004009) | 0.002661 / 0.004328 (-0.001668) | 0.045676 / 0.004250 (0.041426) | 0.038199 / 0.037052 (0.001146) | 0.277382 / 0.258489 (0.018893) | 0.289351 / 0.293841 (-0.004490) | 0.028452 / 0.128546 (-0.100094) | 0.009737 / 0.075646 (-0.065910) | 0.055201 / 0.419271 (-0.364071) | 0.032686 / 0.043533 (-0.010847) | 0.259617 / 0.255139 (0.004478) | 0.277163 / 0.283200 (-0.006037) | 0.017825 / 0.141683 (-0.123858) | 1.102797 / 1.452155 (-0.349357) | 1.105018 / 1.492716 (-0.387699) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.094844 / 0.018006 (0.076838) | 0.290519 / 0.000490 (0.290029) | 0.000211 / 0.000200 (0.000012) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021917 / 0.037411 (-0.015494) | 0.075278 / 0.014526 (0.060753) | 0.085971 / 0.176557 (-0.090586) | 0.127072 / 0.737135 (-0.610063) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276704 / 0.215209 (0.061495) | 2.736960 / 2.077655 (0.659305) | 1.519634 / 1.504120 (0.015514) | 1.403026 / 1.541195 (-0.138168) | 1.418465 / 1.468490 (-0.050025) | 0.552425 / 4.584777 (-4.032352) | 0.955244 / 3.745712 (-2.790468) | 2.556563 / 5.269862 (-2.713298) | 1.705095 / 4.565676 (-2.860582) | 0.061212 / 0.424275 (-0.363063) | 0.004707 / 0.007607 (-0.002900) | 0.326284 / 0.226044 (0.100239) | 3.253911 / 2.268929 (0.984983) | 1.868649 / 55.444624 (-53.575976) | 1.598697 / 6.876477 (-5.277780) | 1.682617 / 2.142072 (-0.459455) | 0.606379 / 4.805227 (-4.198848) | 0.114126 / 6.500664 (-6.386538) | 0.038869 / 0.075469 (-0.036601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966354 / 1.841788 (-0.875433) | 11.575918 / 8.074308 (3.501609) | 9.816597 / 10.191392 (-0.374795) | 0.141492 / 0.680424 (-0.538932) | 0.015375 / 0.534201 (-0.518826) | 0.276027 / 0.579283 (-0.303256) | 0.118979 / 0.434364 (-0.315385) | 0.313467 / 0.540337 (-0.226870) | 0.403539 / 1.386936 (-0.983397) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b59c75856d765e60b66a5216062102d001c6612 \"CML watermark\")\n" ]
Re-enable import sorting disabled by flake8:noqa directive when using ruff linter
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6946/reactions" }
PR_kwDODunzps5xQNao
{ "diff_url": "https://github.com/huggingface/datasets/pull/6946.diff", "html_url": "https://github.com/huggingface/datasets/pull/6946", "merged_at": "2024-06-04T09:54:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/6946.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6946" }
2024-06-03T06:24:47Z
https://api.github.com/repos/huggingface/datasets/issues/6946/comments
Re-enable import sorting that was wrongly disabled by `flake8: noqa` directive after switching to `ruff` linter in datasets-2.10.0 PR:
- #5519

Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in datasets-2.17.0 PR:
- #6619

That replacement was wrong because we kept the `isort: skip` directives although they were indeed disabled by `flake8: noqa` first and by `ruff: noqa` afterwards. See for example `__init__.py` file after the linter switch:
- We kept the `flake8: noqa` directive: https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L1
- Whereas we also kept the `isort: skip` directives (that were disabled): https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L82-L84

Fix #6942.
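Editor's illustration of the directive semantics at play (illustrative comments only, not lines from the PR):

```python
# Illustrative only -- the two file-level directive styles ruff understands:
#
#   # ruff: noqa        -> suppresses every rule in the file, including import
#                          sorting (I001), which silently neutralizes any
#                          `isort: skip` comments further down
#   # ruff: noqa: F401  -> suppresses only the listed rule(s); import sorting
#                          still runs, so `isort: skip` directives matter again
```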
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6946/timeline
closed
false
6,946
null
2024-06-04T09:54:23Z
null
true
2,330,224,869
https://api.github.com/repos/huggingface/datasets/issues/6945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6945/events
[]
null
2024-06-18T07:36:15Z
[]
https://github.com/huggingface/datasets/pull/6945
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005725 / 0.011353 (-0.005627) | 0.003788 / 0.011008 (-0.007220) | 0.063059 / 0.038508 (0.024551) | 0.031364 / 0.023109 (0.008255) | 0.259209 / 0.275898 (-0.016689) | 0.278805 / 0.323480 (-0.044675) | 0.003032 / 0.007986 (-0.004953) | 0.002633 / 0.004328 (-0.001696) | 0.049804 / 0.004250 (0.045554) | 0.046717 / 0.037052 (0.009665) | 0.267246 / 0.258489 (0.008757) | 0.299271 / 0.293841 (0.005430) | 0.027687 / 0.128546 (-0.100860) | 0.010524 / 0.075646 (-0.065123) | 0.201736 / 0.419271 (-0.217536) | 0.036192 / 0.043533 (-0.007341) | 0.264492 / 0.255139 (0.009353) | 0.280809 / 0.283200 (-0.002391) | 0.018187 / 0.141683 (-0.123496) | 1.170751 / 1.452155 (-0.281404) | 1.223450 / 1.492716 (-0.269266) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096610 / 0.018006 (0.078604) | 0.297122 / 0.000490 (0.296632) | 0.000211 / 0.000200 (0.000011) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018380 / 0.037411 (-0.019031) | 0.062214 / 0.014526 (0.047688) | 0.075833 / 0.176557 (-0.100723) | 0.121825 / 0.737135 (-0.615310) | 0.075475 / 0.296338 (-0.220864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275601 / 0.215209 (0.060392) | 2.698014 / 2.077655 (0.620359) | 1.434043 / 1.504120 (-0.070077) | 1.313217 / 1.541195 (-0.227978) | 1.339014 / 1.468490 (-0.129476) | 0.566703 / 4.584777 (-4.018074) | 2.367794 / 3.745712 (-1.377918) | 2.660787 / 5.269862 (-2.609074) | 1.738503 / 4.565676 (-2.827174) | 0.061693 / 0.424275 (-0.362582) | 0.004978 / 0.007607 (-0.002629) | 0.334719 / 0.226044 (0.108675) | 3.300889 / 2.268929 (1.031960) | 1.764493 / 55.444624 (-53.680131) | 1.475956 / 6.876477 (-5.400521) | 1.635988 / 2.142072 (-0.506084) | 0.643906 / 4.805227 (-4.161321) | 0.118002 / 6.500664 (-6.382662) | 0.042593 / 0.075469 (-0.032876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953511 / 1.841788 (-0.888276) | 11.489727 / 8.074308 (3.415419) | 9.775017 / 10.191392 (-0.416375) | 0.139864 / 0.680424 (-0.540560) | 0.014219 / 0.534201 (-0.519982) | 0.284389 / 0.579283 (-0.294894) | 0.264250 / 0.434364 (-0.170113) | 0.323471 / 0.540337 (-0.216866) | 0.415189 / 1.386936 (-0.971747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005437 / 0.011353 (-0.005916) | 0.003710 / 0.011008 (-0.007298) | 0.049940 / 0.038508 (0.011432) | 0.032565 / 0.023109 (0.009456) | 0.266374 / 0.275898 (-0.009524) | 0.288069 / 0.323480 (-0.035411) | 0.004140 / 0.007986 (-0.003845) | 0.002669 / 0.004328 (-0.001660) | 0.049646 / 0.004250 (0.045395) | 0.040926 / 0.037052 (0.003874) | 0.278805 / 0.258489 (0.020316) | 0.311396 / 0.293841 (0.017555) | 0.029363 / 0.128546 (-0.099183) | 0.010260 / 0.075646 (-0.065386) | 0.058222 / 0.419271 (-0.361049) | 0.033063 / 0.043533 (-0.010470) | 0.266798 / 0.255139 (0.011659) | 0.283091 / 0.283200 (-0.000109) | 0.017904 / 0.141683 (-0.123779) | 1.139531 / 1.452155 (-0.312624) | 1.163909 / 1.492716 (-0.328808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089063 / 0.018006 (0.071057) | 0.296757 / 0.000490 (0.296268) | 0.000202 / 0.000200 (0.000002) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022843 / 0.037411 (-0.014568) | 0.076032 / 0.014526 (0.061507) | 0.087545 / 0.176557 (-0.089012) | 0.128870 / 0.737135 (-0.608266) | 0.089359 / 0.296338 (-0.206980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285213 / 0.215209 (0.070004) | 2.854950 / 2.077655 (0.777295) | 1.539311 / 1.504120 (0.035191) | 1.413753 / 1.541195 (-0.127442) | 1.440819 / 1.468490 (-0.027671) | 0.564734 / 4.584777 (-4.020043) | 0.944924 / 3.745712 (-2.800788) | 2.703612 / 5.269862 (-2.566249) | 1.749429 / 4.565676 (-2.816247) | 0.063239 / 0.424275 (-0.361036) | 0.005024 / 0.007607 (-0.002583) | 0.340866 / 0.226044 (0.114821) | 3.359511 / 2.268929 (1.090582) | 1.895794 / 55.444624 (-53.548831) | 1.606613 / 6.876477 (-5.269864) | 1.756539 / 2.142072 (-0.385533) | 0.646553 / 4.805227 (-4.158675) | 0.121278 / 6.500664 (-6.379386) | 0.041066 / 0.075469 (-0.034403) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005548 / 1.841788 (-0.836240) | 12.080103 / 8.074308 (4.005794) | 10.444822 / 10.191392 (0.253430) | 0.145024 / 0.680424 (-0.535400) | 0.015287 / 0.534201 (-0.518914) | 0.288567 / 0.579283 (-0.290716) | 0.118034 / 0.434364 (-0.316330) | 0.333474 / 0.540337 (-0.206864) | 0.421716 / 1.386936 (-0.965220) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3d95159dbd918009e1ff710dba0cd15d96d4264e \"CML watermark\")\n", "@albertvillanova could I ask why we should use latest `requests` here? we are using `docker` and `datasets` in the same time. However, docker requires requests<2.32.0.", "Hi @pingsutw,\r\n\r\nWe updated the minimum required `requests` version for security reasons: https://www.cve.org/CVERecord?id=CVE-2024-35195\r\n- affected versions < 2.32.0 \r\n\r\nLatest version of `docker` should normally support `requests` >= 2.32.0: https://github.com/docker/docker-py/releases/tag/7.1.0\r\n> Fixed an issue due to an update in the [requests](https://github.com/psf/requests) package breaking docker-py by applying the https://github.com/psf/requests/pull/6710\r\n- https://github.com/docker/docker-py/pull/3257\r\n\r\nI guess you need to update your `docker` library as well:\r\n```\r\npip install -U docker\r\n```", "> I guess you need to update your docker library as well:\r\n\r\nThank you! it works for me πŸ‘ " ]
Update yanked version of minimum requests requirement
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6945/reactions" }
PR_kwDODunzps5xQCCx
{ "diff_url": "https://github.com/huggingface/datasets/pull/6945.diff", "html_url": "https://github.com/huggingface/datasets/pull/6945", "merged_at": "2024-06-03T06:09:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/6945.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6945" }
2024-06-03T05:45:50Z
https://api.github.com/repos/huggingface/datasets/issues/6945/comments
Update yanked version of minimum requests requirement. Version 2.32.1 was yanked: https://pypi.org/project/requests/2.32.1/
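Editor's note, hedged: given CVE-2024-35195 (fixed in requests 2.32.0) and the yanked 2.32.1, the next installable floor is presumably 2.32.2; the comment thread above also points to `docker>=7.1.0` for compatibility on the docker side. Something like:

```shell
pip install -U "requests>=2.32.2" "docker>=7.1.0"
```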
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6945/timeline
closed
false
6,945
null
2024-06-03T06:09:43Z
null
true
2,330,207,120
https://api.github.com/repos/huggingface/datasets/issues/6944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6944/events
[]
null
2024-06-03T05:37:51Z
[]
https://github.com/huggingface/datasets/pull/6944
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005150 / 0.011353 (-0.006203) | 0.003663 / 0.011008 (-0.007346) | 0.062832 / 0.038508 (0.024324) | 0.031928 / 0.023109 (0.008819) | 0.246455 / 0.275898 (-0.029443) | 0.272121 / 0.323480 (-0.051359) | 0.004220 / 0.007986 (-0.003765) | 0.002756 / 0.004328 (-0.001573) | 0.050071 / 0.004250 (0.045821) | 0.046074 / 0.037052 (0.009022) | 0.259676 / 0.258489 (0.001187) | 0.290674 / 0.293841 (-0.003167) | 0.027822 / 0.128546 (-0.100724) | 0.010791 / 0.075646 (-0.064855) | 0.202827 / 0.419271 (-0.216445) | 0.037057 / 0.043533 (-0.006476) | 0.256128 / 0.255139 (0.000989) | 0.269422 / 0.283200 (-0.013777) | 0.017395 / 0.141683 (-0.124288) | 1.125919 / 1.452155 (-0.326236) | 1.177708 / 1.492716 (-0.315008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098466 / 0.018006 (0.080460) | 0.305508 / 0.000490 (0.305018) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018866 / 0.037411 (-0.018545) | 0.062079 / 0.014526 (0.047553) | 0.074670 / 0.176557 (-0.101886) | 0.121025 / 0.737135 (-0.616111) | 0.075883 / 0.296338 (-0.220455) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291880 / 0.215209 (0.076671) | 2.874064 / 2.077655 (0.796409) | 1.477040 / 1.504120 (-0.027080) | 1.356198 / 1.541195 (-0.184997) | 1.354676 / 1.468490 (-0.113814) | 0.559731 / 4.584777 (-4.025046) | 2.362746 / 3.745712 (-1.382966) | 2.678838 / 5.269862 (-2.591024) | 1.752633 / 4.565676 (-2.813044) | 0.064023 / 0.424275 (-0.360252) | 0.005035 / 0.007607 (-0.002572) | 0.354807 / 0.226044 (0.128762) | 3.424463 / 2.268929 (1.155534) | 1.810476 / 55.444624 (-53.634149) | 1.519031 / 6.876477 (-5.357446) | 1.693957 / 2.142072 (-0.448116) | 0.647987 / 4.805227 (-4.157240) | 0.118993 / 6.500664 (-6.381671) | 0.042186 / 0.075469 (-0.033283) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982565 / 1.841788 (-0.859223) | 11.645075 / 8.074308 (3.570767) | 9.588360 / 10.191392 (-0.603032) | 0.142369 / 0.680424 (-0.538055) | 0.014025 / 0.534201 (-0.520176) | 0.285668 / 0.579283 (-0.293616) | 0.265825 / 0.434364 (-0.168539) | 0.323371 / 0.540337 (-0.216966) | 0.421227 / 1.386936 (-0.965709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005587 / 0.011353 (-0.005766) | 0.003664 / 0.011008 (-0.007345) | 0.050411 / 0.038508 (0.011903) | 0.033268 / 0.023109 (0.010159) | 0.266631 / 0.275898 (-0.009267) | 0.291135 / 0.323480 (-0.032345) | 0.004275 / 0.007986 (-0.003710) | 0.002822 / 0.004328 (-0.001506) | 0.049349 / 0.004250 (0.045099) | 0.040653 / 0.037052 (0.003601) | 0.282641 / 0.258489 (0.024152) | 0.315460 / 0.293841 (0.021619) | 0.029343 / 0.128546 (-0.099203) | 0.010606 / 0.075646 (-0.065040) | 0.058783 / 0.419271 (-0.360489) | 0.033205 / 0.043533 (-0.010327) | 0.266805 / 0.255139 (0.011666) | 0.288907 / 0.283200 (0.005707) | 0.017817 / 0.141683 (-0.123866) | 1.128132 / 1.452155 (-0.324023) | 1.175120 / 1.492716 (-0.317597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095653 / 0.018006 (0.077647) | 0.304825 / 0.000490 (0.304335) | 0.000212 / 0.000200 (0.000012) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022766 / 0.037411 (-0.014645) | 0.076598 / 0.014526 (0.062072) | 0.088314 / 0.176557 (-0.088242) | 0.127888 / 0.737135 (-0.609247) | 0.090391 / 0.296338 (-0.205947) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293384 / 0.215209 (0.078175) | 2.883742 / 2.077655 (0.806087) | 1.533868 / 1.504120 (0.029748) | 1.391964 / 1.541195 (-0.149231) | 1.423732 / 1.468490 (-0.044759) | 0.575457 / 4.584777 (-4.009320) | 0.970860 / 3.745712 (-2.774852) | 2.711405 / 5.269862 (-2.558457) | 1.774468 / 4.565676 (-2.791208) | 0.064611 / 0.424275 (-0.359664) | 0.005120 / 0.007607 (-0.002487) | 0.343892 / 0.226044 (0.117847) | 3.362579 / 2.268929 (1.093650) | 1.880200 / 55.444624 (-53.564424) | 1.587435 / 6.876477 (-5.289042) | 1.756464 / 2.142072 (-0.385609) | 0.661469 / 4.805227 (-4.143759) | 0.119030 / 6.500664 (-6.381634) | 0.041704 / 0.075469 (-0.033765) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025008 / 1.841788 (-0.816780) | 12.146244 / 8.074308 (4.071936) | 10.397267 / 10.191392 (0.205875) | 0.145917 / 0.680424 (-0.534507) | 0.015779 / 0.534201 (-0.518422) | 0.287122 / 0.579283 (-0.292161) | 0.125464 / 0.434364 (-0.308900) | 0.323315 / 0.540337 (-0.217023) | 0.416761 / 1.386936 (-0.970175) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2d15a6b1871f3998986853298e4338d72891491 \"CML watermark\")\n" ]
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6944/reactions" }
PR_kwDODunzps5xP-KD
{ "diff_url": "https://github.com/huggingface/datasets/pull/6944.diff", "html_url": "https://github.com/huggingface/datasets/pull/6944", "merged_at": "2024-06-03T05:31:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/6944.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6944" }
2024-06-03T05:29:59Z
https://api.github.com/repos/huggingface/datasets/issues/6944/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6944/timeline
closed
false
6,944
null
2024-06-03T05:31:47Z
null
true
2,330,176,890
https://api.github.com/repos/huggingface/datasets/issues/6943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6943/events
[]
null
2024-06-03T05:17:41Z
[]
https://github.com/huggingface/datasets/pull/6943
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Release 2.19.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6943/reactions" }
PR_kwDODunzps5xP3jp
{ "diff_url": "https://github.com/huggingface/datasets/pull/6943.diff", "html_url": "https://github.com/huggingface/datasets/pull/6943", "merged_at": "2024-06-03T05:17:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6943.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6943" }
2024-06-03T05:01:50Z
https://api.github.com/repos/huggingface/datasets/issues/6943/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6943/timeline
closed
false
6,943
null
2024-06-03T05:17:40Z
null
true
2,329,562,382
https://api.github.com/repos/huggingface/datasets/issues/6942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6942/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-06-04T09:54:24Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6942
MEMBER
completed
null
null
[]
Import sorting is disabled by flake8 noqa directive after switching to ruff linter
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6942/reactions" }
I_kwDODunzps6K2k0O
null
2024-06-02T09:43:34Z
https://api.github.com/repos/huggingface/datasets/issues/6942/comments
When we switched to the `ruff` linter in PR: - #5519 import sorting was disabled in all files containing the `# flake8: noqa` directive - https://github.com/astral-sh/ruff/issues/11679 We should re-enable import sorting in those files.
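A minimal sketch of the re-enabling step, assuming a file that currently carries the blanket directive (the file contents and re-exported imports below are hypothetical):

```
# Before (hypothetical __init__.py): ruff honors the blanket file-level
# directive and suppresses every rule for the file, including import
# sorting (I001):
#
#     # flake8: noqa
#     from datasets.load import load_dataset
#     from datasets.arrow_dataset import Dataset
#
# After: a rule-specific file-level directive keeps import sorting active
# while still ignoring the unused re-exports (F401) the directive was for:

# ruff: noqa: F401
from datasets.arrow_dataset import Dataset
from datasets.load import load_dataset
```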
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6942/timeline
closed
false
6,942
null
2024-06-04T09:54:24Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,328,930,165
https://api.github.com/repos/huggingface/datasets/issues/6941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6941/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-06-01T05:34:52Z
[]
https://github.com/huggingface/datasets/issues/6941
NONE
null
null
null
[]
Supporting FFCV: Fast Forward Computer Vision
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6941/reactions" }
I_kwDODunzps6K0Kd1
null
2024-06-01T05:34:52Z
https://api.github.com/repos/huggingface/datasets/issues/6941/comments
### Feature request Supporting FFCV, https://github.com/libffcv/ffcv ### Motivation According to its benchmarks, FFCV seems to be the fastest image-loading method. ### Your contribution no
{ "avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4", "events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}", "followers_url": "https://api.github.com/users/Luciennnnnnn/followers", "following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}", "gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Luciennnnnnn", "id": 20135317, "login": "Luciennnnnnn", "node_id": "MDQ6VXNlcjIwMTM1MzE3", "organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs", "received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events", "repos_url": "https://api.github.com/users/Luciennnnnnn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions", "type": "User", "url": "https://api.github.com/users/Luciennnnnnn" }
https://api.github.com/repos/huggingface/datasets/issues/6941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6941/timeline
open
false
6,941
null
null
null
false
2,328,637,831
https://api.github.com/repos/huggingface/datasets/issues/6940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6940/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-06-01T07:34:12Z
[]
https://github.com/huggingface/datasets/issues/6940
NONE
null
null
null
[]
Enable Sharding to Equal Sized Shards
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6940/reactions" }
I_kwDODunzps6KzDGH
null
2024-05-31T21:55:50Z
https://api.github.com/repos/huggingface/datasets/issues/6940/comments
### Feature request Add an option when sharding a dataset to have all shards the same size. It would be good to provide both options: by duplication and by truncation. ### Motivation Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).". However, when using FSDP we want the shards to have the same size. This requires the user to handle the situation manually, but it would be nice if we had an option to shard the dataset into equally sized shards. ### Your contribution For now, just a PR. I can also add code that does what is needed, though probably not efficiently. Shard to equal size by duplication: ``` remainder = len(dataset) % num_shards num_missing_examples = num_shards - remainder duplicated = dataset.select(list(range(num_missing_examples))) dataset = concatenate_datasets([dataset, duplicated]) shard = dataset.shard(num_shards, shard_idx) ``` Or by truncation: ``` shard = dataset.shard(num_shards, shard_idx) num_examples_per_shard = len(dataset) // num_shards shard = shard.select(list(range(num_examples_per_shard))) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain" }
https://api.github.com/repos/huggingface/datasets/issues/6940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6940/timeline
open
false
6,940
null
null
null
false
2,328,059,386
https://api.github.com/repos/huggingface/datasets/issues/6939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6939/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-05-31T17:10:39Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6939
MEMBER
completed
null
null
[]
ExpectedMoreSplits error when using data_dir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6939/reactions" }
I_kwDODunzps6Kw136
null
2024-05-31T15:08:42Z
https://api.github.com/repos/huggingface/datasets/issues/6939/comments
As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`: ```python from datasets import load_dataset dataset = load_dataset( "lvwerra/stack-exchange-paired", split="train", cache_dir=None, data_dir="data/rl", ) ``` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits))) datasets.utils.info_utils.ExpectedMoreSplits: {'test'} ```
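A possible temporary workaround, assuming split verification is the only blocker here (this skips the check rather than fixing the underlying bug):

```
from datasets import VerificationMode, load_dataset

# Skip the split verification that raises ExpectedMoreSplits when only a
# subset of the repository (data_dir="data/rl") is requested.
dataset = load_dataset(
    "lvwerra/stack-exchange-paired",
    split="train",
    data_dir="data/rl",
    verification_mode=VerificationMode.NO_CHECKS,
)
```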
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6939/timeline
closed
false
6,939
null
2024-05-31T17:10:39Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,327,568,281
https://api.github.com/repos/huggingface/datasets/issues/6938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6938/events
[]
null
2024-05-31T15:28:03Z
[]
https://github.com/huggingface/datasets/pull/6938
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "fix is included in https://github.com/huggingface/datasets/pull/6925" ]
Fix expected splits when passing data_files or dir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6938/reactions" }
PR_kwDODunzps5xHNKm
{ "diff_url": "https://github.com/huggingface/datasets/pull/6938.diff", "html_url": "https://github.com/huggingface/datasets/pull/6938", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6938.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6938" }
2024-05-31T11:04:22Z
https://api.github.com/repos/huggingface/datasets/issues/6938/comments
reported on slack: The following code snippet gives an error with v2.19 but not with v2.18: ``` from datasets import load_dataset dataset = load_dataset( "lvwerra/stack-exchange-paired", split="train", cache_dir=None, data_dir="data/rl", ) ``` and the error is: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits))) datasets.utils.info_utils.ExpectedMoreSplits: {'test'} ```
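A minimal repro sketch that does not depend on the dataset-viewer CI (the file layout is an assumption; the expected and buggy dtypes follow from the description above):

```
import json
import tempfile

from datasets import load_dataset

# Write a JSON Lines file whose only column holds round floats.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for value in [0.0, 1.0, 2.0]:
        f.write(json.dumps({"col_3": value}) + "\n")
    path = f.name

ds = load_dataset("json", data_files=path, split="train")
# Expected: Value(dtype='float64'); with the regression: Value(dtype='int64')
print(ds.features["col_3"])
```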
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6938/timeline
closed
false
6,938
null
2024-05-31T15:28:02Z
null
true
2,327,212,611
https://api.github.com/repos/huggingface/datasets/issues/6937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6937/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-05-31T08:11:57Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6937
MEMBER
null
null
null
[]
JSON loader implicitly coerces floats to integers
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6937/reactions" }
I_kwDODunzps6KtnJD
null
2024-05-31T08:09:12Z
https://api.github.com/repos/huggingface/datasets/issues/6937/comments
The JSON loader implicitly coerces floats to integers. The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`. See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446 ``` =================================== FAILURES =================================== ___________________________ test_statistics_endpoint ___________________________ normal_user_public_json_dataset = 'DVUser/tmp-dataset-17170199043860' def test_statistics_endpoint(normal_user_public_json_dataset: str) -> None: dataset = normal_user_public_json_dataset config, split = get_default_config_split() statistics_response = poll_until_ready_and_assert( relative_url=f"/statistics?dataset={dataset}&config={config}&split={split}", check_x_revision=True, dataset=dataset, ) content = statistics_response.json() assert len(content) == 3 assert sorted(content) == ["num_examples", "partial", "statistics"], statistics_response statistics = content["statistics"] num_examples = content["num_examples"] partial = content["partial"] assert isinstance(statistics, list), statistics assert len(statistics) == 6 assert num_examples == 4 assert partial is False string_label_column = statistics[0] assert "column_name" in string_label_column assert "column_statistics" in string_label_column assert "column_type" in string_label_column assert string_label_column["column_name"] == "col_1" assert string_label_column["column_type"] == "string_label" # 4 unique values -> label assert isinstance(string_label_column["column_statistics"], dict) assert string_label_column["column_statistics"] == { "nan_count": 0, "nan_proportion": 0.0, "no_label_count": 0, "no_label_proportion": 0.0, "n_unique": 4, "frequencies": { "There goes another one.": 1, "Vader turns round and round in circles as his ship spins into space.": 1, "We count thirty Rebel ships, Lord Vader.": 1, "The wingman spots the pirateship coming at him and warns the Dark Lord": 1, }, } int_column = statistics[1] assert "column_name" in int_column assert "column_statistics" in int_column assert "column_type" in int_column assert int_column["column_name"] == "col_2" assert int_column["column_type"] == "int" assert isinstance(int_column["column_statistics"], dict) assert int_column["column_statistics"] == { "histogram": {"bin_edges": [0, 1, 2, 3, 3], "hist": [1, 1, 1, 1]}, "max": 3, "mean": 1.5, "median": 1.5, "min": 0, "nan_count": 0, "nan_proportion": 0.0, "std": 1.29099, } float_column = statistics[2] assert "column_name" in float_column assert "column_statistics" in float_column assert "column_type" in float_column assert float_column["column_name"] == "col_3" > assert float_column["column_type"] == "float" E AssertionError: assert 'int' == 'float' E - float E + int tests/test_14_statistics.py:72: AssertionError =========================== short test summary info ============================ FAILED tests/test_14_statistics.py::test_statistics_endpoint - AssertionError: assert 'int' == 'float' - float + int ``` This bug was introduced after: - #6914 We have reported the issue to pandas: - https://github.com/pandas-dev/pandas/issues/58866
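Note that the CI failure above only surfaces the regression through the statistics endpoint; the JSON-loader repro sketch attached to the related PR report (#6938) reproduces it directly with `load_dataset("json", ...)` on a file of round floats.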
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6937/timeline
open
false
6,937
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,326,119,853
https://api.github.com/repos/huggingface/datasets/issues/6936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6936/events
[]
null
2024-07-22T23:08:42Z
[]
https://github.com/huggingface/datasets/issues/6936
NONE
null
null
null
[ "I got the same issue. Any updates so far for this issue?" ]
save_to_disk() freezes when saving on s3 bucket with multiprocessing
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6936/reactions" }
I_kwDODunzps6KpcWt
null
2024-05-30T16:48:39Z
https://api.github.com/repos/huggingface/datasets/issues/6936/comments
### Describe the bug I'm trying to save a `Dataset` using the `save_to_disk()` function with: - `num_proc > 1` - `dataset_path` being an S3 bucket path, e.g. "s3://{bucket_name}/{dataset_folder}/" The HF progress bar shows up, but the saving does not seem to start. When using one processor only (`num_proc=1`), everything works fine. When saving the dataset on local disk (as opposed to an S3 bucket) with `num_proc > 1`, everything works fine. Thank you for your help! :) ### Steps to reproduce the bug I tried without any storage options: ``` from datasets import load_dataset sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, ) ``` and with the specific s3fs storage options: ``` from datasets import load_dataset from s3fs import S3FileSystem def get_s3fs(): return S3FileSystem() sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, storage_options=get_s3fs().storage_options, # also tried: storage_options=S3FileSystem().storage_options ) ``` I'm guessing I might be using the `storage_options` parameter wrongly, but I didn't find anything online that made it work. **NB**: Behavior is the same when trying to save the whole `DatasetDict`. ### Expected behavior Progress bar fills in and saving is carried out. ### Environment info `datasets==2.18.0`
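A workaround sketch while the root cause is unclear: do the multi-process save on local disk, then copy the finished directory to S3 in a separate single-process step (bucket name and paths are placeholders taken from the report):

```
import tempfile

from datasets import load_dataset
from s3fs import S3FileSystem

sandbox_ds = load_dataset("openai_humaneval")

with tempfile.TemporaryDirectory() as tmp_dir:
    # The multi-process save works on local disk.
    sandbox_ds["test"].save_to_disk(tmp_dir, num_proc=4)
    # Upload the dataset directory to S3 afterwards.
    S3FileSystem().put(
        tmp_dir, "s3://bucket-name/test_multiprocessing_saving/", recursive=True
    )
```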
{ "avatar_url": "https://avatars.githubusercontent.com/u/54974949?v=4", "events_url": "https://api.github.com/users/ycattan/events{/privacy}", "followers_url": "https://api.github.com/users/ycattan/followers", "following_url": "https://api.github.com/users/ycattan/following{/other_user}", "gists_url": "https://api.github.com/users/ycattan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ycattan", "id": 54974949, "login": "ycattan", "node_id": "MDQ6VXNlcjU0OTc0OTQ5", "organizations_url": "https://api.github.com/users/ycattan/orgs", "received_events_url": "https://api.github.com/users/ycattan/received_events", "repos_url": "https://api.github.com/users/ycattan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ycattan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ycattan/subscriptions", "type": "User", "url": "https://api.github.com/users/ycattan" }
https://api.github.com/repos/huggingface/datasets/issues/6936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6936/timeline
open
false
6,936
null
null
null
false
2,325,612,022
https://api.github.com/repos/huggingface/datasets/issues/6935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6935/events
[]
null
2024-05-30T12:53:36Z
[]
https://github.com/huggingface/datasets/issues/6935
NONE
null
null
null
[]
Support for pathlib.Path in datasets 2.19.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6935/reactions" }
I_kwDODunzps6KngX2
null
2024-05-30T12:53:36Z
https://api.github.com/repos/huggingface/datasets/issues/6935/comments
### Describe the bug After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle? ### Steps to reproduce the bug ``` from datasets import Dataset import pathlib path = pathlib.Path("./my_out_path") Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(path) ``` This results in an error when using datasets 2.19: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/Users/jb/scratch/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1515, in save_to_disk fs, _ = url_to_fs(dataset_path, **(storage_options or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 383, in url_to_fs chain = _un_chain(url, kwargs) ^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 323, in _un_chain if "::" in path ^^^^^^^^^^^^ TypeError: argument of type 'PosixPath' is not iterable ``` Converting to str works, however. ``` Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(str(path)) ``` ### Expected behavior My dataset gets saved to disk without an error. ### Environment info aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.19.0 dill==0.3.8 filelock==3.14.0 frozenlist==1.4.1 fsspec==2024.3.1 huggingface-hub==0.23.2 idna==3.7 multidict==6.0.5 multiprocess==0.70.16 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.1.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 requests==2.32.3 six==1.16.0 tqdm==4.66.4 typing_extensions==4.12.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4
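A sketch of the kind of coercion that would restore the old behavior, whether applied by the caller or inside `save_to_disk` before the `url_to_fs` call (the helper name is made up; this is not the actual fix):

```
import os
from pathlib import Path

def to_fsspec_path(dataset_path) -> str:
    # fsspec's url_to_fs expects a string; accept str and any os.PathLike
    # (e.g. pathlib.Path) and hand fsspec a plain string either way.
    if isinstance(dataset_path, os.PathLike):
        return os.fspath(dataset_path)
    return dataset_path

assert to_fsspec_path(Path("my_out_path")) == "my_out_path"
assert to_fsspec_path("s3://bucket/ds") == "s3://bucket/ds"
```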
{ "avatar_url": "https://avatars.githubusercontent.com/u/12202811?v=4", "events_url": "https://api.github.com/users/lamyiowce/events{/privacy}", "followers_url": "https://api.github.com/users/lamyiowce/followers", "following_url": "https://api.github.com/users/lamyiowce/following{/other_user}", "gists_url": "https://api.github.com/users/lamyiowce/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lamyiowce", "id": 12202811, "login": "lamyiowce", "node_id": "MDQ6VXNlcjEyMjAyODEx", "organizations_url": "https://api.github.com/users/lamyiowce/orgs", "received_events_url": "https://api.github.com/users/lamyiowce/received_events", "repos_url": "https://api.github.com/users/lamyiowce/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lamyiowce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lamyiowce/subscriptions", "type": "User", "url": "https://api.github.com/users/lamyiowce" }
https://api.github.com/repos/huggingface/datasets/issues/6935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6935/timeline
open
false
6,935
null
null
null
false
2,325,341,717
https://api.github.com/repos/huggingface/datasets/issues/6934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6934/events
[]
null
2024-05-31T10:25:08Z
[]
https://github.com/huggingface/datasets/pull/6934
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6934). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005218 / 0.011353 (-0.006135) | 0.003313 / 0.011008 (-0.007695) | 0.062992 / 0.038508 (0.024484) | 0.029621 / 0.023109 (0.006512) | 0.244421 / 0.275898 (-0.031477) | 0.267178 / 0.323480 (-0.056302) | 0.002986 / 0.007986 (-0.005000) | 0.002607 / 0.004328 (-0.001721) | 0.049149 / 0.004250 (0.044898) | 0.045362 / 0.037052 (0.008310) | 0.252862 / 0.258489 (-0.005627) | 0.286326 / 0.293841 (-0.007515) | 0.027888 / 0.128546 (-0.100658) | 0.010295 / 0.075646 (-0.065352) | 0.205525 / 0.419271 (-0.213746) | 0.036696 / 0.043533 (-0.006837) | 0.248716 / 0.255139 (-0.006423) | 0.263803 / 0.283200 (-0.019397) | 0.016926 / 0.141683 (-0.124757) | 1.123093 / 1.452155 (-0.329062) | 1.155434 / 1.492716 (-0.337282) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092349 / 0.018006 (0.074343) | 0.298154 / 0.000490 (0.297664) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.061983 / 0.014526 (0.047457) | 0.075043 / 0.176557 (-0.101514) | 0.120678 / 0.737135 (-0.616457) | 0.074917 / 0.296338 (-0.221422) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290558 / 0.215209 (0.075349) | 2.842635 / 2.077655 (0.764981) | 1.485761 / 1.504120 (-0.018359) | 1.346948 / 1.541195 (-0.194247) | 1.352424 / 1.468490 (-0.116066) | 0.564567 / 4.584777 (-4.020210) | 2.393583 / 3.745712 (-1.352129) | 2.654061 / 5.269862 (-2.615800) | 1.729154 / 4.565676 (-2.836523) | 0.064652 / 0.424275 (-0.359623) | 0.004973 / 0.007607 (-0.002634) | 0.334924 / 0.226044 (0.108879) | 3.330518 / 2.268929 (1.061590) | 1.773848 / 55.444624 (-53.670776) | 1.513796 / 6.876477 (-5.362681) | 1.676492 / 2.142072 (-0.465580) | 0.650551 / 4.805227 (-4.154677) | 0.118423 / 6.500664 (-6.382241) | 0.042700 / 0.075469 (-0.032769) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943394 / 1.841788 (-0.898394) | 11.235766 / 8.074308 (3.161458) | 9.896586 / 10.191392 (-0.294806) | 0.130174 / 0.680424 (-0.550249) | 0.014148 / 0.534201 (-0.520053) | 0.284002 / 0.579283 (-0.295281) | 0.261354 / 0.434364 (-0.173010) | 0.320839 / 0.540337 (-0.219499) | 0.422399 / 1.386936 (-0.964537) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005496 / 0.011353 (-0.005857) | 0.003603 / 0.011008 (-0.007406) | 0.050104 / 0.038508 (0.011596) | 0.032939 / 0.023109 (0.009830) | 0.265643 / 0.275898 (-0.010255) | 0.291819 / 0.323480 (-0.031661) | 0.004273 / 0.007986 (-0.003713) | 0.002715 / 0.004328 (-0.001613) | 0.049191 / 0.004250 (0.044941) | 0.040782 / 0.037052 (0.003730) | 0.276562 / 0.258489 (0.018072) | 0.314307 / 0.293841 (0.020466) | 0.029878 / 0.128546 (-0.098669) | 0.010134 / 0.075646 (-0.065513) | 0.058686 / 0.419271 (-0.360585) | 0.033562 / 0.043533 (-0.009971) | 0.265961 / 0.255139 (0.010822) | 0.282009 / 0.283200 (-0.001191) | 0.018956 / 0.141683 (-0.122727) | 1.149668 / 1.452155 (-0.302487) | 1.192242 / 1.492716 (-0.300474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089449 / 0.018006 (0.071443) | 0.300346 / 0.000490 (0.299856) | 0.000198 / 0.000200 (-0.000001) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022094 / 0.037411 (-0.015317) | 0.075987 / 0.014526 (0.061461) | 0.088191 / 0.176557 (-0.088365) | 0.127698 / 0.737135 (-0.609437) | 0.089642 / 0.296338 (-0.206696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299127 / 0.215209 (0.083918) | 2.961219 / 2.077655 (0.883565) | 1.589108 / 1.504120 (0.084988) | 1.464060 / 1.541195 (-0.077135) | 1.475249 / 1.468490 (0.006759) | 0.569041 / 4.584777 (-4.015736) | 0.966965 / 3.745712 (-2.778747) | 2.653049 / 5.269862 (-2.616813) | 1.733650 / 4.565676 (-2.832026) | 0.062537 / 0.424275 (-0.361738) | 0.005003 / 0.007607 (-0.002605) | 0.353345 / 0.226044 (0.127301) | 3.432888 / 2.268929 (1.163960) | 1.953217 / 55.444624 (-53.491407) | 1.651995 / 6.876477 (-5.224482) | 1.764549 / 2.142072 (-0.377523) | 0.647255 / 4.805227 (-4.157973) | 0.116827 / 6.500664 (-6.383837) | 0.040765 / 0.075469 (-0.034704) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985490 / 1.841788 (-0.856298) | 11.965147 / 8.074308 (3.890839) | 10.488286 / 10.191392 (0.296894) | 0.142134 / 0.680424 (-0.538290) | 0.015415 / 0.534201 (-0.518786) | 0.289864 / 0.579283 (-0.289419) | 0.122778 / 0.434364 (-0.311586) | 0.328691 / 0.540337 (-0.211647) | 0.422677 / 1.386936 (-0.964259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#456f790d2c2e9181bc305ab3d54fe2ca58742b9b \"CML watermark\")\n", "There was an incident in hub-ci that invalidated our token. It's been fixed so I reverted this change" ]
Revert ci user
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6934/reactions" }
PR_kwDODunzps5w_laB
{ "diff_url": "https://github.com/huggingface/datasets/pull/6934.diff", "html_url": "https://github.com/huggingface/datasets/pull/6934", "merged_at": "2024-05-30T10:45:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6934.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6934" }
2024-05-30T10:45:26Z
https://api.github.com/repos/huggingface/datasets/issues/6934/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6934/timeline
closed
false
6,934
null
2024-05-30T10:45:37Z
null
true
2,325,300,800
https://api.github.com/repos/huggingface/datasets/issues/6933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6933/events
[]
null
2024-05-30T10:30:54Z
[]
https://github.com/huggingface/datasets/pull/6933
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004937 / 0.011353 (-0.006416) | 0.003706 / 0.011008 (-0.007302) | 0.062627 / 0.038508 (0.024119) | 0.031372 / 0.023109 (0.008263) | 0.246616 / 0.275898 (-0.029282) | 0.272196 / 0.323480 (-0.051284) | 0.004129 / 0.007986 (-0.003856) | 0.002766 / 0.004328 (-0.001562) | 0.049975 / 0.004250 (0.045725) | 0.045098 / 0.037052 (0.008046) | 0.261802 / 0.258489 (0.003313) | 0.290088 / 0.293841 (-0.003753) | 0.027082 / 0.128546 (-0.101465) | 0.010442 / 0.075646 (-0.065205) | 0.201795 / 0.419271 (-0.217477) | 0.037081 / 0.043533 (-0.006452) | 0.249500 / 0.255139 (-0.005639) | 0.268800 / 0.283200 (-0.014399) | 0.017556 / 0.141683 (-0.124127) | 1.137201 / 1.452155 (-0.314953) | 1.186993 / 1.492716 (-0.305723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097426 / 0.018006 (0.079419) | 0.303653 / 0.000490 (0.303163) | 0.000235 / 0.000200 (0.000035) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020206 / 0.037411 (-0.017206) | 0.063673 / 0.014526 (0.049147) | 0.076173 / 0.176557 (-0.100383) | 0.122459 / 0.737135 (-0.614676) | 0.076958 / 0.296338 (-0.219380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282146 / 0.215209 (0.066937) | 2.785682 / 2.077655 (0.708027) | 1.468847 / 1.504120 (-0.035273) | 1.346731 / 1.541195 (-0.194464) | 1.378459 / 1.468490 (-0.090031) | 0.564961 / 4.584777 (-4.019816) | 2.400095 / 3.745712 (-1.345617) | 2.658285 / 5.269862 (-2.611577) | 1.747873 / 4.565676 (-2.817803) | 0.063763 / 0.424275 (-0.360512) | 0.004969 / 0.007607 (-0.002638) | 0.337764 / 0.226044 (0.111720) | 3.309568 / 2.268929 (1.040639) | 1.812516 / 55.444624 (-53.632109) | 1.521519 / 6.876477 (-5.354957) | 1.690091 / 2.142072 (-0.451982) | 0.640922 / 4.805227 (-4.164305) | 0.119291 / 6.500664 (-6.381373) | 0.042195 / 0.075469 (-0.033274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965327 / 1.841788 (-0.876461) | 11.538832 / 8.074308 (3.464523) | 9.594644 / 10.191392 (-0.596748) | 0.144687 / 0.680424 (-0.535737) | 0.014049 / 0.534201 (-0.520152) | 0.296873 / 0.579283 (-0.282410) | 0.269281 / 0.434364 (-0.165083) | 0.325091 / 0.540337 (-0.215246) | 0.420917 / 1.386936 (-0.966019) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003168 / 0.011008 (-0.007840) | 0.049301 / 0.038508 (0.010793) | 0.032248 / 0.023109 (0.009139) | 0.266463 / 0.275898 (-0.009435) | 0.293311 / 0.323480 (-0.030168) | 0.004185 / 0.007986 (-0.003800) | 0.002681 / 0.004328 (-0.001647) | 0.048644 / 0.004250 (0.044393) | 0.040366 / 0.037052 (0.003314) | 0.280345 / 0.258489 (0.021856) | 0.312745 / 0.293841 (0.018904) | 0.029616 / 0.128546 (-0.098930) | 0.010001 / 0.075646 (-0.065646) | 0.057365 / 0.419271 (-0.361906) | 0.033189 / 0.043533 (-0.010344) | 0.267601 / 0.255139 (0.012462) | 0.285647 / 0.283200 (0.002448) | 0.017119 / 0.141683 (-0.124564) | 1.139776 / 1.452155 (-0.312378) | 1.172451 / 1.492716 (-0.320266) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095462 / 0.018006 (0.077455) | 0.303009 / 0.000490 (0.302519) | 0.000227 / 0.000200 (0.000027) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023026 / 0.037411 (-0.014385) | 0.077905 / 0.014526 (0.063380) | 0.087275 / 0.176557 (-0.089282) | 0.127355 / 0.737135 (-0.609780) | 0.088940 / 0.296338 (-0.207399) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298267 / 0.215209 (0.083058) | 2.894679 / 2.077655 (0.817024) | 1.568663 / 1.504120 (0.064543) | 1.438342 / 1.541195 (-0.102853) | 1.456110 / 1.468490 (-0.012380) | 0.556337 / 4.584777 (-4.028440) | 0.969795 / 3.745712 (-2.775917) | 2.667348 / 5.269862 (-2.602513) | 1.767169 / 4.565676 (-2.798507) | 0.060969 / 0.424275 (-0.363306) | 0.005009 / 0.007607 (-0.002598) | 0.343299 / 0.226044 (0.117255) | 3.396529 / 2.268929 (1.127601) | 1.889816 / 55.444624 (-53.554808) | 1.635077 / 6.876477 (-5.241400) | 1.795238 / 2.142072 (-0.346835) | 0.631876 / 4.805227 (-4.173352) | 0.115483 / 6.500664 (-6.385181) | 0.041772 / 0.075469 (-0.033697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008423 / 1.841788 (-0.833364) | 12.432488 / 8.074308 (4.358180) | 10.418002 / 10.191392 (0.226610) | 0.142395 / 0.680424 (-0.538029) | 0.015718 / 0.534201 (-0.518483) | 0.281917 / 0.579283 (-0.297366) | 0.132619 / 0.434364 (-0.301745) | 0.318500 / 0.540337 (-0.221838) | 0.410798 / 1.386936 (-0.976138) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3d6cd158d2e3bb9030fea7c5a9580b9d34d721ac \"CML watermark\")\n" ]
update ci user
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6933/reactions" }
PR_kwDODunzps5w_cW4
{ "diff_url": "https://github.com/huggingface/datasets/pull/6933.diff", "html_url": "https://github.com/huggingface/datasets/pull/6933", "merged_at": "2024-05-30T10:23:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6933.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6933" }
2024-05-30T10:23:02Z
https://api.github.com/repos/huggingface/datasets/issues/6933/comments
The token is OK to be public since it's only for the hub-ci.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6933/timeline
closed
false
6,933
null
2024-05-30T10:23:12Z
null
true
2,324,729,267
https://api.github.com/repos/huggingface/datasets/issues/6932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6932/events
[]
null
2024-06-04T12:56:20Z
[]
https://github.com/huggingface/datasets/pull/6932
CONTRIBUTOR
null
false
null
[ "thanks !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005050 / 0.011353 (-0.006303) | 0.003786 / 0.011008 (-0.007222) | 0.062406 / 0.038508 (0.023898) | 0.029459 / 0.023109 (0.006349) | 0.262388 / 0.275898 (-0.013510) | 0.274119 / 0.323480 (-0.049361) | 0.004085 / 0.007986 (-0.003901) | 0.002754 / 0.004328 (-0.001574) | 0.048779 / 0.004250 (0.044529) | 0.046187 / 0.037052 (0.009135) | 0.263513 / 0.258489 (0.005024) | 0.294260 / 0.293841 (0.000419) | 0.027391 / 0.128546 (-0.101155) | 0.010567 / 0.075646 (-0.065080) | 0.200225 / 0.419271 (-0.219046) | 0.036165 / 0.043533 (-0.007367) | 0.251757 / 0.255139 (-0.003382) | 0.268271 / 0.283200 (-0.014928) | 0.018446 / 0.141683 (-0.123237) | 1.125787 / 1.452155 (-0.326368) | 1.163172 / 1.492716 (-0.329544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004428 / 0.018006 (-0.013578) | 0.301730 / 0.000490 (0.301241) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019424 / 0.037411 (-0.017987) | 0.062269 / 0.014526 (0.047743) | 0.074289 / 0.176557 (-0.102268) | 0.121069 / 0.737135 (-0.616067) | 0.076485 / 0.296338 (-0.219853) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277315 / 0.215209 (0.062106) | 2.742027 / 2.077655 (0.664372) | 1.472970 / 1.504120 (-0.031150) | 1.350065 / 1.541195 (-0.191130) 
| 1.378806 / 1.468490 (-0.089684) | 0.567742 / 4.584777 (-4.017035) | 2.376752 / 3.745712 (-1.368960) | 2.662459 / 5.269862 (-2.607402) | 1.750396 / 4.565676 (-2.815280) | 0.063589 / 0.424275 (-0.360686) | 0.004987 / 0.007607 (-0.002620) | 0.326441 / 0.226044 (0.100397) | 3.224125 / 2.268929 (0.955197) | 1.801623 / 55.444624 (-53.643001) | 1.534712 / 6.876477 (-5.341765) | 1.652365 / 2.142072 (-0.489708) | 0.647624 / 4.805227 (-4.157603) | 0.117161 / 6.500664 (-6.383504) | 0.041908 / 0.075469 (-0.033561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954879 / 1.841788 (-0.886909) | 11.571875 / 8.074308 (3.497567) | 9.489146 / 10.191392 (-0.702246) | 0.141630 / 0.680424 (-0.538794) | 0.014764 / 0.534201 (-0.519437) | 0.285003 / 0.579283 (-0.294280) | 0.266138 / 0.434364 (-0.168226) | 0.323527 / 0.540337 (-0.216810) | 0.419658 / 1.386936 (-0.967278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005359 / 0.011353 (-0.005994) | 0.003615 / 0.011008 (-0.007393) | 0.050692 / 0.038508 (0.012184) | 0.033632 / 0.023109 (0.010522) | 0.273614 / 0.275898 (-0.002284) | 0.303780 / 0.323480 (-0.019700) | 0.004171 / 0.007986 (-0.003814) | 0.002687 / 0.004328 (-0.001642) | 0.050002 / 0.004250 (0.045751) | 0.040824 / 0.037052 (0.003772) | 0.287759 / 0.258489 (0.029270) | 0.324144 / 0.293841 (0.030303) | 0.029101 / 0.128546 (-0.099445) | 0.010244 / 0.075646 (-0.065402) | 0.059599 / 0.419271 (-0.359672) | 0.033146 / 0.043533 (-0.010387) | 0.276592 / 0.255139 (0.021453) | 0.293670 / 0.283200 (0.010470) | 0.018270 / 0.141683 (-0.123413) | 1.126216 / 1.452155 (-0.325939) | 1.155658 / 1.492716 (-0.337058) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093537 / 0.018006 (0.075530) | 0.302706 / 0.000490 (0.302216) | 0.000216 / 0.000200 (0.000016) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | 
sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023118 / 0.037411 (-0.014293) | 0.076995 / 0.014526 (0.062469) | 0.089476 / 0.176557 (-0.087080) | 0.130705 / 0.737135 (-0.606430) | 0.090258 / 0.296338 (-0.206081) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285920 / 0.215209 (0.070710) | 2.830581 / 2.077655 (0.752927) | 1.561695 / 1.504120 (0.057575) | 1.522791 / 1.541195 (-0.018403) | 1.429875 / 1.468490 (-0.038615) | 0.566683 / 4.584777 (-4.018094) | 0.957157 / 3.745712 (-2.788555) | 2.663718 / 5.269862 (-2.606143) | 1.748885 / 4.565676 (-2.816791) | 0.063697 / 0.424275 (-0.360578) | 0.004996 / 0.007607 (-0.002611) | 0.340042 / 0.226044 (0.113998) | 3.352792 / 2.268929 (1.083863) | 1.907189 / 55.444624 (-53.537435) | 1.608177 / 6.876477 (-5.268300) | 1.775438 / 2.142072 (-0.366634) | 0.645264 / 4.805227 (-4.159963) | 0.116441 / 6.500664 (-6.384223) | 0.040671 / 0.075469 (-0.034798) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005050 / 1.841788 (-0.836738) | 12.040057 / 8.074308 (3.965749) | 10.213560 / 10.191392 (0.022168) | 0.138383 / 0.680424 (-0.542041) | 0.015409 / 0.534201 (-0.518792) | 0.283509 / 0.579283 (-0.295774) | 0.125501 / 0.434364 (-0.308863) | 0.318816 / 0.540337 (-0.221521) | 0.415454 / 1.386936 (-0.971482) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cbb29cea0e21dc0eb8f7de01d0c6ed5718d6ce4e \"CML watermark\")\n" ]
Update dataset_dict.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6932/reactions" }
PR_kwDODunzps5w9d7w
{ "diff_url": "https://github.com/huggingface/datasets/pull/6932.diff", "html_url": "https://github.com/huggingface/datasets/pull/6932", "merged_at": "2024-06-04T12:50:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6932.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6932" }
2024-05-30T05:22:35Z
https://api.github.com/repos/huggingface/datasets/issues/6932/comments
shape returns (number of rows, number of columns)
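A minimal sketch of the documented behaviour (the toy data below is illustrative, not from the PR):

```python
from datasets import Dataset

# A toy in-memory dataset with 3 rows and 2 columns.
ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})

print(ds.shape)        # (3, 2) -> (number of rows, number of columns)
print(ds.num_rows)     # 3
print(ds.num_columns)  # 2
```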
{ "avatar_url": "https://avatars.githubusercontent.com/u/20263729?v=4", "events_url": "https://api.github.com/users/Arunprakash-A/events{/privacy}", "followers_url": "https://api.github.com/users/Arunprakash-A/followers", "following_url": "https://api.github.com/users/Arunprakash-A/following{/other_user}", "gists_url": "https://api.github.com/users/Arunprakash-A/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Arunprakash-A", "id": 20263729, "login": "Arunprakash-A", "node_id": "MDQ6VXNlcjIwMjYzNzI5", "organizations_url": "https://api.github.com/users/Arunprakash-A/orgs", "received_events_url": "https://api.github.com/users/Arunprakash-A/received_events", "repos_url": "https://api.github.com/users/Arunprakash-A/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Arunprakash-A/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arunprakash-A/subscriptions", "type": "User", "url": "https://api.github.com/users/Arunprakash-A" }
https://api.github.com/repos/huggingface/datasets/issues/6932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6932/timeline
closed
false
6,932
null
2024-06-04T12:50:13Z
null
true
2,323,457,525
https://api.github.com/repos/huggingface/datasets/issues/6931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6931/events
[]
null
2024-05-29T16:33:18Z
[]
https://github.com/huggingface/datasets/pull/6931
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6931). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005362 / 0.011353 (-0.005991) | 0.003969 / 0.011008 (-0.007039) | 0.063390 / 0.038508 (0.024882) | 0.030814 / 0.023109 (0.007705) | 0.246891 / 0.275898 (-0.029007) | 0.271047 / 0.323480 (-0.052432) | 0.004036 / 0.007986 (-0.003950) | 0.002732 / 0.004328 (-0.001597) | 0.049466 / 0.004250 (0.045216) | 0.047227 / 0.037052 (0.010175) | 0.255978 / 0.258489 (-0.002511) | 0.297956 / 0.293841 (0.004115) | 0.028641 / 0.128546 (-0.099905) | 0.010510 / 0.075646 (-0.065136) | 0.204268 / 0.419271 (-0.215004) | 0.037093 / 0.043533 (-0.006440) | 0.247287 / 0.255139 (-0.007852) | 0.263830 / 0.283200 (-0.019370) | 0.018335 / 0.141683 (-0.123348) | 1.116074 / 1.452155 (-0.336081) | 1.182589 / 1.492716 (-0.310128) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094435 / 0.018006 (0.076429) | 0.310422 / 0.000490 (0.309932) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019220 / 0.037411 (-0.018192) | 0.062090 / 0.014526 (0.047564) | 0.074511 / 0.176557 (-0.102046) | 0.121825 / 0.737135 (-0.615310) | 0.075406 / 0.296338 (-0.220933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281185 / 0.215209 (0.065976) | 2.770157 / 2.077655 (0.692502) | 1.472095 / 1.504120 (-0.032025) | 1.339342 / 1.541195 (-0.201853) | 1.374621 / 1.468490 (-0.093869) | 0.566607 / 4.584777 (-4.018170) | 2.357642 / 3.745712 (-1.388070) | 2.735034 / 5.269862 (-2.534827) | 1.782779 / 4.565676 (-2.782897) | 0.063046 / 0.424275 (-0.361229) | 0.005015 / 0.007607 (-0.002592) | 0.336690 / 0.226044 (0.110646) | 3.360955 / 2.268929 (1.092027) | 1.804424 / 55.444624 (-53.640200) | 1.517334 / 6.876477 (-5.359143) | 1.665254 / 2.142072 (-0.476818) | 0.627185 / 4.805227 (-4.178042) | 0.114388 / 6.500664 (-6.386276) | 0.041788 / 0.075469 (-0.033681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975270 / 1.841788 (-0.866517) | 11.647633 / 8.074308 (3.573325) | 9.872873 / 10.191392 (-0.318519) | 0.141744 / 0.680424 (-0.538680) | 0.014524 / 0.534201 (-0.519677) | 0.286697 / 0.579283 (-0.292586) | 0.266837 / 0.434364 (-0.167527) | 0.328513 / 0.540337 (-0.211825) | 0.424676 / 1.386936 (-0.962260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005654 / 0.011353 (-0.005699) | 0.004058 / 0.011008 (-0.006950) | 0.051030 / 0.038508 (0.012522) | 0.033085 / 0.023109 (0.009976) | 0.307532 / 0.275898 (0.031634) | 0.335672 / 0.323480 (0.012192) | 0.004244 / 0.007986 (-0.003742) | 0.002842 / 0.004328 (-0.001487) | 0.050131 / 0.004250 (0.045880) | 0.040709 / 0.037052 (0.003656) | 0.319514 / 0.258489 (0.061025) | 0.357153 / 0.293841 (0.063312) | 0.029014 / 0.128546 (-0.099532) | 0.010999 / 0.075646 (-0.064648) | 0.058789 / 0.419271 (-0.360482) | 0.033284 / 0.043533 (-0.010249) | 0.310783 / 0.255139 (0.055644) | 0.331466 / 0.283200 (0.048266) | 0.018998 / 0.141683 (-0.122685) | 1.138822 / 1.452155 (-0.313332) | 1.180731 / 1.492716 (-0.311985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095725 / 0.018006 (0.077719) | 0.302788 / 0.000490 (0.302298) | 0.000206 / 0.000200 (0.000006) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023247 / 0.037411 (-0.014164) | 0.077619 / 0.014526 (0.063093) | 0.090489 / 0.176557 (-0.086067) | 0.132033 / 0.737135 (-0.605102) | 0.090964 / 0.296338 (-0.205374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297912 / 0.215209 (0.082703) | 2.954107 / 2.077655 (0.876452) | 1.591155 / 1.504120 (0.087035) | 1.469217 / 1.541195 (-0.071978) | 1.513315 / 1.468490 (0.044825) | 0.562728 / 4.584777 (-4.022049) | 0.960093 / 3.745712 (-2.785620) | 2.852106 / 5.269862 (-2.417756) | 1.861668 / 4.565676 (-2.704009) | 0.063530 / 0.424275 (-0.360745) | 0.005194 / 0.007607 (-0.002413) | 0.351116 / 0.226044 (0.125072) | 3.498787 / 2.268929 (1.229859) | 1.952223 / 55.444624 (-53.492401) | 1.696208 / 6.876477 (-5.180269) | 1.861650 / 2.142072 (-0.280422) | 0.653494 / 4.805227 (-4.151733) | 0.123797 / 6.500664 (-6.376868) | 0.042696 / 0.075469 (-0.032773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006657 / 1.841788 (-0.835131) | 12.659771 / 8.074308 (4.585463) | 10.672140 / 10.191392 (0.480748) | 0.143726 / 0.680424 (-0.536698) | 0.015895 / 0.534201 (-0.518306) | 0.285952 / 0.579283 (-0.293331) | 0.126078 / 0.434364 (-0.308286) | 0.325943 / 0.540337 (-0.214395) | 0.410774 / 1.386936 (-0.976162) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88d53d1ae762bec6736fffb000e6540e52bf1998 \"CML watermark\")\n" ]
[WebDataset] Support compressed files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6931/reactions" }
PR_kwDODunzps5w5I-Y
{ "diff_url": "https://github.com/huggingface/datasets/pull/6931.diff", "html_url": "https://github.com/huggingface/datasets/pull/6931", "merged_at": "2024-05-29T16:24:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6931.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6931" }
2024-05-29T14:19:06Z
https://api.github.com/repos/huggingface/datasets/issues/6931/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6931/timeline
closed
false
6,931
null
2024-05-29T16:24:21Z
null
true
2,323,225,922
https://api.github.com/repos/huggingface/datasets/issues/6930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6930/events
[]
null
2024-07-23T06:25:24Z
[]
https://github.com/huggingface/datasets/issues/6930
NONE
null
null
null
[ "How do you solve it ?\r\n", "> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n" ]
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6930/reactions" }
I_kwDODunzps6KeZ1C
null
2024-05-29T12:40:05Z
https://api.github.com/repos/huggingface/datasets/issues/6930/comments
### Describe the bug When I run en = load_dataset("allenai/c4", "en", streaming=True), I encounter the following error: ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} (raised from raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")). However, running dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') works fine. What is the issue here? ### Steps to reproduce the bug Run this code: import os os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' from datasets import load_dataset en = load_dataset("allenai/c4", "en", streaming=True) ### Expected behavior The dataset loads successfully. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17 - Python version: 3.8.19 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2024.2.0
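For reference, a runnable sketch of the reproduction and the workaround described above (the mirror endpoint and file name are taken directly from the report):

```python
import os

os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # mirror endpoint from the report

from datasets import load_dataset

# Fails in the reported environment (datasets 2.18.0):
# en = load_dataset("allenai/c4", "en", streaming=True)

# Workaround from the report: point `data_files` at an explicit file per
# split so no cross-split format inference is needed.
val = load_dataset(
    "allenai/c4",
    streaming=True,
    data_files={"validation": "en/c4-validation.00003-of-00008.json.gz"},
    split="validation",
)
```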
{ "avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4", "events_url": "https://api.github.com/users/CLL112/events{/privacy}", "followers_url": "https://api.github.com/users/CLL112/followers", "following_url": "https://api.github.com/users/CLL112/following{/other_user}", "gists_url": "https://api.github.com/users/CLL112/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CLL112", "id": 41767521, "login": "CLL112", "node_id": "MDQ6VXNlcjQxNzY3NTIx", "organizations_url": "https://api.github.com/users/CLL112/orgs", "received_events_url": "https://api.github.com/users/CLL112/received_events", "repos_url": "https://api.github.com/users/CLL112/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CLL112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CLL112/subscriptions", "type": "User", "url": "https://api.github.com/users/CLL112" }
https://api.github.com/repos/huggingface/datasets/issues/6930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6930/timeline
open
false
6,930
null
null
null
false
2,322,980,077
https://api.github.com/repos/huggingface/datasets/issues/6929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6929/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-05-29T20:51:56Z
[]
https://github.com/huggingface/datasets/issues/6929
NONE
null
null
null
[ "you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757", "@severo : great !" ]
Avoid downloading the whole dataset when only README.md has been touched on the Hub.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6929/reactions" }
I_kwDODunzps6Kddzt
null
2024-05-29T10:36:06Z
https://api.github.com/repos/huggingface/datasets/issues/6929/comments
### Feature request `datasets.load_dataset()` triggers a new download of the **whole dataset** when only the README.md file has been touched on the Hugging Face Hub, even if the data / parquet files are exactly the same. I believe a re-download is currently triggered whenever the hash of the latest commit on the Hub changes, but is there a clever way to download the dataset again **if and only if** the data itself has been modified? ### Motivation The current behaviour wastes network bandwidth, disk space, and research time. ### Your contribution I don't have time to submit a PR, but I hope a simple solution will emerge from this issue!
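One partial workaround in the meantime, sketched with hypothetical names: pinning `revision` to a specific commit keys the cache to that commit, so later README-only commits on `main` don't invalidate it.

```python
from datasets import load_dataset

# Both the repo id and the commit reference below are placeholders.
ds = load_dataset(
    "some-org/some-dataset",
    revision="pinned-commit-sha",
)
```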
{ "avatar_url": "https://avatars.githubusercontent.com/u/73740254?v=4", "events_url": "https://api.github.com/users/zinc75/events{/privacy}", "followers_url": "https://api.github.com/users/zinc75/followers", "following_url": "https://api.github.com/users/zinc75/following{/other_user}", "gists_url": "https://api.github.com/users/zinc75/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zinc75", "id": 73740254, "login": "zinc75", "node_id": "MDQ6VXNlcjczNzQwMjU0", "organizations_url": "https://api.github.com/users/zinc75/orgs", "received_events_url": "https://api.github.com/users/zinc75/received_events", "repos_url": "https://api.github.com/users/zinc75/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zinc75/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zinc75/subscriptions", "type": "User", "url": "https://api.github.com/users/zinc75" }
https://api.github.com/repos/huggingface/datasets/issues/6929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6929/timeline
open
false
6,929
null
null
null
false
2,322,267,727
https://api.github.com/repos/huggingface/datasets/issues/6928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6928/events
[]
null
2024-06-04T13:08:19Z
[]
https://github.com/huggingface/datasets/pull/6928
CONTRIBUTOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005062 / 0.011353 (-0.006291) | 0.003410 / 0.011008 (-0.007598) | 0.062241 / 0.038508 (0.023733) | 0.030294 / 0.023109 (0.007185) | 0.249249 / 0.275898 (-0.026649) | 0.267718 / 0.323480 (-0.055761) | 0.003047 / 0.007986 (-0.004938) | 0.002661 / 0.004328 (-0.001668) | 0.049142 / 0.004250 (0.044892) | 0.047929 / 0.037052 (0.010877) | 0.255262 / 0.258489 (-0.003227) | 0.286241 / 0.293841 (-0.007600) | 0.027064 / 0.128546 (-0.101482) | 0.010374 / 0.075646 (-0.065273) | 0.201454 / 0.419271 (-0.217818) | 0.036586 / 0.043533 (-0.006947) | 0.255200 / 0.255139 (0.000061) | 0.267660 / 0.283200 (-0.015539) | 0.018621 / 0.141683 (-0.123062) | 1.159821 / 1.452155 (-0.292334) | 1.171597 / 1.492716 (-0.321120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004752 / 0.018006 (-0.013254) | 0.295427 / 0.000490 (0.294937) | 0.000225 / 0.000200 (0.000025) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018914 / 0.037411 (-0.018497) | 0.061180 / 0.014526 (0.046654) | 0.073649 / 0.176557 (-0.102907) | 0.120142 / 0.737135 (-0.616993) | 0.074754 / 0.296338 (-0.221585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286637 / 0.215209 (0.071428) | 2.807941 / 2.077655 (0.730287) | 1.473577 / 1.504120 (-0.030542) | 1.353112 / 1.541195 (-0.188083) | 1.363020 
/ 1.468490 (-0.105470) | 0.567745 / 4.584777 (-4.017032) | 2.384887 / 3.745712 (-1.360826) | 2.685132 / 5.269862 (-2.584730) | 1.755922 / 4.565676 (-2.809755) | 0.062296 / 0.424275 (-0.361979) | 0.004941 / 0.007607 (-0.002666) | 0.346752 / 0.226044 (0.120707) | 3.378623 / 2.268929 (1.109694) | 1.809070 / 55.444624 (-53.635555) | 1.531490 / 6.876477 (-5.344986) | 1.687954 / 2.142072 (-0.454119) | 0.639917 / 4.805227 (-4.165310) | 0.118455 / 6.500664 (-6.382209) | 0.043072 / 0.075469 (-0.032397) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977154 / 1.841788 (-0.864634) | 11.380127 / 8.074308 (3.305819) | 9.621632 / 10.191392 (-0.569760) | 0.141768 / 0.680424 (-0.538655) | 0.014120 / 0.534201 (-0.520081) | 0.285073 / 0.579283 (-0.294210) | 0.264801 / 0.434364 (-0.169563) | 0.322357 / 0.540337 (-0.217981) | 0.431192 / 1.386936 (-0.955744) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005162 / 0.011353 (-0.006191) | 0.003499 / 0.011008 (-0.007509) | 0.049667 / 0.038508 (0.011159) | 0.032473 / 0.023109 (0.009363) | 0.259988 / 0.275898 (-0.015910) | 0.285723 / 0.323480 (-0.037757) | 0.004197 / 0.007986 (-0.003789) | 0.002710 / 0.004328 (-0.001618) | 0.049235 / 0.004250 (0.044984) | 0.040440 / 0.037052 (0.003387) | 0.276791 / 0.258489 (0.018302) | 0.311990 / 0.293841 (0.018149) | 0.029217 / 0.128546 (-0.099329) | 0.010217 / 0.075646 (-0.065429) | 0.057844 / 0.419271 (-0.361427) | 0.032799 / 0.043533 (-0.010734) | 0.260705 / 0.255139 (0.005566) | 0.280439 / 0.283200 (-0.002761) | 0.018682 / 0.141683 (-0.123001) | 1.135946 / 1.452155 (-0.316208) | 1.163144 / 1.492716 (-0.329572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097968 / 0.018006 (0.079961) | 0.309276 / 0.000490 (0.308786) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022623 / 0.037411 (-0.014788) | 0.075471 / 0.014526 (0.060945) | 0.087928 / 0.176557 (-0.088629) | 0.129537 / 0.737135 (-0.607599) | 0.089376 / 0.296338 (-0.206963) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298223 / 0.215209 (0.083014) | 2.940462 / 2.077655 (0.862807) | 1.586024 / 1.504120 (0.081904) | 1.451161 / 1.541195 (-0.090034) | 1.457707 / 1.468490 (-0.010783) | 0.571172 / 4.584777 (-4.013604) | 0.961591 / 3.745712 (-2.784121) | 2.661258 / 5.269862 (-2.608604) | 1.755172 / 4.565676 (-2.810504) | 0.063430 / 0.424275 (-0.360845) | 0.005034 / 0.007607 (-0.002573) | 0.352356 / 0.226044 (0.126312) | 3.454986 / 2.268929 (1.186057) | 1.967375 / 55.444624 (-53.477249) | 1.638465 / 6.876477 (-5.238012) | 1.774098 / 2.142072 (-0.367975) | 0.650094 / 4.805227 (-4.155134) | 0.117377 / 6.500664 (-6.383287) | 0.041229 / 0.075469 (-0.034240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014356 / 1.841788 (-0.827432) | 12.175823 / 8.074308 (4.101515) | 10.657486 / 10.191392 (0.466094) | 0.145080 / 0.680424 (-0.535344) | 0.015563 / 0.534201 (-0.518638) | 0.287093 / 0.579283 (-0.292190) | 0.127164 / 0.434364 (-0.307200) | 0.318518 / 0.540337 (-0.221820) | 0.415333 / 1.386936 (-0.971603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#372078f617d9210c7f073c22f5f6f4fbee52c67f \"CML watermark\")\n" ]
Update process.mdx: Code Listings Fixes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6928/reactions" }
PR_kwDODunzps5w1ECb
{ "diff_url": "https://github.com/huggingface/datasets/pull/6928.diff", "html_url": "https://github.com/huggingface/datasets/pull/6928", "merged_at": "2024-06-04T12:55:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6928.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6928" }
2024-05-29T03:17:07Z
https://api.github.com/repos/huggingface/datasets/issues/6928/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FadyMorris", "id": 16918280, "login": "FadyMorris", "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "repos_url": "https://api.github.com/users/FadyMorris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "type": "User", "url": "https://api.github.com/users/FadyMorris" }
https://api.github.com/repos/huggingface/datasets/issues/6928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6928/timeline
closed
false
6,928
null
2024-06-04T12:55:00Z
null
true
2,322,260,725
https://api.github.com/repos/huggingface/datasets/issues/6927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6927/events
[]
null
2024-05-29T03:12:46Z
[]
https://github.com/huggingface/datasets/pull/6927
CONTRIBUTOR
null
false
null
[]
Update process.mdx: Minor Code Listings Updates and Fixes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6927/reactions" }
PR_kwDODunzps5w1CmF
{ "diff_url": "https://github.com/huggingface/datasets/pull/6927.diff", "html_url": "https://github.com/huggingface/datasets/pull/6927", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6927.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6927" }
2024-05-29T03:09:01Z
https://api.github.com/repos/huggingface/datasets/issues/6927/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FadyMorris", "id": 16918280, "login": "FadyMorris", "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "repos_url": "https://api.github.com/users/FadyMorris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "type": "User", "url": "https://api.github.com/users/FadyMorris" }
https://api.github.com/repos/huggingface/datasets/issues/6927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6927/timeline
closed
false
6,927
null
2024-05-29T03:12:46Z
null
true
2,322,164,287
https://api.github.com/repos/huggingface/datasets/issues/6926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6926/events
[]
null
2024-05-29T03:11:20Z
[]
https://github.com/huggingface/datasets/pull/6926
CONTRIBUTOR
null
false
null
[]
Update process.mdx: Fix code listing in Shard section
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6926/reactions" }
PR_kwDODunzps5w0uII
{ "diff_url": "https://github.com/huggingface/datasets/pull/6926.diff", "html_url": "https://github.com/huggingface/datasets/pull/6926", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6926.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6926" }
2024-05-29T01:25:55Z
https://api.github.com/repos/huggingface/datasets/issues/6926/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FadyMorris", "id": 16918280, "login": "FadyMorris", "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "repos_url": "https://api.github.com/users/FadyMorris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "type": "User", "url": "https://api.github.com/users/FadyMorris" }
https://api.github.com/repos/huggingface/datasets/issues/6926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6926/timeline
closed
false
6,926
null
2024-05-29T03:11:08Z
null
true
2,321,084,967
https://api.github.com/repos/huggingface/datasets/issues/6925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6925/events
[]
null
2024-06-02T14:11:13Z
[]
https://github.com/huggingface/datasets/pull/6925
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6925). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets", "I will add some regression tests before merging.\r\n\r\nAnd I will make a patch release afterwards.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004959 / 0.011353 (-0.006394) | 0.003654 / 0.011008 (-0.007354) | 0.064087 / 0.038508 (0.025579) | 0.031942 / 0.023109 (0.008833) | 0.236830 / 0.275898 (-0.039068) | 0.265359 / 0.323480 (-0.058121) | 0.003108 / 0.007986 (-0.004878) | 0.002824 / 0.004328 (-0.001504) | 0.049102 / 0.004250 (0.044852) | 0.046070 / 0.037052 (0.009017) | 0.248830 / 0.258489 (-0.009659) | 0.283900 / 0.293841 (-0.009941) | 0.027799 / 0.128546 (-0.100747) | 0.010572 / 0.075646 (-0.065074) | 0.223595 / 0.419271 (-0.195677) | 0.036951 / 0.043533 (-0.006582) | 0.238813 / 0.255139 (-0.016326) | 0.253841 / 0.283200 (-0.029359) | 0.018471 / 0.141683 (-0.123212) | 1.131969 / 1.452155 (-0.320186) | 1.173763 / 1.492716 (-0.318954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095504 / 0.018006 (0.077498) | 0.301469 / 0.000490 (0.300979) | 0.000212 / 0.000200 (0.000012) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019194 / 0.037411 (-0.018217) | 0.062313 / 0.014526 (0.047787) | 0.075852 / 0.176557 (-0.100704) | 0.121996 / 0.737135 (-0.615140) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292465 / 0.215209 (0.077256) | 2.910234 / 2.077655 (0.832579) | 1.479672 / 1.504120 (-0.024448) | 1.332281 / 1.541195 (-0.208913) | 1.354095 / 1.468490 (-0.114395) | 0.573438 / 4.584777 (-4.011339) | 2.382406 / 3.745712 (-1.363307) | 2.708289 / 5.269862 (-2.561572) | 1.739665 / 4.565676 (-2.826011) | 0.063514 / 0.424275 (-0.360761) | 0.005008 / 0.007607 (-0.002599) | 0.350070 / 0.226044 (0.124025) | 3.475837 / 2.268929 (1.206909) | 1.804639 / 55.444624 (-53.639985) | 1.520472 / 6.876477 (-5.356005) | 1.658061 / 2.142072 (-0.484011) | 0.648495 / 4.805227 (-4.156732) | 0.118394 / 6.500664 (-6.382270) | 0.042557 / 0.075469 (-0.032912) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960772 / 1.841788 (-0.881016) | 11.451629 / 8.074308 (3.377321) | 9.613331 / 10.191392 (-0.578061) | 0.130259 / 0.680424 (-0.550164) | 0.015828 / 0.534201 (-0.518373) | 0.287581 / 0.579283 (-0.291702) | 0.266517 / 0.434364 (-0.167847) | 0.327334 / 0.540337 (-0.213003) | 0.427881 / 1.386936 (-0.959055) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005364 / 0.011353 (-0.005989) | 0.003723 / 0.011008 (-0.007285) | 0.049990 / 0.038508 (0.011482) | 0.032023 / 0.023109 (0.008913) | 0.258609 / 0.275898 (-0.017289) | 0.281250 / 0.323480 (-0.042230) | 0.004222 / 0.007986 (-0.003764) | 0.002799 / 0.004328 (-0.001529) | 0.049546 / 0.004250 (0.045296) | 0.040298 / 0.037052 (0.003246) | 0.273552 / 0.258489 (0.015063) | 0.304042 / 0.293841 (0.010201) | 0.030116 / 0.128546 (-0.098430) | 0.010792 / 0.075646 (-0.064855) | 0.058427 / 0.419271 (-0.360845) | 0.033415 / 0.043533 (-0.010118) | 0.258794 / 0.255139 (0.003655) | 0.275304 / 0.283200 (-0.007896) | 0.017944 / 0.141683 (-0.123739) | 1.109291 / 1.452155 (-0.342864) | 1.156627 / 1.492716 (-0.336090) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096700 / 0.018006 (0.078693) | 0.301108 / 0.000490 (0.300618) | 0.000208 / 0.000200 (0.000008) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022632 / 0.037411 (-0.014779) | 0.075813 / 0.014526 (0.061287) | 0.090302 / 0.176557 (-0.086254) | 0.130375 / 0.737135 (-0.606760) | 0.089710 / 0.296338 (-0.206629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297091 / 0.215209 (0.081882) | 2.910379 / 2.077655 (0.832725) | 1.570460 / 1.504120 (0.066340) | 1.441619 / 1.541195 (-0.099576) | 1.442417 / 1.468490 (-0.026073) | 0.570034 / 4.584777 (-4.014743) | 0.952613 / 3.745712 (-2.793099) | 2.659274 / 5.269862 (-2.610588) | 1.751013 / 4.565676 (-2.814663) | 0.064639 / 0.424275 (-0.359636) | 0.005145 / 0.007607 (-0.002462) | 0.347478 / 0.226044 (0.121434) | 3.443862 / 2.268929 (1.174933) | 1.897246 / 55.444624 (-53.547379) | 1.609267 / 6.876477 (-5.267210) | 1.755116 / 2.142072 (-0.386956) | 0.658982 / 4.805227 (-4.146245) | 0.117000 / 6.500664 (-6.383664) | 0.041453 / 0.075469 (-0.034016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005843 / 1.841788 (-0.835944) | 12.101306 / 8.074308 (4.026998) | 10.370706 / 10.191392 (0.179314) | 0.139374 / 0.680424 (-0.541050) | 0.015605 / 0.534201 (-0.518596) | 0.286978 / 0.579283 (-0.292305) | 0.122951 / 0.434364 (-0.311413) | 0.331729 / 0.540337 (-0.208609) | 0.422088 / 1.386936 (-0.964848) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#157585f964b1c7f675860af0d21712555b34aabc \"CML watermark\")\n" ]
Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6925/reactions" }
PR_kwDODunzps5wxDRE
{ "diff_url": "https://github.com/huggingface/datasets/pull/6925.diff", "html_url": "https://github.com/huggingface/datasets/pull/6925", "merged_at": "2024-05-31T17:10:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6925.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6925" }
2024-05-28T13:33:38Z
https://api.github.com/repos/huggingface/datasets/issues/6925/comments
Fix `NonMatchingSplitsSizesError` or `ExpectedMoreSplits` errors for no-code Hub datasets when the user passes: - `data_dir` - `data_files` The proposed solution is to avoid using the exported dataset info (from the Parquet exports) in these cases. The exported info is likewise skipped if the user passes a `revision` other than "main" (so that no extra network requests are made). This PR fixes a bug introduced by: - #6714 Fix #6918, fix #6939.
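A minimal sketch of the guard this PR describes (the function name and signature are hypothetical, not the actual `datasets` source):

```python
# Hypothetical sketch, not the actual datasets source: the exported
# Parquet-export info only describes the default data layout on the
# "main" revision, so any user override must invalidate it.
def can_use_exported_dataset_info(data_dir, data_files, revision) -> bool:
    return data_dir is None and data_files is None and revision in (None, "main")
```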
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6925/timeline
closed
false
6,925
null
2024-05-31T17:10:37Z
null
true
2,320,531,015
https://api.github.com/repos/huggingface/datasets/issues/6924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6924/events
[]
null
2024-05-28T09:07:41Z
[]
https://github.com/huggingface/datasets/issues/6924
NONE
null
null
null
[]
Caching map result of DatasetDict.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions" }
I_kwDODunzps6KUH5H
null
2024-05-28T09:07:41Z
https://api.github.com/repos/huggingface/datasets/issues/6924/comments
Hi! I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins. Changing num_proc induces a recomputation of the map; I'm not sure why, or whether this is expected behavior. Here it says that cached files are loaded sequentially: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006 It seems like I can pass in a fingerprint and load it directly: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125 **Environment Setup:** - Python 3.11.9 - datasets 2.19.1 conda-forge - Linux 6.1.83-1.el9.elrepo.x86_64 **MRE** ```python # raw_datasets and tokenize_function are fixed (defined elsewhere) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=9, remove_columns=['text'], load_from_cache_file=True, desc="Running tokenizer on dataset line_by_line", ) # same call with a different num_proc recomputes instead of hitting the cache tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=5, remove_columns=['text'], load_from_cache_file=True, desc="Running tokenizer on dataset line_by_line", ) ```
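A hedged sketch of the workaround the linked `Dataset.map` signature hints at: pin the cache key via `new_fingerprint`. The fingerprint string is made up, `raw_datasets`/`tokenize_function` are the names from the MRE above, and with `num_proc > 1` the per-rank cache file names may still differ, so this is only a sketch, not a confirmed fix:

```python
# Hedged sketch: fix the fingerprint yourself so the cache lookup does
# not silently change between runs. The value below is hypothetical.
tokenized_train = raw_datasets["train"].map(
    tokenize_function,
    batched=True,
    remove_columns=["text"],
    load_from_cache_file=True,
    new_fingerprint="tokenize-line-by-line-v1",
)
```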
{ "avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4", "events_url": "https://api.github.com/users/MostHumble/events{/privacy}", "followers_url": "https://api.github.com/users/MostHumble/followers", "following_url": "https://api.github.com/users/MostHumble/following{/other_user}", "gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MostHumble", "id": 56939432, "login": "MostHumble", "node_id": "MDQ6VXNlcjU2OTM5NDMy", "organizations_url": "https://api.github.com/users/MostHumble/orgs", "received_events_url": "https://api.github.com/users/MostHumble/received_events", "repos_url": "https://api.github.com/users/MostHumble/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions", "type": "User", "url": "https://api.github.com/users/MostHumble" }
https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6924/timeline
open
false
6,924
null
null
null
false
2,319,292,872
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/events
[]
null
2024-05-27T14:27:57Z
[]
https://github.com/huggingface/datasets/issues/6923
NONE
null
null
null
[]
Exported Parquet table of an audio dataset has null bytes in Arrow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions" }
I_kwDODunzps6KPZnI
null
2024-05-27T14:27:57Z
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
### Describe the bug When exporting the processed audio inside the table with the dataset.to_parquet function, the audio column is written as the pyarrow object {bytes: null, path: "Some/Path"}. At the same time, the same dataset uploaded to the Hub has the byte arrays embedded. ![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e) ![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021) ### Steps to reproduce the bug 1. Build a dataset from audio files and cast the audio column. 2. Export it locally and push it to the Hub. 3. Compare the locally saved Parquet file with the one uploaded to the Hub. ```py from datasets import Dataset, Audio df = Dataset.from_csv("./datasets.csv") df = df.cast_column("audio", Audio(16000)) df.to_parquet("./datasets.parquet") df.push_to_hub(repo_id="************", token="**********************") ``` You can use this reproduction package: [replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip) ### Expected behavior The two Parquet tables should be identical in content. ### Environment info Python 3.11+ (I also tried 3.12 and got the same result)
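A workaround sometimes suggested for this kind of mismatch is to embed the external audio files into the Arrow table before writing Parquet. This is a hedged sketch: it assumes `embed_table_storage` is available in `datasets.table` for your version, and it is not the confirmed fix for this issue:

```python
from datasets import Audio, Dataset
from datasets.table import embed_table_storage  # assumed available in datasets 2.x

df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(16000))

# Map over the underlying Arrow tables so the audio bytes get embedded
# instead of being left as {bytes: null, path: ...} references.
fmt = df.format
df = df.with_format("arrow")
df = df.map(embed_table_storage, batched=True)
df = df.with_format(**fmt)

df.to_parquet("./datasets.parquet")
```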
{ "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anioji", "id": 140120605, "login": "anioji", "node_id": "U_kgDOCFoSHQ", "organizations_url": "https://api.github.com/users/anioji/orgs", "received_events_url": "https://api.github.com/users/anioji/received_events", "repos_url": "https://api.github.com/users/anioji/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "type": "User", "url": "https://api.github.com/users/anioji" }
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
open
false
6,923
null
null
null
false
2,318,602,059
https://api.github.com/repos/huggingface/datasets/issues/6922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6922/events
[]
null
2024-05-27T09:08:19Z
[]
https://github.com/huggingface/datasets/pull/6922
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6922). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005525 / 0.011353 (-0.005828) | 0.004013 / 0.011008 (-0.006996) | 0.063931 / 0.038508 (0.025423) | 0.033857 / 0.023109 (0.010748) | 0.250910 / 0.275898 (-0.024988) | 0.278289 / 0.323480 (-0.045191) | 0.004289 / 0.007986 (-0.003697) | 0.002800 / 0.004328 (-0.001529) | 0.050127 / 0.004250 (0.045877) | 0.048901 / 0.037052 (0.011848) | 0.260628 / 0.258489 (0.002139) | 0.293904 / 0.293841 (0.000063) | 0.028339 / 0.128546 (-0.100207) | 0.010879 / 0.075646 (-0.064767) | 0.203618 / 0.419271 (-0.215654) | 0.036241 / 0.043533 (-0.007292) | 0.250481 / 0.255139 (-0.004657) | 0.274274 / 0.283200 (-0.008926) | 0.018912 / 0.141683 (-0.122771) | 1.146785 / 1.452155 (-0.305370) | 1.199795 / 1.492716 (-0.292921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095571 / 0.018006 (0.077564) | 0.302961 / 0.000490 (0.302471) | 0.000217 / 0.000200 (0.000017) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020121 / 0.037411 (-0.017290) | 0.063231 / 0.014526 (0.048705) | 0.075434 / 0.176557 (-0.101122) | 0.123994 / 0.737135 (-0.613141) | 0.076479 / 0.296338 (-0.219860) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277816 / 0.215209 (0.062607) | 2.775481 / 2.077655 (0.697826) | 1.454881 / 1.504120 (-0.049239) | 1.339055 / 1.541195 (-0.202140) | 1.347810 / 1.468490 (-0.120681) | 0.572802 / 4.584777 (-4.011975) | 2.357490 / 3.745712 (-1.388222) | 2.822548 / 5.269862 (-2.447313) | 1.746538 / 4.565676 (-2.819138) | 0.066159 / 0.424275 (-0.358116) | 0.005037 / 0.007607 (-0.002570) | 0.329256 / 0.226044 (0.103212) | 3.277511 / 2.268929 (1.008582) | 1.807855 / 55.444624 (-53.636769) | 1.505507 / 6.876477 (-5.370970) | 1.634237 / 2.142072 (-0.507835) | 0.643999 / 4.805227 (-4.161229) | 0.117494 / 6.500664 (-6.383170) | 0.042634 / 0.075469 (-0.032835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977689 / 1.841788 (-0.864098) | 12.261836 / 8.074308 (4.187528) | 9.871541 / 10.191392 (-0.319851) | 0.147293 / 0.680424 (-0.533130) | 0.015134 / 0.534201 (-0.519067) | 0.287677 / 0.579283 (-0.291606) | 0.264622 / 0.434364 (-0.169742) | 0.330511 / 0.540337 (-0.209826) | 0.467618 / 1.386936 (-0.919318) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005690 / 0.011353 (-0.005663) | 0.003801 / 0.011008 (-0.007207) | 0.051817 / 0.038508 (0.013309) | 0.033355 / 0.023109 (0.010246) | 0.264416 / 0.275898 (-0.011482) | 0.288494 / 0.323480 (-0.034986) | 0.004246 / 0.007986 (-0.003740) | 0.002814 / 0.004328 (-0.001515) | 0.050547 / 0.004250 (0.046297) | 0.042977 / 0.037052 (0.005925) | 0.276884 / 0.258489 (0.018395) | 0.303758 / 0.293841 (0.009917) | 0.029412 / 0.128546 (-0.099134) | 0.010697 / 0.075646 (-0.064949) | 0.059497 / 0.419271 (-0.359775) | 0.033670 / 0.043533 (-0.009862) | 0.261311 / 0.255139 (0.006172) | 0.286478 / 0.283200 (0.003278) | 0.019386 / 0.141683 (-0.122297) | 1.155943 / 1.452155 (-0.296211) | 1.198512 / 1.492716 (-0.294205) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092954 / 0.018006 (0.074948) | 0.294144 / 0.000490 (0.293655) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023013 / 0.037411 (-0.014398) | 0.077161 / 0.014526 (0.062635) | 0.089957 / 0.176557 (-0.086600) | 0.129305 / 0.737135 (-0.607831) | 0.091006 / 0.296338 (-0.205333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294091 / 0.215209 (0.078882) | 2.885395 / 2.077655 (0.807741) | 1.555658 / 1.504120 (0.051538) | 1.423276 / 1.541195 (-0.117919) | 1.476485 / 1.468490 (0.007995) | 0.569507 / 4.584777 (-4.015270) | 0.979221 / 3.745712 (-2.766491) | 2.818503 / 5.269862 (-2.451358) | 1.871938 / 4.565676 (-2.693739) | 0.064342 / 0.424275 (-0.359933) | 0.005495 / 0.007607 (-0.002112) | 0.351451 / 0.226044 (0.125407) | 3.516078 / 2.268929 (1.247149) | 1.928351 / 55.444624 (-53.516273) | 1.625362 / 6.876477 (-5.251115) | 1.813756 / 2.142072 (-0.328317) | 0.657642 / 4.805227 (-4.147585) | 0.117893 / 6.500664 (-6.382771) | 0.042009 / 0.075469 (-0.033460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032893 / 1.841788 (-0.808894) | 12.983400 / 8.074308 (4.909092) | 10.747204 / 10.191392 (0.555812) | 0.133163 / 0.680424 (-0.547261) | 0.015875 / 0.534201 (-0.518326) | 0.312592 / 0.579283 (-0.266691) | 0.124780 / 0.434364 (-0.309584) | 0.350735 / 0.540337 (-0.189603) | 0.447130 / 1.386936 (-0.939806) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#048c789607af0370c1f2337248897956f7a91617 \"CML watermark\")\n" ]
Remove torchaudio remnants from code
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6922/reactions" }
PR_kwDODunzps5wolTm
{ "diff_url": "https://github.com/huggingface/datasets/pull/6922.diff", "html_url": "https://github.com/huggingface/datasets/pull/6922", "merged_at": "2024-05-27T08:59:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6922.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6922" }
2024-05-27T08:45:07Z
https://api.github.com/repos/huggingface/datasets/issues/6922/comments
Remove torchaudio remnants from code. Follow-up on: - #5573
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6922/timeline
closed
false
6,922
null
2024-05-27T08:59:21Z
null
true
2,318,394,398
https://api.github.com/repos/huggingface/datasets/issues/6921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6921/events
[]
null
2024-05-27T08:07:16Z
[]
https://github.com/huggingface/datasets/pull/6921
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6921). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003752 / 0.011008 (-0.007257) | 0.064034 / 0.038508 (0.025526) | 0.031205 / 0.023109 (0.008096) | 0.248903 / 0.275898 (-0.026995) | 0.275808 / 0.323480 (-0.047671) | 0.003135 / 0.007986 (-0.004851) | 0.002635 / 0.004328 (-0.001693) | 0.049869 / 0.004250 (0.045619) | 0.047602 / 0.037052 (0.010549) | 0.259738 / 0.258489 (0.001249) | 0.296131 / 0.293841 (0.002290) | 0.027467 / 0.128546 (-0.101080) | 0.010449 / 0.075646 (-0.065197) | 0.201369 / 0.419271 (-0.217903) | 0.036317 / 0.043533 (-0.007216) | 0.244347 / 0.255139 (-0.010792) | 0.267597 / 0.283200 (-0.015602) | 0.019930 / 0.141683 (-0.121753) | 1.149012 / 1.452155 (-0.303143) | 1.188083 / 1.492716 (-0.304633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095190 / 0.018006 (0.077184) | 0.300705 / 0.000490 (0.300215) | 0.000222 / 0.000200 (0.000022) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019297 / 0.037411 (-0.018115) | 0.063183 / 0.014526 (0.048657) | 0.075094 / 0.176557 (-0.101463) | 0.123556 / 0.737135 (-0.613579) | 0.076721 / 0.296338 (-0.219618) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284136 / 0.215209 (0.068927) | 2.814041 / 2.077655 (0.736387) | 1.471038 / 1.504120 (-0.033082) | 1.344002 / 1.541195 (-0.197193) | 1.353875 / 1.468490 (-0.114615) | 0.599495 / 4.584777 (-3.985282) | 2.394491 / 3.745712 (-1.351221) | 2.781734 / 5.269862 (-2.488128) | 1.729829 / 4.565676 (-2.835848) | 0.064194 / 0.424275 (-0.360081) | 0.005022 / 0.007607 (-0.002585) | 0.343384 / 0.226044 (0.117340) | 3.357067 / 2.268929 (1.088139) | 1.816323 / 55.444624 (-53.628301) | 1.549405 / 6.876477 (-5.327072) | 1.594394 / 2.142072 (-0.547679) | 0.660650 / 4.805227 (-4.144578) | 0.120271 / 6.500664 (-6.380393) | 0.042422 / 0.075469 (-0.033047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975776 / 1.841788 (-0.866011) | 11.828093 / 8.074308 (3.753784) | 9.384164 / 10.191392 (-0.807228) | 0.140761 / 0.680424 (-0.539663) | 0.014038 / 0.534201 (-0.520163) | 0.284904 / 0.579283 (-0.294379) | 0.263430 / 0.434364 (-0.170934) | 0.320856 / 0.540337 (-0.219482) | 0.419199 / 1.386936 (-0.967737) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005672 / 0.011353 (-0.005681) | 0.003667 / 0.011008 (-0.007341) | 0.049989 / 0.038508 (0.011481) | 0.033115 / 0.023109 (0.010006) | 0.269808 / 0.275898 (-0.006090) | 0.293286 / 0.323480 (-0.030193) | 0.004238 / 0.007986 (-0.003748) | 0.002722 / 0.004328 (-0.001606) | 0.049516 / 0.004250 (0.045265) | 0.042076 / 0.037052 (0.005024) | 0.282182 / 0.258489 (0.023693) | 0.310817 / 0.293841 (0.016976) | 0.029824 / 0.128546 (-0.098722) | 0.010516 / 0.075646 (-0.065130) | 0.058223 / 0.419271 (-0.361049) | 0.033263 / 0.043533 (-0.010270) | 0.268769 / 0.255139 (0.013630) | 0.288308 / 0.283200 (0.005108) | 0.018531 / 0.141683 (-0.123151) | 1.136806 / 1.452155 (-0.315349) | 1.192636 / 1.492716 (-0.300080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096583 / 0.018006 (0.078577) | 0.303678 / 0.000490 (0.303188) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022741 / 0.037411 (-0.014670) | 0.075799 / 0.014526 (0.061273) | 0.089930 / 0.176557 (-0.086626) | 0.129093 / 0.737135 (-0.608042) | 0.089672 / 0.296338 (-0.206666) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292789 / 0.215209 (0.077580) | 2.860137 / 2.077655 (0.782483) | 1.566678 / 1.504120 (0.062558) | 1.437756 / 1.541195 (-0.103439) | 1.472347 / 1.468490 (0.003857) | 0.566814 / 4.584777 (-4.017963) | 0.963918 / 3.745712 (-2.781794) | 2.717199 / 5.269862 (-2.552663) | 1.763612 / 4.565676 (-2.802064) | 0.063601 / 0.424275 (-0.360674) | 0.005308 / 0.007607 (-0.002299) | 0.363111 / 0.226044 (0.137066) | 3.458222 / 2.268929 (1.189293) | 1.939185 / 55.444624 (-53.505440) | 1.659552 / 6.876477 (-5.216925) | 1.801006 / 2.142072 (-0.341067) | 0.648884 / 4.805227 (-4.156343) | 0.116259 / 6.500664 (-6.384405) | 0.041384 / 0.075469 (-0.034085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001594 / 1.841788 (-0.840194) | 12.371125 / 8.074308 (4.296817) | 10.489763 / 10.191392 (0.298371) | 0.132500 / 0.680424 (-0.547924) | 0.014742 / 0.534201 (-0.519459) | 0.282258 / 0.579283 (-0.297026) | 0.122755 / 0.434364 (-0.311608) | 0.346068 / 0.540337 (-0.194269) | 0.424943 / 1.386936 (-0.961994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#df445c20346a34c08e7e039e4ec1a302eef3a69c \"CML watermark\")\n" ]
Support fsspec 2024.5.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6921/reactions" }
PR_kwDODunzps5wn4Dz
{ "diff_url": "https://github.com/huggingface/datasets/pull/6921.diff", "html_url": "https://github.com/huggingface/datasets/pull/6921", "merged_at": "2024-05-27T08:01:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/6921.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6921" }
2024-05-27T07:00:59Z
https://api.github.com/repos/huggingface/datasets/issues/6921/comments
Support fsspec 2024.5.0.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6921/timeline
closed
false
6,921
null
2024-05-27T08:01:08Z
null
true
2,317,648,021
https://api.github.com/repos/huggingface/datasets/issues/6920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6920/events
[]
null
2024-05-27T09:11:17Z
[]
https://github.com/huggingface/datasets/pull/6920
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6920). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005643 / 0.011353 (-0.005710) | 0.003810 / 0.011008 (-0.007198) | 0.065896 / 0.038508 (0.027388) | 0.031692 / 0.023109 (0.008583) | 0.258297 / 0.275898 (-0.017601) | 0.294555 / 0.323480 (-0.028925) | 0.004403 / 0.007986 (-0.003583) | 0.002857 / 0.004328 (-0.001472) | 0.049848 / 0.004250 (0.045597) | 0.049719 / 0.037052 (0.012666) | 0.266393 / 0.258489 (0.007904) | 0.306214 / 0.293841 (0.012373) | 0.028283 / 0.128546 (-0.100264) | 0.010450 / 0.075646 (-0.065196) | 0.203064 / 0.419271 (-0.216208) | 0.036535 / 0.043533 (-0.006998) | 0.247839 / 0.255139 (-0.007300) | 0.270538 / 0.283200 (-0.012661) | 0.018748 / 0.141683 (-0.122935) | 1.117478 / 1.452155 (-0.334677) | 1.162575 / 1.492716 (-0.330141) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101074 / 0.018006 (0.083068) | 0.304321 / 0.000490 (0.303831) | 0.000270 / 0.000200 (0.000070) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019036 / 0.037411 (-0.018376) | 0.064496 / 0.014526 (0.049970) | 0.076848 / 0.176557 (-0.099709) | 0.122979 / 0.737135 (-0.614156) | 0.078008 / 0.296338 (-0.218330) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287009 / 0.215209 (0.071800) | 2.839084 / 2.077655 (0.761429) | 1.495977 / 1.504120 (-0.008143) | 1.379147 / 1.541195 (-0.162047) | 1.413170 / 1.468490 (-0.055320) | 0.616408 / 4.584777 (-3.968369) | 2.419183 / 3.745712 (-1.326529) | 2.905720 / 5.269862 (-2.364142) | 1.801634 / 4.565676 (-2.764043) | 0.064034 / 0.424275 (-0.360241) | 0.005098 / 0.007607 (-0.002509) | 0.341732 / 0.226044 (0.115688) | 3.365262 / 2.268929 (1.096334) | 1.844335 / 55.444624 (-53.600289) | 1.561450 / 6.876477 (-5.315027) | 1.646254 / 2.142072 (-0.495819) | 0.654993 / 4.805227 (-4.150234) | 0.119837 / 6.500664 (-6.380827) | 0.043375 / 0.075469 (-0.032094) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000352 / 1.841788 (-0.841435) | 12.765122 / 8.074308 (4.690813) | 9.818879 / 10.191392 (-0.372513) | 0.133986 / 0.680424 (-0.546438) | 0.014065 / 0.534201 (-0.520136) | 0.295859 / 0.579283 (-0.283424) | 0.268497 / 0.434364 (-0.165867) | 0.330909 / 0.540337 (-0.209429) | 0.449218 / 1.386936 (-0.937718) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005646 / 0.011353 (-0.005707) | 0.003926 / 0.011008 (-0.007082) | 0.050437 / 0.038508 (0.011929) | 0.031828 / 0.023109 (0.008719) | 0.268218 / 0.275898 (-0.007680) | 0.292987 / 0.323480 (-0.030493) | 0.004353 / 0.007986 (-0.003633) | 0.002933 / 0.004328 (-0.001395) | 0.050357 / 0.004250 (0.046107) | 0.042988 / 0.037052 (0.005935) | 0.281627 / 0.258489 (0.023138) | 0.305664 / 0.293841 (0.011824) | 0.030162 / 0.128546 (-0.098385) | 0.010856 / 0.075646 (-0.064790) | 0.059528 / 0.419271 (-0.359744) | 0.033800 / 0.043533 (-0.009733) | 0.268200 / 0.255139 (0.013061) | 0.284982 / 0.283200 (0.001782) | 0.019105 / 0.141683 (-0.122578) | 1.171714 / 1.452155 (-0.280441) | 1.205690 / 1.492716 (-0.287026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100979 / 0.018006 (0.082973) | 0.314691 / 0.000490 (0.314201) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023816 / 0.037411 (-0.013596) | 0.081749 / 0.014526 (0.067223) | 0.090118 / 0.176557 (-0.086438) | 0.131615 / 0.737135 (-0.605520) | 0.091821 / 0.296338 (-0.204517) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301222 / 0.215209 (0.086013) | 2.835310 / 2.077655 (0.757655) | 1.562396 / 1.504120 (0.058276) | 1.432365 / 1.541195 (-0.108830) | 1.468358 / 1.468490 (-0.000132) | 0.561300 / 4.584777 (-4.023477) | 0.962294 / 3.745712 (-2.783419) | 2.799705 / 5.269862 (-2.470157) | 1.803035 / 4.565676 (-2.762642) | 0.064104 / 0.424275 (-0.360171) | 0.005480 / 0.007607 (-0.002127) | 0.342519 / 0.226044 (0.116475) | 3.406286 / 2.268929 (1.137357) | 1.966962 / 55.444624 (-53.477663) | 1.654664 / 6.876477 (-5.221813) | 1.829303 / 2.142072 (-0.312769) | 0.650932 / 4.805227 (-4.154295) | 0.119211 / 6.500664 (-6.381453) | 0.043739 / 0.075469 (-0.031730) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006657 / 1.841788 (-0.835130) | 12.915348 / 8.074308 (4.841040) | 10.808156 / 10.191392 (0.616764) | 0.132664 / 0.680424 (-0.547760) | 0.015574 / 0.534201 (-0.518627) | 0.284525 / 0.579283 (-0.294758) | 0.122322 / 0.434364 (-0.312042) | 0.326826 / 0.540337 (-0.213511) | 0.416593 / 1.386936 (-0.970343) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15ffefe5be194790a50af88ae1236a51b0ac95e6 \"CML watermark\")\n" ]
[WebDataset] Add `.pth` support for torch tensors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6920/reactions" }
PR_kwDODunzps5wlchX
{ "diff_url": "https://github.com/huggingface/datasets/pull/6920.diff", "html_url": "https://github.com/huggingface/datasets/pull/6920", "merged_at": "2024-05-27T09:04:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/6920.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6920" }
2024-05-26T11:12:07Z
https://api.github.com/repos/huggingface/datasets/issues/6920/comments
In this PR I add support for `.pth` files, but with `weights_only=True` to disallow the use of pickle.
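A minimal sketch of what such a decoder could look like (the helper name is hypothetical; the PR's actual implementation may differ):

```python
import io

import torch

# Hypothetical decoder sketch: deserialize a .pth member of a WebDataset
# tar with weights_only=True, so arbitrary pickle payloads are refused.
def torch_loads(data: bytes):
    return torch.load(io.BytesIO(data), weights_only=True)

# Usage sketch on raw bytes read from a tar member:
# tensor = torch_loads(open("sample.pth", "rb").read())
```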
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6920/timeline
closed
false
6,920
null
2024-05-27T09:04:54Z
null
true
2,315,618,993
https://api.github.com/repos/huggingface/datasets/issues/6919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6919/events
[]
null
2024-05-24T14:59:45Z
[]
https://github.com/huggingface/datasets/issues/6919
NONE
null
null
null
[]
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions" }
I_kwDODunzps6KBYqx
null
2024-05-24T14:59:45Z
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
### Describe the bug I wrote a notebook to load an existing dataset, process it, and upload as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with: ``` ValueError: Invalid metadata in README.md. - Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11) 47 | - 4 48 | - 4 49 | - 8 50 | - !!binary | ----------------^ 51 | TwAAAA== 52 | '1': !!python/object/apply:nump ... ``` My dataset has a `train` and `validation` dataset. These are the features: ``` {'c1': Value(dtype='string', id=None), 'c2': Value(dtype='string', id=None), 'c3': [{'value': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}], 'c4': Value(dtype='string', id=None), 'c5': Value(dtype='string', id=None), 'c6': Value(dtype='string', id=None), 'c7': Value(dtype='string', id=None), 'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None), 'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} ``` This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with: ``` ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels)))) ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels)))) ``` ### Steps to reproduce the bug 1. Start with any token classification dataset. 2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`. 3. Cast the label column from `Sequence` to `Sequence(ClassLabel(...))` with: ``` labels = ['O', 'B-TEST', 'I-TEST'] ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels))) ``` 4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")` ### Expected behavior I expected `push_to_hub` to successfully push my dataset to the hub without error. ### Environment info Python 3.11.9 datasets==2.19.1 transformers==4.41.1 PyYAML==6.0.1
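A self-contained repro sketch based on the steps above (the toy data and repo id are made up for illustration):

```python
from datasets import ClassLabel, Dataset, Sequence

ds = Dataset.from_dict({"tokens": [["a", "b", "c"]], "labels": [[0, 1, 2]]})
# Casting to Sequence(ClassLabel(...)) is what precedes the failure: the
# README metadata then contains python/numpy tags that are not valid YAML.
ds = ds.cast_column("labels", Sequence(ClassLabel(names=["O", "B-TEST", "I-TEST"])))
ds.push_to_hub("me/awesome-stuff-dataset")  # fails: unknown tag !<tag:yaml.org,2002:python/tuple>
```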
{ "avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4", "events_url": "https://api.github.com/users/juanqui/events{/privacy}", "followers_url": "https://api.github.com/users/juanqui/followers", "following_url": "https://api.github.com/users/juanqui/following{/other_user}", "gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/juanqui", "id": 67964, "login": "juanqui", "node_id": "MDQ6VXNlcjY3OTY0", "organizations_url": "https://api.github.com/users/juanqui/orgs", "received_events_url": "https://api.github.com/users/juanqui/received_events", "repos_url": "https://api.github.com/users/juanqui/repos", "site_admin": false, "starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanqui/subscriptions", "type": "User", "url": "https://api.github.com/users/juanqui" }
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
open
false
6,919
null
null
null
false
2,315,322,738
https://api.github.com/repos/huggingface/datasets/issues/6918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6918/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-05-31T17:10:38Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6918
NONE
completed
null
null
[ "Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.", "I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714" ]
NonMatchingSplitsSizesError when using data_dir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions" }
I_kwDODunzps6KAQVy
null
2024-05-24T12:43:39Z
https://api.github.com/repos/huggingface/datasets/issues/6918/comments
### Describe the bug Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset. This appears to happen because the expected split is calculated based on the data in all the directories, whereas the recorded split is calculated based on the data in the directory specified by the data_dir argument. This is recent behavior: until the past few weeks, loading with the data_dir argument worked without any issue. ### Steps to reproduce the bug Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp The dataset contains two directories, "data1" and "data2", each with a file called "train.parquet" with a 2 x 5 table. from datasets import load_dataset dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1") Generates: --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) Cell In[3], line 2 1 from datasets import load_dataset ----> 2 dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1") File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2606 return builder_instance.as_streaming_dataset(split=split) 2608 # Download and prepare data -> 2609 builder_instance.download_and_prepare( 2610 download_config=download_config, 2611 download_mode=download_mode, 2612 verification_mode=verification_mode, 2613 num_proc=num_proc, 2614 storage_options=storage_options, 2615 ) 2617 # Build dataset for splits 2618 keep_in_memory = ( 2619 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2620 ) File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 1025 if num_proc is not None: 1026 prepare_split_kwargs["num_proc"] = num_proc -> 1027 self._download_and_prepare( 1028 dl_manager=dl_manager, 1029 verification_mode=verification_mode, 1030 **prepare_split_kwargs, 1031 **download_and_prepare_kwargs, 1032 ) 1033 # Sync info 1034 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1137 dl_manager.manage_extracted_files() 1139 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> 1140 verify_splits(self.info.splits, split_dict) 1142 # Update the info object with the splits. 1143 self.info.splits = split_dict File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits) 95 bad_splits = [ 96 {"expected": expected_splits[name], "recorded": recorded_splits[name]} 97 for name in expected_splits 98 if expected_splits[name].num_examples != recorded_splits[name].num_examples 99 ] 100 if len(bad_splits) > 0: --> 101 raise NonMatchingSplitsSizesError(str(bad_splits)) 102 logger.info("All the splits matched successfully.") NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}] __________ By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message: from datasets import load_dataset dataset = load_dataset("srehaag/hf-bug-temp") ### Expected behavior Should load the 5 x 2 table from data1/train.parquet without an error message. ### Environment info Used Codespaces to simplify the environment (see details below), but the bug is present across various configurations. - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
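A hedged workaround sketch for the report above: the `load_dataset` signature in the traceback shows a `verification_mode` parameter, so skipping the (miscomputed) split-size check should let the load complete. This is a sketch assuming a `datasets` version that accepts the string `"no_checks"`; it sidesteps the error rather than fixing the underlying split computation:

```python
from datasets import load_dataset

# Workaround sketch: disable split-size verification so the miscomputed
# "expected" split count does not raise NonMatchingSplitsSizesError.
dataset = load_dataset(
    "srehaag/hf-bug-temp",
    data_dir="data1",
    verification_mode="no_checks",
)
```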
{ "avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4", "events_url": "https://api.github.com/users/srehaag/events{/privacy}", "followers_url": "https://api.github.com/users/srehaag/followers", "following_url": "https://api.github.com/users/srehaag/following{/other_user}", "gists_url": "https://api.github.com/users/srehaag/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/srehaag", "id": 86664538, "login": "srehaag", "node_id": "MDQ6VXNlcjg2NjY0NTM4", "organizations_url": "https://api.github.com/users/srehaag/orgs", "received_events_url": "https://api.github.com/users/srehaag/received_events", "repos_url": "https://api.github.com/users/srehaag/repos", "site_admin": false, "starred_url": "https://api.github.com/users/srehaag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srehaag/subscriptions", "type": "User", "url": "https://api.github.com/users/srehaag" }
https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6918/timeline
closed
false
6,918
null
2024-05-31T17:10:38Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,314,683,663
https://api.github.com/repos/huggingface/datasets/issues/6917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6917/events
[]
null
2024-05-24T07:54:51Z
[]
https://github.com/huggingface/datasets/issues/6917
NONE
null
null
null
[]
WinError 32 The process cannot access the file during load_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions" }
I_kwDODunzps6J90UP
null
2024-05-24T07:54:51Z
https://api.github.com/repos/huggingface/datasets/issues/6917/comments
### Describe the bug When I try to load opus_books from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation)) ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` I get an error: `PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ` <details><summary>Full stacktrace</summary> <p> ```python AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1857 _time = time.time() -> 1858 for _, table in generator: 1859 if max_shard_size is not None and writer._num_bytes > max_shard_size: File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files) 58 def _generate_tables(self, files): ---> 59 schema = self.config.features.arrow_schema if self.config.features is not None else None 60 if self.config.features is not None and self.config.columns is not None: AttributeError: 'list' object has no attribute 'arrow_schema' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1881 num_shards = shard_id + 1 -> 1882 num_examples, num_bytes = writer.finalize() 1883 writer.close() File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream) 583 # If schema is known, infer features even if no examples were written --> 584 if self.pa_writer is None and self.schema: ... --> 627 os.unlink(fullname) 628 except OSError: 629 onerror(os.unlink, fullname, sys.exc_info()) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ``` </p> </details> ### Steps to reproduce the bug Steps to reproduce: just execute these lines ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` ### Expected behavior I expect the dataset to be loaded without any errors. ### Environment info | Package| Version| |--------|--------| | transformers| 4.37.2| | python| 3.9.19| | pytorch| 2.3.0| | datasets|2.12.0 | | arrow | 1.2.3| I am using Conda on Windows 11.
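A hedged reading of the stacktrace above: the `'list' object has no attribute 'arrow_schema'` error suggests `features` expects a `datasets.Features` object rather than a list of column names, and the `WinError 32` is only the cleanup failing afterwards. A minimal sketch of the corrected call — the `Translation` typing is an assumed schema for the en-fr config, not taken from the original report:

```python
from datasets import load_dataset, Features, Value, Translation

# Sketch: pass a Features object instead of a list of column names.
features = Features({
    "id": Value("string"),
    "translation": Translation(languages=["en", "fr"]),  # assumed schema
})
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=features)
```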
{ "avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4", "events_url": "https://api.github.com/users/elwe-2808/events{/privacy}", "followers_url": "https://api.github.com/users/elwe-2808/followers", "following_url": "https://api.github.com/users/elwe-2808/following{/other_user}", "gists_url": "https://api.github.com/users/elwe-2808/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elwe-2808", "id": 56682168, "login": "elwe-2808", "node_id": "MDQ6VXNlcjU2NjgyMTY4", "organizations_url": "https://api.github.com/users/elwe-2808/orgs", "received_events_url": "https://api.github.com/users/elwe-2808/received_events", "repos_url": "https://api.github.com/users/elwe-2808/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elwe-2808/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elwe-2808/subscriptions", "type": "User", "url": "https://api.github.com/users/elwe-2808" }
https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6917/timeline
open
false
6,917
null
null
null
false
2,311,675,564
https://api.github.com/repos/huggingface/datasets/issues/6916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6916/events
[]
null
2024-05-23T00:07:53Z
[]
https://github.com/huggingface/datasets/issues/6916
NONE
completed
null
null
[]
```push_to_hub()``` - Prevent Automatic Generation of Splits
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions" }
I_kwDODunzps6JyV6s
null
2024-05-22T23:52:15Z
https://api.github.com/repos/huggingface/datasets/issues/6916/comments
### Describe the bug I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a training and a test set. How can I prevent this split from happening? ### Steps to reproduce the bug 1. Have an unsplit dataset ```python Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 }) ``` 2. Push it to Hugging Face ```python dataset.push_to_hub(dataset_name) ``` 3. On the Hugging Face dataset repo, the dataset then appears to be split: ![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09) 4. Indeed, when loading the dataset from this repo, the dataset is split into a training and a test set. ```python from datasets import load_dataset, Dataset dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True) dataset ``` output: ``` IterableDatasetDict({ train: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 2 }) test: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 1 }) }) ``` ### Expected behavior The dataset should not be split, since no split was requested. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
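One hedged way to make the target split explicit at upload time — a sketch assuming a `datasets` version whose `Dataset.push_to_hub` accepts a `split` argument (the toy columns mirror the report; this does not by itself delete a pre-existing split in the repo):

```python
from datasets import Dataset

# Minimal sketch with toy data: name the single split explicitly so the
# upload targets "train" rather than relying on defaults.
dataset = Dataset.from_dict({"input": ["a"], "output": ["b"], "Attack": ["c"]})
dataset.push_to_hub("Jetlime/NF-CSE-CIC-IDS2018-v2", split="train")
```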
{ "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jetlime", "id": 29337128, "login": "jetlime", "node_id": "MDQ6VXNlcjI5MzM3MTI4", "organizations_url": "https://api.github.com/users/jetlime/orgs", "received_events_url": "https://api.github.com/users/jetlime/received_events", "repos_url": "https://api.github.com/users/jetlime/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "type": "User", "url": "https://api.github.com/users/jetlime" }
https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6916/timeline
closed
false
6,916
null
2024-05-23T00:07:53Z
null
false
2,310,564,961
https://api.github.com/repos/huggingface/datasets/issues/6915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6915/events
[]
null
2024-06-06T09:32:10Z
[]
https://github.com/huggingface/datasets/pull/6915
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6915). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I pushed a change that fixes 2.15 cache reloading (I fixed the packaged module hash), feel free to merge if this change is fine for you", "Something weird happened in GitHub: I just merged this PR to main, See: https://github.com/huggingface/datasets/commit/5bbbf1b19766e31a6905f3e82bf3aa3f9f84a982\r\n\r\nHowever this PR still appears as Open...\r\n\r\nIf I retry to merge this PR, an error appears: \"Merge attempt failed: Merge already in progress\"\r\n![Screenshot from 2024-06-06 06-29-22](https://github.com/huggingface/datasets/assets/8515462/5fe87442-cc5d-4e9b-b60e-fdfbab830c81)\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005543 / 0.011353 (-0.005810) | 0.004059 / 0.011008 (-0.006949) | 0.064678 / 0.038508 (0.026170) | 0.032615 / 0.023109 (0.009506) | 0.245883 / 0.275898 (-0.030015) | 0.273545 / 0.323480 (-0.049935) | 0.004268 / 0.007986 (-0.003718) | 0.003160 / 0.004328 (-0.001168) | 0.051982 / 0.004250 (0.047731) | 0.051186 / 0.037052 (0.014134) | 0.254009 / 0.258489 (-0.004480) | 0.289594 / 0.293841 (-0.004247) | 0.028459 / 0.128546 (-0.100087) | 0.011061 / 0.075646 (-0.064585) | 0.203571 / 0.419271 (-0.215700) | 0.038049 / 0.043533 (-0.005484) | 0.243700 / 0.255139 (-0.011439) | 0.264816 / 0.283200 (-0.018383) | 0.019556 / 0.141683 (-0.122127) | 1.114395 / 1.452155 (-0.337759) | 1.168915 / 1.492716 (-0.323802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098814 / 0.018006 (0.080808) | 0.308218 / 0.000490 (0.307728) | 0.000221 / 0.000200 (0.000022) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019660 / 0.037411 (-0.017752) | 0.070542 / 0.014526 (0.056017) | 0.078906 / 0.176557 (-0.097650) | 0.126658 / 0.737135 (-0.610477) | 0.080427 / 0.296338 (-0.215911) |\n\n### Benchmark: benchmark_iterating.json\n\n| 
metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280686 / 0.215209 (0.065477) | 2.767480 / 2.077655 (0.689825) | 1.455325 / 1.504120 (-0.048795) | 1.336677 / 1.541195 (-0.204518) | 1.380359 / 1.468490 (-0.088131) | 0.576310 / 4.584777 (-4.008467) | 2.431829 / 3.745712 (-1.313883) | 2.815266 / 5.269862 (-2.454595) | 1.908962 / 4.565676 (-2.656714) | 0.065306 / 0.424275 (-0.358969) | 0.005229 / 0.007607 (-0.002378) | 0.336018 / 0.226044 (0.109973) | 3.349283 / 2.268929 (1.080355) | 1.814696 / 55.444624 (-53.629929) | 1.520969 / 6.876477 (-5.355508) | 1.735322 / 2.142072 (-0.406751) | 0.661513 / 4.805227 (-4.143714) | 0.121465 / 6.500664 (-6.379199) | 0.044505 / 0.075469 (-0.030964) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989204 / 1.841788 (-0.852584) | 12.608414 / 8.074308 (4.534106) | 10.133358 / 10.191392 (-0.058034) | 0.133986 / 0.680424 (-0.546438) | 0.014332 / 0.534201 (-0.519869) | 0.293207 / 0.579283 (-0.286076) | 0.265657 / 0.434364 (-0.168707) | 0.325972 / 0.540337 (-0.214365) | 0.478103 / 1.386936 (-0.908833) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006070 / 0.011353 (-0.005283) | 0.004122 / 0.011008 (-0.006886) | 0.050572 / 0.038508 (0.012064) | 0.033732 / 0.023109 (0.010623) | 0.271282 / 0.275898 (-0.004616) | 0.296247 / 0.323480 (-0.027233) | 0.004400 / 0.007986 (-0.003585) | 0.002914 / 0.004328 (-0.001415) | 0.049332 / 0.004250 (0.045082) | 0.042213 / 0.037052 
(0.005161) | 0.281230 / 0.258489 (0.022741) | 0.315514 / 0.293841 (0.021673) | 0.030864 / 0.128546 (-0.097682) | 0.011185 / 0.075646 (-0.064461) | 0.059227 / 0.419271 (-0.360044) | 0.034006 / 0.043533 (-0.009527) | 0.270059 / 0.255139 (0.014920) | 0.284014 / 0.283200 (0.000814) | 0.019502 / 0.141683 (-0.122181) | 1.143650 / 1.452155 (-0.308505) | 1.190968 / 1.492716 (-0.301749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100502 / 0.018006 (0.082496) | 0.307863 / 0.000490 (0.307373) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023442 / 0.037411 (-0.013969) | 0.080185 / 0.014526 (0.065659) | 0.089372 / 0.176557 (-0.087185) | 0.131030 / 0.737135 (-0.606105) | 0.091174 / 0.296338 (-0.205165) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304187 / 0.215209 (0.088978) | 3.043055 / 2.077655 (0.965400) | 1.629578 / 1.504120 (0.125459) | 1.533762 / 1.541195 (-0.007432) | 1.546134 / 1.468490 (0.077643) | 0.577739 / 4.584777 (-4.007038) | 0.986310 / 3.745712 (-2.759402) | 2.791650 / 5.269862 (-2.478212) | 1.841190 / 4.565676 (-2.724487) | 0.064943 / 0.424275 (-0.359333) | 0.005251 / 0.007607 (-0.002356) | 0.355009 / 0.226044 (0.128965) | 3.560935 / 2.268929 (1.292007) | 1.991995 / 55.444624 (-53.452629) | 1.708796 / 6.876477 (-5.167681) | 1.917721 / 2.142072 (-0.224351) | 0.667667 / 4.805227 (-4.137561) | 0.119956 / 6.500664 (-6.380708) | 0.042069 / 0.075469 (-0.033400) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006242 / 1.841788 (-0.835546) | 13.321644 / 8.074308 (5.247336) | 10.712409 / 10.191392 (0.521017) | 0.134036 / 0.680424 (-0.546388) | 0.017645 / 0.534201 (-0.516555) | 0.289077 / 0.579283 (-0.290206) | 0.131356 / 0.434364 (-0.303007) | 0.333062 / 0.540337 (-0.207275) | 0.425327 / 1.386936 (-0.961609) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09ebf5190afbd017f3ca24ef444be2d933411eed \"CML watermark\")\n", "Indeed, the merge commit is: https://github.com/huggingface/datasets/commit/5bbbf1b19766e31a6905f3e82bf3aa3f9f84a982\r\n\r\nThe following commit is just empty: https://github.com/huggingface/datasets/commit/09ebf5190afbd017f3ca24ef444be2d933411eed" ]
Validate config name and data_files in packaged modules
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6915/reactions" }
PR_kwDODunzps5wNIUh
{ "diff_url": "https://github.com/huggingface/datasets/pull/6915.diff", "html_url": "https://github.com/huggingface/datasets/pull/6915", "merged_at": "2024-06-06T09:24:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6915.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6915" }
2024-05-22T13:36:33Z
https://api.github.com/repos/huggingface/datasets/issues/6915/comments
Validate the config attributes `name` and `data_files` in packaged modules by making the derived config classes call their parent `__post_init__` method. Note that the parent `BuilderConfig` already validates its `name` and `data_files` attributes in its own `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/builder.py#L128-L137
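A minimal sketch of the pattern this PR applies, with a hypothetical config class (`MyParquetConfig` and its `columns` field are stand-ins for the real packaged-module configs, not the actual diff; assumes a `datasets` version where `BuilderConfig` defines `__post_init__`):

```python
from dataclasses import dataclass
from typing import List, Optional

from datasets import BuilderConfig

@dataclass
class MyParquetConfig(BuilderConfig):
    """Hypothetical stand-in for a packaged-module config."""
    columns: Optional[List[str]] = None

    def __post_init__(self):
        # Delegate to BuilderConfig.__post_init__, which validates the
        # inherited `name` and `data_files` attributes.
        super().__post_init__()

config = MyParquetConfig(name="default")  # passes the parent's name validation
```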
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6915/timeline
closed
false
6,915
null
2024-06-06T09:24:35Z
null
true
2,310,107,326
https://api.github.com/repos/huggingface/datasets/issues/6914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6914/events
[]
null
2024-05-29T13:18:47Z
[]
https://github.com/huggingface/datasets/pull/6914
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6914). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005492 / 0.011353 (-0.005861) | 0.004087 / 0.011008 (-0.006921) | 0.065334 / 0.038508 (0.026826) | 0.032282 / 0.023109 (0.009173) | 0.246441 / 0.275898 (-0.029457) | 0.278807 / 0.323480 (-0.044673) | 0.003245 / 0.007986 (-0.004741) | 0.003795 / 0.004328 (-0.000534) | 0.050082 / 0.004250 (0.045832) | 0.050613 / 0.037052 (0.013561) | 0.258885 / 0.258489 (0.000396) | 0.297257 / 0.293841 (0.003416) | 0.028847 / 0.128546 (-0.099699) | 0.011377 / 0.075646 (-0.064270) | 0.206089 / 0.419271 (-0.213182) | 0.037354 / 0.043533 (-0.006178) | 0.257319 / 0.255139 (0.002180) | 0.275134 / 0.283200 (-0.008066) | 0.018064 / 0.141683 (-0.123619) | 1.112371 / 1.452155 (-0.339783) | 1.160909 / 1.492716 (-0.331807) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101893 / 0.018006 (0.083887) | 0.311084 / 0.000490 (0.310594) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019548 / 0.037411 (-0.017863) | 0.064396 / 0.014526 (0.049870) | 0.074900 / 0.176557 (-0.101656) | 0.122750 / 0.737135 (-0.614385) | 0.076693 / 0.296338 (-0.219646) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288609 / 0.215209 (0.073400) | 2.831354 / 2.077655 (0.753699) | 1.453961 / 1.504120 (-0.050159) | 1.327702 / 1.541195 (-0.213493) | 1.382140 / 1.468490 (-0.086351) | 0.568465 / 4.584777 (-4.016312) | 2.427199 / 3.745712 (-1.318513) | 2.810586 / 5.269862 (-2.459275) | 1.839227 / 4.565676 (-2.726449) | 0.063219 / 0.424275 (-0.361056) | 0.005111 / 0.007607 (-0.002496) | 0.341447 / 0.226044 (0.115403) | 3.357429 / 2.268929 (1.088501) | 1.806501 / 55.444624 (-53.638123) | 1.541696 / 6.876477 (-5.334781) | 1.755400 / 2.142072 (-0.386673) | 0.661442 / 4.805227 (-4.143785) | 0.120203 / 6.500664 (-6.380461) | 0.044429 / 0.075469 (-0.031040) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987810 / 1.841788 (-0.853978) | 12.765467 / 8.074308 (4.691159) | 10.497788 / 10.191392 (0.306396) | 0.132723 / 0.680424 (-0.547701) | 0.014484 / 0.534201 (-0.519717) | 0.285763 / 0.579283 (-0.293520) | 0.264377 / 0.434364 (-0.169987) | 0.326971 / 0.540337 (-0.213367) | 0.429432 / 1.386936 (-0.957504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005996 / 0.011353 (-0.005357) | 0.004092 / 0.011008 (-0.006916) | 0.051660 / 0.038508 (0.013152) | 0.036661 / 0.023109 (0.013552) | 0.271133 / 0.275898 (-0.004765) | 0.295728 / 0.323480 (-0.027752) | 0.004452 / 0.007986 (-0.003534) | 0.002915 / 0.004328 (-0.001413) | 0.050669 / 0.004250 (0.046418) | 0.044431 / 0.037052 (0.007378) | 0.284683 / 0.258489 (0.026194) | 0.318799 / 0.293841 (0.024958) | 0.031094 / 0.128546 (-0.097452) | 0.010810 / 0.075646 (-0.064836) | 0.059740 / 0.419271 (-0.359531) | 0.034912 / 0.043533 (-0.008621) | 0.268779 / 0.255139 (0.013640) | 0.291294 / 0.283200 (0.008095) | 0.019769 / 0.141683 (-0.121914) | 1.124833 / 1.452155 (-0.327322) | 1.168301 / 1.492716 (-0.324416) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097080 / 0.018006 (0.079074) | 0.304636 / 0.000490 (0.304146) | 0.000232 / 0.000200 (0.000032) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023186 / 0.037411 (-0.014225) | 0.082232 / 0.014526 (0.067706) | 0.089427 / 0.176557 (-0.087130) | 0.132715 / 0.737135 (-0.604421) | 0.092820 / 0.296338 (-0.203518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300672 / 0.215209 (0.085463) | 2.969603 / 2.077655 (0.891948) | 1.577827 / 1.504120 (0.073707) | 1.440768 / 1.541195 (-0.100427) | 1.494526 / 1.468490 (0.026035) | 0.574599 / 4.584777 (-4.010178) | 0.963300 / 3.745712 (-2.782412) | 2.847854 / 5.269862 (-2.422008) | 1.841248 / 4.565676 (-2.724428) | 0.062321 / 0.424275 (-0.361954) | 0.005389 / 0.007607 (-0.002218) | 0.350853 / 0.226044 (0.124808) | 3.463514 / 2.268929 (1.194586) | 1.937661 / 55.444624 (-53.506964) | 1.665320 / 6.876477 (-5.211157) | 1.849028 / 2.142072 (-0.293044) | 0.655333 / 4.805227 (-4.149894) | 0.119062 / 6.500664 (-6.381602) | 0.043387 / 0.075469 (-0.032082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004118 / 1.841788 (-0.837670) | 13.350894 / 8.074308 (5.276585) | 11.179363 / 10.191392 (0.987971) | 0.135169 / 0.680424 (-0.545255) | 0.016298 / 0.534201 (-0.517903) | 0.288467 / 0.579283 (-0.290816) | 0.132712 / 0.434364 (-0.301651) | 0.325436 / 0.540337 (-0.214901) | 0.413406 / 1.386936 (-0.973530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#670e1cf31606f397ae0f858b568b1b4ed50c1843 \"CML watermark\")\n" ]
Preserve JSON column order and support list of strings field
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6914/reactions" }
PR_kwDODunzps5wLi3e
{ "diff_url": "https://github.com/huggingface/datasets/pull/6914.diff", "html_url": "https://github.com/huggingface/datasets/pull/6914", "merged_at": "2024-05-29T13:12:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/6914.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6914" }
2024-05-22T09:58:54Z
https://api.github.com/repos/huggingface/datasets/issues/6914/comments
Preserve column order when loading from a JSON file with a list of dicts (or with a field containing a list of dicts). Additionally, support JSON files with a list-of-strings field. Fix #6913.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6914/timeline
closed
false
6,914
null
2024-05-29T13:12:23Z
null
true
2,309,605,889
https://api.github.com/repos/huggingface/datasets/issues/6913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6913/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-05-29T13:12:24Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6913
MEMBER
completed
null
null
[]
Column order is nondeterministic when loading from JSON
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions" }
I_kwDODunzps6JqcoB
null
2024-05-22T05:30:14Z
https://api.github.com/repos/huggingface/datasets/issues/6913/comments
As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects. For example, when loading a JSON file with a list of objects, each with the following ordered keys: - [ID, Language, Topic], the resulting dataset may have columns: - [ID, Topic, Language], or - [Topic, Language, ID], or - [Topic, ID, Language],... This issue is caused by the use of a Python set (which does not preserve the order): https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168 introduced in - #5772
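To illustrate the failure mode and an order-preserving alternative (a sketch, not the actual patch shipped for this issue):

```python
rows = [
    {"ID": 1, "Language": "en", "Topic": "nlp"},
    {"ID": 2, "Language": "fr", "Topic": "vision"},
]

# set() gives nondeterministic iteration order for str keys across runs
# (hash randomization), which is what scrambles the columns:
scrambled = set(key for row in rows for key in row)

# dict.fromkeys deduplicates while keeping first-seen order (Python 3.7+):
ordered = list(dict.fromkeys(key for row in rows for key in row))
print(ordered)  # ['ID', 'Language', 'Topic']
```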
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6913/timeline
closed
false
6,913
null
2024-05-29T13:12:24Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,309,365,961
https://api.github.com/repos/huggingface/datasets/issues/6912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6912/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2024-06-03T14:40:10Z
[]
https://github.com/huggingface/datasets/issues/6912
NONE
null
null
null
[ "@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?", "Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)", "> Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n> \r\n> Then your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)\r\n\r\nThe dataset is several TB in total, which I do not have the resources to handle." ]
Add MedImg for streaming
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions" }
I_kwDODunzps6JpiDJ
null
2024-05-22T00:55:30Z
https://api.github.com/repos/huggingface/datasets/issues/6912/comments
### Feature request Host the MedImg dataset (similar to ImageNet but for biomedical images). ### Motivation There is a clear need for biomedical image foundation models and large-scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community. ### Your contribution MedImg can be found [here](https://www.cuilab.cn/medimg/#).
{ "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhallee", "id": 72926928, "login": "lhallee", "node_id": "MDQ6VXNlcjcyOTI2OTI4", "organizations_url": "https://api.github.com/users/lhallee/orgs", "received_events_url": "https://api.github.com/users/lhallee/received_events", "repos_url": "https://api.github.com/users/lhallee/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "type": "User", "url": "https://api.github.com/users/lhallee" }
https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6912/timeline
open
false
6,912
null
null
null
false
2,308,152,711
https://api.github.com/repos/huggingface/datasets/issues/6911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6911/events
[]
null
2024-05-23T08:05:58Z
[]
https://github.com/huggingface/datasets/pull/6911
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6911). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005136 / 0.011353 (-0.006217) | 0.003136 / 0.011008 (-0.007872) | 0.063752 / 0.038508 (0.025244) | 0.031060 / 0.023109 (0.007950) | 0.249848 / 0.275898 (-0.026050) | 0.275918 / 0.323480 (-0.047561) | 0.004047 / 0.007986 (-0.003938) | 0.002696 / 0.004328 (-0.001632) | 0.049884 / 0.004250 (0.045634) | 0.044646 / 0.037052 (0.007593) | 0.264769 / 0.258489 (0.006280) | 0.299874 / 0.293841 (0.006033) | 0.027530 / 0.128546 (-0.101016) | 0.010026 / 0.075646 (-0.065620) | 0.204007 / 0.419271 (-0.215265) | 0.035982 / 0.043533 (-0.007550) | 0.253560 / 0.255139 (-0.001579) | 0.276206 / 0.283200 (-0.006993) | 0.017770 / 0.141683 (-0.123913) | 1.156008 / 1.452155 (-0.296146) | 1.197265 / 1.492716 (-0.295451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092960 / 0.018006 (0.074954) | 0.302876 / 0.000490 (0.302386) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019060 / 0.037411 (-0.018351) | 0.062262 / 0.014526 (0.047737) | 0.073836 / 0.176557 (-0.102721) | 0.122327 / 0.737135 (-0.614809) | 0.076050 / 0.296338 (-0.220289) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282489 / 0.215209 (0.067280) | 2.745084 / 2.077655 (0.667429) | 1.453044 / 1.504120 (-0.051076) | 1.339065 / 1.541195 (-0.202130) | 1.341395 / 1.468490 (-0.127095) | 0.586497 / 4.584777 (-3.998280) | 2.342198 / 3.745712 (-1.403514) | 2.684984 / 5.269862 (-2.584878) | 1.703738 / 4.565676 (-2.861939) | 0.062489 / 0.424275 (-0.361786) | 0.004906 / 0.007607 (-0.002701) | 0.332325 / 0.226044 (0.106280) | 3.255381 / 2.268929 (0.986452) | 1.797045 / 55.444624 (-53.647579) | 1.515197 / 6.876477 (-5.361280) | 1.508317 / 2.142072 (-0.633756) | 0.635973 / 4.805227 (-4.169254) | 0.117292 / 6.500664 (-6.383372) | 0.041456 / 0.075469 (-0.034013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973934 / 1.841788 (-0.867853) | 11.288665 / 8.074308 (3.214356) | 9.269404 / 10.191392 (-0.921988) | 0.143190 / 0.680424 (-0.537234) | 0.014366 / 0.534201 (-0.519835) | 0.285936 / 0.579283 (-0.293347) | 0.261632 / 0.434364 (-0.172732) | 0.327191 / 0.540337 (-0.213146) | 0.418900 / 1.386936 (-0.968036) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005131 / 0.011353 (-0.006222) | 0.003181 / 0.011008 (-0.007827) | 0.049697 / 0.038508 (0.011189) | 0.032754 / 0.023109 (0.009645) | 0.263954 / 0.275898 (-0.011944) | 0.285110 / 0.323480 (-0.038370) | 0.004133 / 0.007986 (-0.003852) | 0.002713 / 0.004328 (-0.001615) | 0.051684 / 0.004250 (0.047433) | 0.040607 / 0.037052 (0.003554) | 0.277919 / 0.258489 (0.019429) | 0.304773 / 0.293841 (0.010932) | 0.029530 / 0.128546 (-0.099016) | 0.010176 / 0.075646 (-0.065470) | 0.058501 / 0.419271 (-0.360771) | 0.033436 / 0.043533 (-0.010097) | 0.269899 / 0.255139 (0.014760) | 0.284490 / 0.283200 (0.001290) | 0.017092 / 0.141683 (-0.124591) | 1.132399 / 1.452155 (-0.319756) | 1.167290 / 1.492716 (-0.325427) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094460 / 0.018006 (0.076454) | 0.301462 / 0.000490 (0.300972) | 0.000202 / 0.000200 (0.000002) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022767 / 0.037411 (-0.014645) | 0.075993 / 0.014526 (0.061467) | 0.087729 / 0.176557 (-0.088827) | 0.127599 / 0.737135 (-0.609536) | 0.088873 / 0.296338 (-0.207465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286420 / 0.215209 (0.071211) | 2.811376 / 2.077655 (0.733722) | 1.558645 / 1.504120 (0.054525) | 1.426371 / 1.541195 (-0.114824) | 1.422347 / 1.468490 (-0.046143) | 0.567181 / 4.584777 (-4.017596) | 0.936731 / 3.745712 (-2.808982) | 2.643566 / 5.269862 (-2.626296) | 1.727843 / 4.565676 (-2.837834) | 0.062748 / 0.424275 (-0.361527) | 0.005033 / 0.007607 (-0.002574) | 0.339708 / 0.226044 (0.113663) | 3.354119 / 2.268929 (1.085190) | 1.877594 / 55.444624 (-53.567030) | 1.589202 / 6.876477 (-5.287274) | 1.707780 / 2.142072 (-0.434292) | 0.644520 / 4.805227 (-4.160708) | 0.115226 / 6.500664 (-6.385438) | 0.040004 / 0.075469 (-0.035465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002774 / 1.841788 (-0.839014) | 11.812647 / 8.074308 (3.738339) | 10.384198 / 10.191392 (0.192806) | 0.131120 / 0.680424 (-0.549304) | 0.014862 / 0.534201 (-0.519339) | 0.282873 / 0.579283 (-0.296410) | 0.120415 / 0.434364 (-0.313949) | 0.321995 / 0.540337 (-0.218343) | 0.441987 / 1.386936 (-0.944949) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b12a2c5016499cc1d110798c6815f0245f61010e \"CML watermark\")\n" ]
Remove dead code for non-dict data_files from packaged modules
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6911/reactions" }
PR_kwDODunzps5wE2ah
{ "diff_url": "https://github.com/huggingface/datasets/pull/6911.diff", "html_url": "https://github.com/huggingface/datasets/pull/6911", "merged_at": "2024-05-23T07:59:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/6911.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6911" }
2024-05-21T12:10:24Z
https://api.github.com/repos/huggingface/datasets/issues/6911/comments
Remove dead code for non-dict data_files from packaged modules. Since the merge of this PR:
- #2986

the builders' `self.config.data_files` variable is always a dict, which makes the condition on `(str, list, tuple)` dead code.
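For readers skimming the changelog, the kind of branch being removed looks roughly like the sketch below. This is a hypothetical illustration of the pattern, not the actual source; the function name `_normalize_data_files` is made up for the example.

```python
def _normalize_data_files(data_files):
    # Dead branch: since #2986 the packaged builders always receive a dict
    # mapping split names to lists of files, so the str/list/tuple
    # normalization below can never trigger.
    if isinstance(data_files, (str, list, tuple)):
        files = [data_files] if isinstance(data_files, str) else list(data_files)
        data_files = {"train": files}
    return data_files
```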
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6911/timeline
closed
false
6,911
null
2024-05-23T07:59:57Z
null
true
2,307,570,084
https://api.github.com/repos/huggingface/datasets/issues/6910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6910/events
[]
null
2024-05-23T06:04:05Z
[]
https://github.com/huggingface/datasets/pull/6910
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6910). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003757 / 0.011008 (-0.007251) | 0.063122 / 0.038508 (0.024614) | 0.029837 / 0.023109 (0.006727) | 0.246120 / 0.275898 (-0.029778) | 0.268529 / 0.323480 (-0.054951) | 0.004136 / 0.007986 (-0.003849) | 0.002650 / 0.004328 (-0.001678) | 0.048749 / 0.004250 (0.044499) | 0.045279 / 0.037052 (0.008226) | 0.257970 / 0.258489 (-0.000519) | 0.285993 / 0.293841 (-0.007848) | 0.027612 / 0.128546 (-0.100935) | 0.010175 / 0.075646 (-0.065471) | 0.207373 / 0.419271 (-0.211899) | 0.037672 / 0.043533 (-0.005861) | 0.249603 / 0.255139 (-0.005536) | 0.271081 / 0.283200 (-0.012119) | 0.018174 / 0.141683 (-0.123509) | 1.116703 / 1.452155 (-0.335452) | 1.169261 / 1.492716 (-0.323455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095161 / 0.018006 (0.077155) | 0.301112 / 0.000490 (0.300623) | 0.000221 / 0.000200 (0.000021) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023218 / 0.037411 (-0.014193) | 0.063125 / 0.014526 (0.048599) | 0.075857 / 0.176557 (-0.100699) | 0.137922 / 0.737135 (-0.599213) | 0.076989 / 0.296338 (-0.219349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279272 / 0.215209 (0.064063) | 2.776463 / 2.077655 (0.698809) | 1.472220 / 1.504120 (-0.031900) | 1.347105 / 1.541195 (-0.194090) | 1.361014 / 1.468490 (-0.107476) | 0.589233 / 4.584777 (-3.995544) | 2.395212 / 3.745712 (-1.350500) | 2.794855 / 5.269862 (-2.475007) | 1.698350 / 4.565676 (-2.867327) | 0.063328 / 0.424275 (-0.360947) | 0.005020 / 0.007607 (-0.002588) | 0.335872 / 0.226044 (0.109828) | 3.293486 / 2.268929 (1.024558) | 1.837270 / 55.444624 (-53.607354) | 1.535694 / 6.876477 (-5.340782) | 1.559696 / 2.142072 (-0.582376) | 0.639302 / 4.805227 (-4.165925) | 0.116554 / 6.500664 (-6.384110) | 0.042305 / 0.075469 (-0.033164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971562 / 1.841788 (-0.870226) | 11.710500 / 8.074308 (3.636192) | 9.505935 / 10.191392 (-0.685457) | 0.139161 / 0.680424 (-0.541263) | 0.014351 / 0.534201 (-0.519850) | 0.285790 / 0.579283 (-0.293493) | 0.265718 / 0.434364 (-0.168646) | 0.323558 / 0.540337 (-0.216780) | 0.412635 / 1.386936 (-0.974301) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005987 / 0.011353 (-0.005366) | 0.003787 / 0.011008 (-0.007221) | 0.049839 / 0.038508 (0.011331) | 0.032817 / 0.023109 (0.009708) | 0.268304 / 0.275898 (-0.007594) | 0.303409 / 0.323480 (-0.020071) | 0.004924 / 0.007986 (-0.003061) | 0.002740 / 0.004328 (-0.001589) | 0.048906 / 0.004250 (0.044655) | 0.044266 / 0.037052 (0.007213) | 0.290506 / 0.258489 (0.032017) | 0.314124 / 0.293841 (0.020283) | 0.030242 / 0.128546 (-0.098304) | 0.010555 / 0.075646 (-0.065091) | 0.058849 / 0.419271 (-0.360423) | 0.033540 / 0.043533 (-0.009993) | 0.267833 / 0.255139 (0.012694) | 0.291056 / 0.283200 (0.007857) | 0.018611 / 0.141683 (-0.123072) | 1.137620 / 1.452155 (-0.314534) | 1.199554 / 1.492716 (-0.293162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096716 / 0.018006 (0.078709) | 0.302033 / 0.000490 (0.301543) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023208 / 0.037411 (-0.014203) | 0.076231 / 0.014526 (0.061705) | 0.088672 / 0.176557 (-0.087884) | 0.129033 / 0.737135 (-0.608103) | 0.090709 / 0.296338 (-0.205630) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297033 / 0.215209 (0.081824) | 2.951181 / 2.077655 (0.873526) | 1.567690 / 1.504120 (0.063570) | 1.436809 / 1.541195 (-0.104385) | 1.469696 / 1.468490 (0.001206) | 0.567963 / 4.584777 (-4.016813) | 0.954168 / 3.745712 (-2.791544) | 2.700473 / 5.269862 (-2.569389) | 1.742144 / 4.565676 (-2.823532) | 0.065027 / 0.424275 (-0.359248) | 0.005319 / 0.007607 (-0.002288) | 0.346459 / 0.226044 (0.120415) | 3.446117 / 2.268929 (1.177189) | 1.953142 / 55.444624 (-53.491483) | 1.639131 / 6.876477 (-5.237346) | 1.830664 / 2.142072 (-0.311409) | 0.657807 / 4.805227 (-4.147420) | 0.117987 / 6.500664 (-6.382678) | 0.040726 / 0.075469 (-0.034744) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992666 / 1.841788 (-0.849122) | 12.305377 / 8.074308 (4.231069) | 10.274829 / 10.191392 (0.083437) | 0.141731 / 0.680424 (-0.538692) | 0.015100 / 0.534201 (-0.519101) | 0.282298 / 0.579283 (-0.296985) | 0.124301 / 0.434364 (-0.310063) | 0.320914 / 0.540337 (-0.219424) | 0.445855 / 1.386936 (-0.941081) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b66daa02b3307079a90fbfd13856e9bec0fc1ab \"CML watermark\")\n" ]
Fix wrong type hints in data_files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6910/reactions" }
PR_kwDODunzps5wC2An
{ "diff_url": "https://github.com/huggingface/datasets/pull/6910.diff", "html_url": "https://github.com/huggingface/datasets/pull/6910", "merged_at": "2024-05-23T05:58:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/6910.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6910" }
2024-05-21T07:41:09Z
https://api.github.com/repos/huggingface/datasets/issues/6910/comments
Fix wrong type hints in `data_files` introduced in:
- #6493
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6910/timeline
closed
false
6,910
null
2024-05-23T05:58:05Z
null
true
2,307,508,120
https://api.github.com/repos/huggingface/datasets/issues/6909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6909/events
[]
null
2024-05-21T07:45:58Z
[]
https://github.com/huggingface/datasets/pull/6909
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6909). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005375 / 0.011353 (-0.005978) | 0.004005 / 0.011008 (-0.007003) | 0.062407 / 0.038508 (0.023899) | 0.032241 / 0.023109 (0.009131) | 0.256092 / 0.275898 (-0.019806) | 0.285740 / 0.323480 (-0.037740) | 0.004146 / 0.007986 (-0.003839) | 0.002831 / 0.004328 (-0.001497) | 0.049179 / 0.004250 (0.044928) | 0.048303 / 0.037052 (0.011251) | 0.270841 / 0.258489 (0.012352) | 0.303209 / 0.293841 (0.009368) | 0.027642 / 0.128546 (-0.100905) | 0.010661 / 0.075646 (-0.064985) | 0.201999 / 0.419271 (-0.217272) | 0.036532 / 0.043533 (-0.007001) | 0.262441 / 0.255139 (0.007302) | 0.280944 / 0.283200 (-0.002256) | 0.018369 / 0.141683 (-0.123314) | 1.122249 / 1.452155 (-0.329906) | 1.171352 / 1.492716 (-0.321364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096433 / 0.018006 (0.078427) | 0.297272 / 0.000490 (0.296782) | 0.000222 / 0.000200 (0.000023) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019645 / 0.037411 (-0.017766) | 0.062744 / 0.014526 (0.048219) | 0.076096 / 0.176557 (-0.100460) | 0.121882 / 0.737135 (-0.615253) | 0.076267 / 0.296338 (-0.220072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274159 / 0.215209 (0.058950) | 2.729371 / 2.077655 (0.651716) | 1.454328 / 1.504120 (-0.049792) | 1.330517 / 1.541195 (-0.210678) | 1.338832 / 1.468490 (-0.129658) | 0.600252 / 4.584777 (-3.984525) | 2.388658 / 3.745712 (-1.357054) | 2.837717 / 5.269862 (-2.432145) | 1.747329 / 4.565676 (-2.818347) | 0.064620 / 0.424275 (-0.359655) | 0.004955 / 0.007607 (-0.002653) | 0.340253 / 0.226044 (0.114209) | 3.351559 / 2.268929 (1.082630) | 1.822718 / 55.444624 (-53.621907) | 1.518663 / 6.876477 (-5.357814) | 1.548066 / 2.142072 (-0.594006) | 0.663525 / 4.805227 (-4.141702) | 0.118334 / 6.500664 (-6.382331) | 0.042060 / 0.075469 (-0.033410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976509 / 1.841788 (-0.865278) | 11.703321 / 8.074308 (3.629013) | 9.305605 / 10.191392 (-0.885787) | 0.131016 / 0.680424 (-0.549408) | 0.014299 / 0.534201 (-0.519902) | 0.293963 / 0.579283 (-0.285320) | 0.264018 / 0.434364 (-0.170345) | 0.330265 / 0.540337 (-0.210073) | 0.427239 / 1.386936 (-0.959697) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005437 / 0.011353 (-0.005916) | 0.003774 / 0.011008 (-0.007234) | 0.049927 / 0.038508 (0.011419) | 0.032246 / 0.023109 (0.009137) | 0.271808 / 0.275898 (-0.004090) | 0.295652 / 0.323480 (-0.027828) | 0.004220 / 0.007986 (-0.003766) | 0.002803 / 0.004328 (-0.001525) | 0.049656 / 0.004250 (0.045406) | 0.041938 / 0.037052 (0.004885) | 0.282199 / 0.258489 (0.023710) | 0.310206 / 0.293841 (0.016365) | 0.030389 / 0.128546 (-0.098157) | 0.010593 / 0.075646 (-0.065054) | 0.057862 / 0.419271 (-0.361409) | 0.033937 / 0.043533 (-0.009596) | 0.268920 / 0.255139 (0.013781) | 0.286000 / 0.283200 (0.002800) | 0.018766 / 0.141683 (-0.122917) | 1.118556 / 1.452155 (-0.333599) | 1.175083 / 1.492716 (-0.317633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095135 / 0.018006 (0.077129) | 0.304735 / 0.000490 (0.304245) | 0.000210 / 0.000200 (0.000010) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022971 / 0.037411 (-0.014441) | 0.076204 / 0.014526 (0.061678) | 0.090801 / 0.176557 (-0.085756) | 0.130149 / 0.737135 (-0.606987) | 0.090986 / 0.296338 (-0.205352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298535 / 0.215209 (0.083326) | 2.882959 / 2.077655 (0.805304) | 1.574018 / 1.504120 (0.069899) | 1.445251 / 1.541195 (-0.095944) | 1.483651 / 1.468490 (0.015160) | 0.572012 / 4.584777 (-4.012765) | 0.972223 / 3.745712 (-2.773489) | 2.745776 / 5.269862 (-2.524085) | 1.783980 / 4.565676 (-2.781697) | 0.063910 / 0.424275 (-0.360365) | 0.005397 / 0.007607 (-0.002210) | 0.349104 / 0.226044 (0.123059) | 3.433303 / 2.268929 (1.164374) | 1.961506 / 55.444624 (-53.483119) | 1.665905 / 6.876477 (-5.210571) | 1.800977 / 2.142072 (-0.341095) | 0.655843 / 4.805227 (-4.149384) | 0.118320 / 6.500664 (-6.382345) | 0.041748 / 0.075469 (-0.033722) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006835 / 1.841788 (-0.834952) | 12.506123 / 8.074308 (4.431815) | 10.564310 / 10.191392 (0.372918) | 0.143121 / 0.680424 (-0.537303) | 0.016340 / 0.534201 (-0.517861) | 0.284181 / 0.579283 (-0.295102) | 0.125975 / 0.434364 (-0.308389) | 0.324369 / 0.540337 (-0.215969) | 0.443713 / 1.386936 (-0.943223) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#60d21efbc01e15d0b596ac1072750cbecd91548a \"CML watermark\")\n" ]
Update requests >=2.32.1 to fix vulnerability
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6909/reactions" }
PR_kwDODunzps5wCoiE
{ "diff_url": "https://github.com/huggingface/datasets/pull/6909.diff", "html_url": "https://github.com/huggingface/datasets/pull/6909", "merged_at": "2024-05-21T07:38:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/6909.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6909" }
2024-05-21T07:11:20Z
https://api.github.com/repos/huggingface/datasets/issues/6909/comments
Update requests to >=2.32.1 to fix a vulnerability.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6909/timeline
closed
false
6,909
null
2024-05-21T07:38:25Z
null
true
2,304,958,116
https://api.github.com/repos/huggingface/datasets/issues/6908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6908/events
[]
null
2024-05-24T10:58:09Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6908
NONE
completed
null
null
[ "I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.'}\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", download_mode=\"force_redownload\"); ds\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 13.3M/13.3M [00:00<00:00, 18.7MB/s]\r\nGenerating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10000/10000 [00:00<00:00, 78548.55 examples/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nLooking at your error traceback, I notice that the code line numbers do not correspond to the ones of datasets 2.19.1.\r\n\r\nAdditionally, I can't reproduce the issue with `HfFileSystem`:\r\n```python\r\nIn [1]: from huggingface_hub import HfFileSystem\r\n\r\nIn [2]: fs = HfFileSystem()\r\n\r\nIn [3]: with fs.open(\"datasets/stas/c4-en-10k/c4-en-10k.py\", \"rb\") as f:\r\n ...: data = f.read()\r\n ...: \r\n\r\nIn [4]: data[:20]\r\nOut[4]: b'# coding=utf-8\\n# Cop'\r\n```\r\n\r\nCould you please verify the `datasets` and `huggingface_hub` versions you are indeed using?\r\n```python\r\nimport datasets; print(datasets.__version__)\r\n\r\nimport huggingface_hub; print(huggingface_hub.__version__)\r\n```", "Thanks for your reply! After I update the datasets version from 2.15.0 back to 2.19.1 again, it seems everything work well. Sorry for bordering you!" ]
Fail to load "stas/c4-en-10k" dataset since 2.16 version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions" }
I_kwDODunzps6JYt6k
null
2024-05-20T02:43:59Z
https://api.github.com/repos/huggingface/datasets/issues/6908/comments
### Describe the bug

After updating the datasets library to version 2.16+ (I tested it on 2.16, 2.19.0 and 2.19.1), loading the stas/c4-en-10k dataset with the following code

```python
from datasets import load_dataset

dataset = load_dataset('stas/c4-en-10k')
```

raises a UnicodeDecodeError:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory
    raise e1 from None
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory
    can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
  File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```

I found that the file opened via `fs.open` contains gzip-compressed bytes, which are then parsed as plain text with the utf-8 decoder:

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem(endpoint='https://huggingface.co')
with fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb") as f:
    data = f.read()  # data is gzip bytes beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...'
data2 = unzip_gzip_bytes(data)  # data2 is what we want: '# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...'
```

### Steps to reproduce the bug

1. Install datasets between version 2.16 and 2.19.
2. Use the `datasets.load_dataset` method to load the `stas/c4-en-10k` dataset.

### Expected behavior

The dataset loads normally.

### Environment info

Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35
Python = 3.10.14
Datasets = 2.19
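The `unzip_gzip_bytes` helper above is the reporter's placeholder, not a real library function. A minimal sketch of it, assuming the payload really is gzip-compressed, could be:

```python
import gzip

def unzip_gzip_bytes(data: bytes) -> str:
    # Transparently decompress gzip-compressed bytes and decode as UTF-8.
    return gzip.decompress(data).decode("utf-8")
```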
{ "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/guch8017", "id": 38173059, "login": "guch8017", "node_id": "MDQ6VXNlcjM4MTczMDU5", "organizations_url": "https://api.github.com/users/guch8017/orgs", "received_events_url": "https://api.github.com/users/guch8017/received_events", "repos_url": "https://api.github.com/users/guch8017/repos", "site_admin": false, "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "type": "User", "url": "https://api.github.com/users/guch8017" }
https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6908/timeline
closed
false
6,908
null
2024-05-24T10:58:09Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,303,855,833
https://api.github.com/repos/huggingface/datasets/issues/6907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6907/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-05-18T08:53:28Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6907
NONE
null
null
null
[ "Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new revision.\r\n\r\nWith that said, for a static dataset that is not regularly updated like mine, and particularly for extremely large datasets with millions or billions of rows, using arrays could have a meaningful impact, and so there is probably still value in supporting this structure, provided the effort is not too much." ]
Support the deserialization of json lines files comprised of lists
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions" }
I_kwDODunzps6JUgzZ
null
2024-05-18T05:07:23Z
https://api.github.com/repos/huggingface/datasets/issues/6907/comments
### Feature request

I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields.

Essentially, a line in my json lines file used to look like this:

```json
{"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""}
```

And now it looks like this:

```json
["","","","","","","",""]
```

This saves 65 bytes per document and allows me to serialise and deserialise documents very quickly via `msgspec`.

After making this change, I found that `datasets` was incapable of deserialising my Corpus without a custom loading script, even if I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries.

### Motivation

The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that:

> In the next major release, the new safety features of πŸ€— Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script.

I would rather not require my users to pass `trust_remote_code=True`, which means that I will need built-in support for this format.

### Your contribution

I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
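Until such support exists, one workaround (a sketch, not official `datasets` functionality) is to parse the array-per-line file yourself and build the `Dataset` column-wise. The file name `corpus.jsonl` is a placeholder, and the column order is assumed from the schema shown above:

```python
import json
from datasets import Dataset

columns = ["version_id", "type", "jurisdiction", "source",
           "citation", "url", "when_scraped", "text"]

with open("corpus.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]  # each line is a JSON array

# Transpose the row-wise arrays into named columns.
ds = Dataset.from_dict({name: [row[i] for row in rows]
                        for i, name in enumerate(columns)})
```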
{ "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/umarbutler", "id": 8473183, "login": "umarbutler", "node_id": "MDQ6VXNlcjg0NzMxODM=", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "repos_url": "https://api.github.com/users/umarbutler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "type": "User", "url": "https://api.github.com/users/umarbutler" }
https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6907/timeline
open
false
6,907
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false