| Column | Type | Range / Classes |
|---|---|---|
| url | stringlengths | 61-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75-75 |
| comments_url | stringlengths | 70-70 |
| events_url | stringlengths | 68-68 |
| html_url | stringlengths | 49-51 |
| id | int64 | 840M-2.49B |
| node_id | stringlengths | 18-32 |
| number | int64 | 2.11k-7.13k |
| title | stringlengths | 1-290 |
| user | dict | |
| labels | listlengths | 0-4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0-4 |
| milestone | dict | |
| comments | sequencelengths | 0-30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 4 values |
| active_lock_reason | float64 | |
| body | stringlengths | 0-36.2k |
| reactions | dict | |
| timeline_url | stringlengths | 70-70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| draft | float64 | 0-1 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/7128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7128/comments
https://api.github.com/repos/huggingface/datasets/issues/7128/events
https://github.com/huggingface/datasets/issues/7128
2,490,274,775
I_kwDODunzps6UbpPX
7,128
Filter Large Dataset Entry by Entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/36057290?v=4", "events_url": "https://api.github.com/users/QiyaoWei/events{/privacy}", "followers_url": "https://api.github.com/users/QiyaoWei/followers", "following_url": "https://api.github.com/users/QiyaoWei/following{/other_user}", "gists_url": "https://api.github.com/users/QiyaoWei/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/QiyaoWei", "id": 36057290, "login": "QiyaoWei", "node_id": "MDQ6VXNlcjM2MDU3Mjkw", "organizations_url": "https://api.github.com/users/QiyaoWei/orgs", "received_events_url": "https://api.github.com/users/QiyaoWei/received_events", "repos_url": "https://api.github.com/users/QiyaoWei/repos", "site_admin": false, "starred_url": "https://api.github.com/users/QiyaoWei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QiyaoWei/subscriptions", "type": "User", "url": "https://api.github.com/users/QiyaoWei" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2024-08-27T20:31:09"
"2024-08-27T20:31:09"
null
NONE
null
### Feature request

I am not sure if this is a new feature, but I wanted to post this problem here and hear whether others have ways of optimizing and speeding up this process.

Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` for loading the dataset. Now, the dataset consists of many tables. Ideally, I would want some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:

```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset("really-large-dataset", streaming=True, split="train")

# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)

# Define a function to filter the data
def filter_function(table):
    some_condition = ...  # placeholder for the actual "good table" criterion
    return some_condition

# Use the filter function on the dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```

I would then work on the filtered dataset, which would be orders of magnitude faster than working on the original. I would love to hear whether the problem setup and solution make sense to people, and whether anyone has suggestions!

### Motivation

See description above.

### Your contribution

Happy to make a PR if this is a new feature.
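A note on the built-in alternative: `datasets` already exposes a lazy `IterableDataset.filter`, which applies the predicate on the fly while streaming. A minimal sketch, assuming a `train` split and a hypothetical `is_good_table` criterion ("really-large-dataset" is the issue's placeholder name):

```python
from datasets import load_dataset

dataset = load_dataset("really-large-dataset", streaming=True, split="train")

def is_good_table(example):
    # hypothetical criterion; replace with the real "good table" check
    return example is not None

# IterableDataset.filter is lazy: the predicate runs as examples stream in
filtered = dataset.filter(is_good_table)

# take() bounds the stream, mirroring the islice-based intermediate processing
for example in filtered.take(10_000):
    pass  # work on the filtered example
```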
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7128/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7128/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7127/comments
https://api.github.com/repos/huggingface/datasets/issues/7127/events
https://github.com/huggingface/datasets/issues/7127
2,486,524,966
I_kwDODunzps6UNVwm
7,127
Caching shuffles by np.random.Generator results in unintuitive behavior
{ "avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4", "events_url": "https://api.github.com/users/el-hult/events{/privacy}", "followers_url": "https://api.github.com/users/el-hult/followers", "following_url": "https://api.github.com/users/el-hult/following{/other_user}", "gists_url": "https://api.github.com/users/el-hult/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/el-hult", "id": 11832922, "login": "el-hult", "node_id": "MDQ6VXNlcjExODMyOTIy", "organizations_url": "https://api.github.com/users/el-hult/orgs", "received_events_url": "https://api.github.com/users/el-hult/received_events", "repos_url": "https://api.github.com/users/el-hult/repos", "site_admin": false, "starred_url": "https://api.github.com/users/el-hult/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/el-hult/subscriptions", "type": "User", "url": "https://api.github.com/users/el-hult" }
[]
open
false
null
[]
null
[ "I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/arrow_dataset.py#L4306-L4316\r\n\r\nbecause the shuffle happens after checking the cache, the rng state won't advance if the cache is used. This is VERY confusing. Also not documented.\r\n\r\nMy proposal is that you remove the API for using a Generator, and only keep the seed-based API since that is functional and cache-compatible." ]
"2024-08-26T10:29:48"
"2024-08-26T10:35:57"
null
NONE
null
### Describe the bug

Create a dataset. Save it to disk. Load it from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates differ, since the supplied `np.random.Generator` has progressed between the shuffles. Load the dataset from disk again. Shuffle and iterate: you see the same result as before. Shuffle and iterate again, and this time it does not have the same shuffling as in the previous run.

The motivation is that I have a deep learning loop with

```python
for epoch in range(10):
    for batch in dataset.shuffle(generator=generator).iter(batch_size=32):
        ...  # do stuff
```

where I want a new shuffling at every epoch. Instead, I get the same shuffling.

### Steps to reproduce the bug

Run the code below two times.

```python
import datasets
import numpy as np

generator = np.random.default_rng(0)
ds = datasets.Dataset.from_dict(mapping={"X": range(1000)})
ds.save_to_disk("tmp")

print("First loop: ", end="")
for _ in range(10):
    print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")

print("Second loop: ", end="")
ds = datasets.Dataset.load_from_disk("tmp")
for _ in range(10):
    print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
```

The output is:

```
$ python main.py
Saving the dataset (1/1 shards): 100%|████████████████████████| 1000/1000 [00:00<00:00, 495019.95 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840,
$ python main.py
Saving the dataset (1/1 shards): 100%|████████████████████████| 1000/1000 [00:00<00:00, 22243.40 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741,
```

The second loop, on the second run, only prints "741, 741, 741, ...", which is *not* the desired output.

### Expected behavior

I want the dataset to shuffle at every epoch, since I provide it with a generator for shuffling.

### Environment info

Datasets version 2.21.0, Ubuntu Linux.
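A cache-compatible workaround consistent with the comment above is to derive a fresh integer seed per epoch instead of passing a live `np.random.Generator`. A minimal sketch of that workaround, not the library's documented recommendation:

```python
import datasets

ds = datasets.Dataset.from_dict(mapping={"X": range(1000)})

for epoch in range(10):
    # A distinct seed per epoch gives each shuffle its own fingerprint,
    # so a stale cached order is never reused across epochs.
    for batch in ds.shuffle(seed=epoch).iter(batch_size=32):
        pass  # do stuff
```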
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7127/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7127/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7126/comments
https://api.github.com/repos/huggingface/datasets/issues/7126/events
https://github.com/huggingface/datasets/pull/7126
2,485,939,495
PR_kwDODunzps55Y-Ws
7,126
Disable implicit token in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003428 / 0.011008 (-0.007580) | 0.062673 / 0.038508 (0.024164) | 0.030111 / 0.023109 (0.007002) | 0.238017 / 0.275898 (-0.037881) | 0.262655 / 0.323480 (-0.060825) | 0.003015 / 0.007986 (-0.004971) | 0.002664 / 0.004328 (-0.001665) | 0.050010 / 0.004250 (0.045759) | 0.045620 / 0.037052 (0.008567) | 0.251800 / 0.258489 (-0.006689) | 0.278829 / 0.293841 (-0.015011) | 0.029838 / 0.128546 (-0.098709) | 0.011703 / 0.075646 (-0.063943) | 0.204503 / 0.419271 (-0.214768) | 0.036173 / 0.043533 (-0.007359) | 0.242850 / 0.255139 (-0.012289) | 0.263811 / 0.283200 (-0.019389) | 0.019027 / 0.141683 (-0.122656) | 1.168028 / 1.452155 (-0.284126) | 1.208975 / 1.492716 (-0.283742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091309 / 0.018006 (0.073303) | 0.299583 / 0.000490 (0.299093) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018451 / 0.037411 (-0.018960) | 0.062516 / 0.014526 (0.047991) | 0.073983 / 0.176557 (-0.102573) | 0.120952 / 0.737135 (-0.616184) | 0.075275 / 0.296338 (-0.221063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286870 / 0.215209 (0.071661) | 2.810498 / 2.077655 (0.732843) | 1.490028 / 1.504120 (-0.014092) | 1.362249 / 1.541195 (-0.178946) | 1.368939 / 1.468490 (-0.099551) | 0.736643 / 4.584777 (-3.848134) | 2.414237 / 3.745712 (-1.331475) | 2.898911 / 5.269862 (-2.370951) | 1.840630 / 4.565676 (-2.725047) | 0.077872 / 0.424275 (-0.346403) | 0.005087 / 0.007607 (-0.002520) | 0.337054 / 0.226044 (0.111009) | 3.390734 / 2.268929 (1.121806) | 1.844174 / 55.444624 (-53.600451) | 1.532741 / 6.876477 (-5.343736) | 1.551650 / 2.142072 (-0.590422) | 0.778642 / 4.805227 (-4.026585) | 0.131899 / 6.500664 (-6.368765) | 0.041801 / 0.075469 (-0.033668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958362 / 1.841788 (-0.883425) | 11.323330 / 8.074308 (3.249022) | 9.396199 / 10.191392 (-0.795193) | 0.131154 / 0.680424 (-0.549270) | 0.014705 / 0.534201 (-0.519496) | 0.302424 / 0.579283 (-0.276859) | 0.261870 / 0.434364 (-0.172494) | 0.340788 / 0.540337 (-0.199550) | 0.433360 / 1.386936 (-0.953576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005571 / 0.011353 (-0.005782) | 0.003388 / 0.011008 (-0.007621) | 0.050366 / 0.038508 (0.011858) | 0.032633 / 0.023109 (0.009524) | 0.261847 / 0.275898 (-0.014051) | 0.292197 / 0.323480 (-0.031283) | 0.005070 / 0.007986 (-0.002916) | 0.002753 / 0.004328 (-0.001575) | 0.048613 / 0.004250 (0.044363) | 0.040272 / 0.037052 (0.003219) | 0.275441 / 0.258489 (0.016952) | 0.309175 / 0.293841 (0.015334) | 0.032403 / 0.128546 (-0.096143) | 0.011734 / 0.075646 (-0.063912) | 0.059532 / 0.419271 (-0.359740) | 0.033886 / 0.043533 (-0.009647) | 0.263453 / 0.255139 (0.008314) | 0.281997 / 0.283200 (-0.001203) | 0.018522 / 0.141683 (-0.123161) | 1.150364 / 1.452155 (-0.301791) | 1.204090 / 1.492716 (-0.288627) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093129 / 0.018006 (0.075123) | 0.303691 / 0.000490 (0.303201) | 0.000231 / 0.000200 (0.000031) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022084 / 0.037411 (-0.015327) | 0.076354 / 0.014526 (0.061828) | 0.087710 / 0.176557 (-0.088847) | 0.128907 / 0.737135 (-0.608228) | 0.088603 / 0.296338 (-0.207735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301161 / 0.215209 (0.085952) | 2.954780 / 2.077655 (0.877125) | 1.601366 / 1.504120 (0.097246) | 1.477225 / 1.541195 (-0.063970) | 1.482355 / 1.468490 (0.013865) | 0.722461 / 4.584777 (-3.862315) | 0.981439 / 3.745712 (-2.764273) | 2.927006 / 5.269862 (-2.342856) | 1.884444 / 4.565676 (-2.681233) | 0.079044 / 0.424275 (-0.345231) | 0.005530 / 0.007607 (-0.002077) | 0.347082 / 0.226044 (0.121037) | 3.491984 / 2.268929 (1.223056) | 1.944317 / 55.444624 (-53.500307) | 1.645792 / 6.876477 (-5.230685) | 1.649506 / 2.142072 (-0.492567) | 0.800822 / 4.805227 (-4.004405) | 0.133936 / 6.500664 (-6.366729) | 0.041198 / 0.075469 (-0.034271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029764 / 1.841788 (-0.812024) | 11.928840 / 8.074308 (3.854532) | 10.021390 / 10.191392 (-0.170002) | 0.141608 / 0.680424 (-0.538816) | 0.014921 / 0.534201 (-0.519280) | 0.302050 / 0.579283 (-0.277233) | 0.124151 / 0.434364 (-0.310213) | 0.347143 / 0.540337 (-0.193195) | 0.467649 / 1.386936 (-0.919287) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4c87a6bf57b3aa094c28895c5b89b91b3509c58 \"CML watermark\")\n" ]
"2024-08-26T05:29:46"
"2024-08-26T06:05:01"
"2024-08-26T05:59:15"
MEMBER
null
Disable implicit token in CI.

This PR allows running the CI tests locally without implicitly using the local user's HF token. For example, it makes it possible to run locally the tests in:
- #7124
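For reference, `huggingface_hub` exposes the `HF_HUB_DISABLE_IMPLICIT_TOKEN` environment variable for this purpose; whether the PR relies on that exact mechanism is an assumption. A minimal pytest sketch:

```python
import pytest

@pytest.fixture(autouse=True)
def disable_implicit_hf_token(monkeypatch):
    # Assumption: huggingface_hub honors HF_HUB_DISABLE_IMPLICIT_TOKEN;
    # the PR's actual implementation may differ.
    monkeypatch.setenv("HF_HUB_DISABLE_IMPLICIT_TOKEN", "1")
```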
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7126/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7126/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7126.diff", "html_url": "https://github.com/huggingface/datasets/pull/7126", "merged_at": "2024-08-26T05:59:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/7126.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7126" }
true
https://api.github.com/repos/huggingface/datasets/issues/7125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7125/comments
https://api.github.com/repos/huggingface/datasets/issues/7125/events
https://github.com/huggingface/datasets/pull/7125
2,485,912,246
PR_kwDODunzps55Y4TM
7,125
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005741 / 0.011353 (-0.005612) | 0.004011 / 0.011008 (-0.006998) | 0.063962 / 0.038508 (0.025454) | 0.031512 / 0.023109 (0.008403) | 0.242249 / 0.275898 (-0.033649) | 0.269601 / 0.323480 (-0.053879) | 0.004502 / 0.007986 (-0.003483) | 0.002835 / 0.004328 (-0.001494) | 0.049878 / 0.004250 (0.045628) | 0.048012 / 0.037052 (0.010959) | 0.250454 / 0.258489 (-0.008035) | 0.283266 / 0.293841 (-0.010575) | 0.030752 / 0.128546 (-0.097794) | 0.012655 / 0.075646 (-0.062991) | 0.211043 / 0.419271 (-0.208229) | 0.037165 / 0.043533 (-0.006367) | 0.246815 / 0.255139 (-0.008324) | 0.264306 / 0.283200 (-0.018893) | 0.018343 / 0.141683 (-0.123340) | 1.140452 / 1.452155 (-0.311702) | 1.214849 / 1.492716 (-0.277867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098048 / 0.018006 (0.080042) | 0.292201 / 0.000490 (0.291712) | 0.000217 / 0.000200 (0.000017) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018732 / 0.037411 (-0.018679) | 0.062887 / 0.014526 (0.048361) | 0.074353 / 0.176557 (-0.102204) | 0.120794 / 0.737135 (-0.616341) | 0.077066 / 0.296338 (-0.219272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276335 / 0.215209 (0.061126) | 2.722905 / 2.077655 (0.645250) | 1.423080 / 1.504120 (-0.081040) | 1.305443 / 1.541195 (-0.235752) | 1.342142 / 1.468490 (-0.126348) | 0.741899 / 4.584777 (-3.842878) | 2.407567 / 3.745712 (-1.338145) | 3.070263 / 5.269862 (-2.199599) | 1.935732 / 4.565676 (-2.629944) | 0.081371 / 0.424275 (-0.342904) | 0.005207 / 0.007607 (-0.002401) | 0.328988 / 0.226044 (0.102943) | 3.240771 / 2.268929 (0.971842) | 1.801028 / 55.444624 (-53.643597) | 1.490593 / 6.876477 (-5.385884) | 1.521317 / 2.142072 (-0.620756) | 0.794051 / 4.805227 (-4.011176) | 0.136398 / 6.500664 (-6.364266) | 0.042902 / 0.075469 (-0.032567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974186 / 1.841788 (-0.867602) | 12.280011 / 8.074308 (4.205703) | 9.453389 / 10.191392 (-0.738003) | 0.132627 / 0.680424 (-0.547797) | 0.014608 / 0.534201 (-0.519593) | 0.309298 / 0.579283 (-0.269985) | 0.275911 / 0.434364 (-0.158452) | 0.348261 / 0.540337 (-0.192077) | 0.439031 / 1.386936 (-0.947905) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006248 / 0.011353 (-0.005105) | 0.004369 / 0.011008 (-0.006639) | 0.050588 / 0.038508 (0.012080) | 0.032880 / 0.023109 (0.009771) | 0.268979 / 0.275898 (-0.006919) | 0.294714 / 0.323480 (-0.028766) | 0.004518 / 0.007986 (-0.003467) | 0.002995 / 0.004328 (-0.001333) | 0.048776 / 0.004250 (0.044525) | 0.041696 / 0.037052 (0.004644) | 0.283413 / 0.258489 (0.024924) | 0.322137 / 0.293841 (0.028296) | 0.032809 / 0.128546 (-0.095737) | 0.012559 / 0.075646 (-0.063087) | 0.060456 / 0.419271 (-0.358815) | 0.034564 / 0.043533 (-0.008968) | 0.267263 / 0.255139 (0.012124) | 0.292633 / 0.283200 (0.009434) | 0.019011 / 0.141683 (-0.122672) | 1.199820 / 1.452155 (-0.252335) | 1.251829 / 1.492716 (-0.240887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097615 / 0.018006 (0.079609) | 0.313764 / 0.000490 (0.313274) | 0.000220 / 0.000200 (0.000020) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.089301 / 0.014526 (0.074775) | 0.092964 / 0.176557 (-0.083592) | 0.131724 / 0.737135 (-0.605412) | 0.094792 / 0.296338 (-0.201546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305119 / 0.215209 (0.089910) | 2.932192 / 2.077655 (0.854537) | 1.610573 / 1.504120 (0.106453) | 1.487502 / 1.541195 (-0.053693) | 1.533300 / 1.468490 (0.064810) | 0.717223 / 4.584777 (-3.867554) | 0.964402 / 3.745712 (-2.781310) | 3.111398 / 5.269862 (-2.158464) | 1.957942 / 4.565676 (-2.607734) | 0.079160 / 0.424275 (-0.345116) | 0.005639 / 0.007607 (-0.001968) | 0.358971 / 0.226044 (0.132927) | 3.564401 / 2.268929 (1.295472) | 2.043079 / 55.444624 (-53.401546) | 1.742681 / 6.876477 (-5.133795) | 1.784758 / 2.142072 (-0.357314) | 0.798508 / 4.805227 (-4.006719) | 0.133905 / 6.500664 (-6.366759) | 0.043008 / 0.075469 (-0.032461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031715 / 1.841788 (-0.810073) | 13.374312 / 8.074308 (5.300004) | 10.789098 / 10.191392 (0.597706) | 0.133663 / 0.680424 (-0.546761) | 0.016692 / 0.534201 (-0.517509) | 0.304716 / 0.579283 (-0.274567) | 0.129074 / 0.434364 (-0.305290) | 0.346440 / 0.540337 (-0.193897) | 0.464593 / 1.386936 (-0.922343) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#880a52cea337032d39e90e6f0dcc55198a75a285 \"CML watermark\")\n" ]
"2024-08-26T05:09:35"
"2024-08-26T05:33:15"
"2024-08-26T05:27:09"
MEMBER
null
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7125/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7125/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7125.diff", "html_url": "https://github.com/huggingface/datasets/pull/7125", "merged_at": "2024-08-26T05:27:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/7125.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7125" }
true
https://api.github.com/repos/huggingface/datasets/issues/7124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7124/comments
https://api.github.com/repos/huggingface/datasets/issues/7124/events
https://github.com/huggingface/datasets/pull/7124
2,485,890,442
PR_kwDODunzps55YzWr
7,124
Test get_dataset_config_info with non-existing/gated/private dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005339 / 0.011353 (-0.006014) | 0.003640 / 0.011008 (-0.007368) | 0.064012 / 0.038508 (0.025504) | 0.030424 / 0.023109 (0.007314) | 0.239966 / 0.275898 (-0.035932) | 0.264361 / 0.323480 (-0.059119) | 0.004247 / 0.007986 (-0.003739) | 0.002847 / 0.004328 (-0.001481) | 0.049640 / 0.004250 (0.045390) | 0.044903 / 0.037052 (0.007851) | 0.250174 / 0.258489 (-0.008315) | 0.281423 / 0.293841 (-0.012418) | 0.029419 / 0.128546 (-0.099127) | 0.012221 / 0.075646 (-0.063426) | 0.205907 / 0.419271 (-0.213365) | 0.036654 / 0.043533 (-0.006878) | 0.245805 / 0.255139 (-0.009334) | 0.265029 / 0.283200 (-0.018170) | 0.018081 / 0.141683 (-0.123602) | 1.113831 / 1.452155 (-0.338324) | 1.156443 / 1.492716 (-0.336274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134389 / 0.018006 (0.116383) | 0.300637 / 0.000490 (0.300147) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019111 / 0.037411 (-0.018300) | 0.062585 / 0.014526 (0.048059) | 0.075909 / 0.176557 (-0.100647) | 0.121382 / 0.737135 (-0.615753) | 0.074980 / 0.296338 (-0.221359) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285062 / 0.215209 (0.069853) | 2.850130 / 2.077655 (0.772476) | 1.519877 / 1.504120 (0.015757) | 1.388711 / 1.541195 (-0.152484) | 1.397284 / 1.468490 (-0.071206) | 0.723100 / 4.584777 (-3.861677) | 2.393184 / 3.745712 (-1.352529) | 2.908418 / 5.269862 (-2.361443) | 1.871024 / 4.565676 (-2.694653) | 0.078230 / 0.424275 (-0.346045) | 0.005158 / 0.007607 (-0.002449) | 0.345622 / 0.226044 (0.119577) | 3.357611 / 2.268929 (1.088683) | 1.844492 / 55.444624 (-53.600132) | 1.584237 / 6.876477 (-5.292240) | 1.577158 / 2.142072 (-0.564915) | 0.789702 / 4.805227 (-4.015525) | 0.132045 / 6.500664 (-6.368619) | 0.042304 / 0.075469 (-0.033165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977166 / 1.841788 (-0.864622) | 11.306118 / 8.074308 (3.231810) | 9.490778 / 10.191392 (-0.700614) | 0.143536 / 0.680424 (-0.536888) | 0.015304 / 0.534201 (-0.518897) | 0.313892 / 0.579283 (-0.265391) | 0.267009 / 0.434364 (-0.167355) | 0.345560 / 0.540337 (-0.194778) | 0.435649 / 1.386936 (-0.951287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005700 / 0.011353 (-0.005653) | 0.003490 / 0.011008 (-0.007519) | 0.049990 / 0.038508 (0.011482) | 0.032070 / 0.023109 (0.008961) | 0.272622 / 0.275898 (-0.003276) | 0.298265 / 0.323480 (-0.025215) | 0.004379 / 0.007986 (-0.003606) | 0.002786 / 0.004328 (-0.001543) | 0.048271 / 0.004250 (0.044020) | 0.040102 / 0.037052 (0.003050) | 0.286433 / 0.258489 (0.027944) | 0.319306 / 0.293841 (0.025465) | 0.032872 / 0.128546 (-0.095675) | 0.011870 / 0.075646 (-0.063776) | 0.059886 / 0.419271 (-0.359385) | 0.034281 / 0.043533 (-0.009252) | 0.275588 / 0.255139 (0.020450) | 0.292951 / 0.283200 (0.009751) | 0.018095 / 0.141683 (-0.123588) | 1.130870 / 1.452155 (-0.321285) | 1.190761 / 1.492716 (-0.301955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093346 / 0.018006 (0.075340) | 0.307506 / 0.000490 (0.307016) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022873 / 0.037411 (-0.014538) | 0.077070 / 0.014526 (0.062544) | 0.089152 / 0.176557 (-0.087404) | 0.130186 / 0.737135 (-0.606949) | 0.090244 / 0.296338 (-0.206095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297950 / 0.215209 (0.082740) | 2.942360 / 2.077655 (0.864705) | 1.614324 / 1.504120 (0.110204) | 1.495795 / 1.541195 (-0.045400) | 1.506155 / 1.468490 (0.037665) | 0.730307 / 4.584777 (-3.854470) | 0.966312 / 3.745712 (-2.779400) | 2.928955 / 5.269862 (-2.340906) | 1.940049 / 4.565676 (-2.625627) | 0.079589 / 0.424275 (-0.344686) | 0.006004 / 0.007607 (-0.001604) | 0.356630 / 0.226044 (0.130585) | 3.516652 / 2.268929 (1.247724) | 1.963196 / 55.444624 (-53.481429) | 1.674489 / 6.876477 (-5.201988) | 1.677558 / 2.142072 (-0.464514) | 0.806447 / 4.805227 (-3.998780) | 0.133819 / 6.500664 (-6.366845) | 0.040762 / 0.075469 (-0.034707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038495 / 1.841788 (-0.803293) | 11.829186 / 8.074308 (3.754878) | 10.214158 / 10.191392 (0.022766) | 0.140590 / 0.680424 (-0.539834) | 0.014729 / 0.534201 (-0.519472) | 0.300557 / 0.579283 (-0.278726) | 0.122772 / 0.434364 (-0.311592) | 0.344618 / 0.540337 (-0.195720) | 0.460064 / 1.386936 (-0.926872) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be5cff059a2a5b89d7a97bc04739c4919ab8089f \"CML watermark\")\n" ]
"2024-08-26T04:53:59"
"2024-08-26T06:15:33"
"2024-08-26T06:09:42"
MEMBER
null
Test get_dataset_config_info with non-existing/gated/private dataset.

Related to:
- #7109

See also:
- https://github.com/huggingface/dataset-viewer/pull/3037 (commit https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb)
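As an illustration of the kind of test being added, a minimal sketch; the repo id is hypothetical, and since the concrete exception class varies across `datasets` versions, a broad `Exception` is used here:

```python
import pytest
from datasets import get_dataset_config_info

def test_get_dataset_config_info_raises_for_missing_dataset():
    # Without valid credentials, non-existing, gated and private datasets
    # should all raise rather than silently resolve.
    with pytest.raises(Exception):
        get_dataset_config_info("non-existing-user/non-existing-dataset")
```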
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7124/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7124/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7124.diff", "html_url": "https://github.com/huggingface/datasets/pull/7124", "merged_at": "2024-08-26T06:09:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/7124.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7124" }
true
https://api.github.com/repos/huggingface/datasets/issues/7123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7123/comments
https://api.github.com/repos/huggingface/datasets/issues/7123/events
https://github.com/huggingface/datasets/issues/7123
2,484,003,937
I_kwDODunzps6UDuRh
7,123
Make dataset viewer more flexible in displaying metadata alongside images
{ "avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4", "events_url": "https://api.github.com/users/egrace479/events{/privacy}", "followers_url": "https://api.github.com/users/egrace479/followers", "following_url": "https://api.github.com/users/egrace479/following{/other_user}", "gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/egrace479", "id": 38985481, "login": "egrace479", "node_id": "MDQ6VXNlcjM4OTg1NDgx", "organizations_url": "https://api.github.com/users/egrace479/orgs", "received_events_url": "https://api.github.com/users/egrace479/received_events", "repos_url": "https://api.github.com/users/egrace479/repos", "site_admin": false, "starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egrace479/subscriptions", "type": "User", "url": "https://api.github.com/users/egrace479" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2024-08-23T22:56:01"
"2024-08-23T23:01:42"
null
NONE
null
### Feature request

To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this requires the CSVs to be contained in the same folders as the images, since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets, to avoid the need to put a `metadata.csv` into each image directory, where it is not as easily accessed.

### Motivation

When creating datasets with multiple subsets, I can't get the images to display alongside their associated metadata (it's usually one or the other that shows up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).

It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).

### Your contribution

I can make a suggestion for one approach to address the issue: even if the filename could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the backend functionality looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also checking that it has a `file_name` column?). Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work:

```yaml
configs:
  - config_name: <image subset>
    data_files:
      - <image-metadata>.csv
      - <path/to/images>/*.jpg
```

I'd also be happy to look at whatever solution is decided upon and contribute to the ideation. Thanks for your time and consideration! The dataset viewer really is fabulous when it works :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7123/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7123/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7122/comments
https://api.github.com/repos/huggingface/datasets/issues/7122/events
https://github.com/huggingface/datasets/issues/7122
2,482,491,258
I_kwDODunzps6T9896
7,122
[interleave_dataset] sample batches from a single source at a time
{ "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "events_url": "https://api.github.com/users/memray/events{/privacy}", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/memray", "id": 4197249, "login": "memray", "node_id": "MDQ6VXNlcjQxOTcyNDk=", "organizations_url": "https://api.github.com/users/memray/orgs", "received_events_url": "https://api.github.com/users/memray/received_events", "repos_url": "https://api.github.com/users/memray/repos", "site_admin": false, "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "type": "User", "url": "https://api.github.com/users/memray" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2024-08-23T07:21:15"
"2024-08-23T07:21:15"
null
NONE
null
### Feature request

`interleave_datasets` and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner, such that each batch only contains data from a single source?

### Motivation

Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source-homogeneous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?

### Your contribution

I can contribute a PR. But I wonder what the best way is to test its correctness and robustness.
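A minimal sketch of the proposed behavior outside the library; the name and semantics follow the issue's proposal and are not an existing `datasets` API:

```python
import numpy as np

def iter_homogeneous_batches(sources, batch_size, probabilities=None, seed=0):
    """Yield batches where every batch comes from a single source.

    `sources` is a list of iterators; which source provides the next batch is
    sampled the same way RandomlyCyclingMultiSourcesExamplesIterable samples
    examples, but at batch granularity."""
    rng = np.random.default_rng(seed)
    while True:
        idx = rng.choice(len(sources), p=probabilities)
        try:
            yield [next(sources[idx]) for _ in range(batch_size)]
        except StopIteration:
            return  # simplest stopping strategy: end when any source runs out

# usage sketch: two toy sources, uniform source sampling
batches = iter_homogeneous_batches([iter(range(8)), iter(range(100, 108))], batch_size=4)
for batch in batches:
    print(batch)  # e.g. [0, 1, 2, 3] or [100, 101, 102, 103]
```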
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7122/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7121/comments
https://api.github.com/repos/huggingface/datasets/issues/7121/events
https://github.com/huggingface/datasets/pull/7121
2,480,978,483
PR_kwDODunzps55Iukl
7,121
Fix typed examples iterable state dict
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7121). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005273 / 0.011353 (-0.006079) | 0.003789 / 0.011008 (-0.007219) | 0.062811 / 0.038508 (0.024303) | 0.031055 / 0.023109 (0.007946) | 0.238663 / 0.275898 (-0.037235) | 0.269706 / 0.323480 (-0.053774) | 0.004105 / 0.007986 (-0.003881) | 0.002781 / 0.004328 (-0.001547) | 0.048800 / 0.004250 (0.044549) | 0.045759 / 0.037052 (0.008707) | 0.260467 / 0.258489 (0.001978) | 0.288800 / 0.293841 (-0.005041) | 0.029341 / 0.128546 (-0.099205) | 0.012413 / 0.075646 (-0.063233) | 0.203493 / 0.419271 (-0.215778) | 0.037270 / 0.043533 (-0.006263) | 0.246130 / 0.255139 (-0.009009) | 0.269046 / 0.283200 (-0.014154) | 0.017788 / 0.141683 (-0.123895) | 1.175537 / 1.452155 (-0.276617) | 1.197909 / 1.492716 (-0.294808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098258 / 0.018006 (0.080251) | 0.305283 / 0.000490 (0.304794) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019066 / 0.037411 (-0.018345) | 0.062723 / 0.014526 (0.048197) | 0.075827 / 0.176557 (-0.100730) | 0.121371 / 0.737135 (-0.615764) | 0.075167 / 0.296338 (-0.221171) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296650 / 0.215209 (0.081441) | 2.910593 / 2.077655 (0.832939) | 1.510798 / 1.504120 (0.006678) | 1.375461 / 1.541195 (-0.165733) | 1.386423 / 1.468490 (-0.082067) | 0.743818 / 4.584777 (-3.840959) | 2.437848 / 3.745712 (-1.307864) | 2.943661 / 5.269862 (-2.326201) | 1.888977 / 4.565676 (-2.676699) | 0.080126 / 0.424275 (-0.344149) | 0.005168 / 0.007607 (-0.002439) | 0.348699 / 0.226044 (0.122654) | 3.477686 / 2.268929 (1.208758) | 1.901282 / 55.444624 (-53.543343) | 1.574847 / 6.876477 (-5.301629) | 1.594359 / 2.142072 (-0.547714) | 0.793415 / 4.805227 (-4.011812) | 0.133982 / 6.500664 (-6.366682) | 0.042435 / 0.075469 (-0.033034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963057 / 1.841788 (-0.878731) | 11.597217 / 8.074308 (3.522909) | 9.285172 / 10.191392 (-0.906220) | 0.130510 / 0.680424 (-0.549914) | 0.013964 / 0.534201 (-0.520237) | 0.299334 / 0.579283 (-0.279949) | 0.267775 / 0.434364 (-0.166589) | 0.336922 / 0.540337 (-0.203416) | 0.430493 / 1.386936 (-0.956443) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003941 / 0.011008 (-0.007067) | 0.050204 / 0.038508 (0.011696) | 0.032275 / 0.023109 (0.009166) | 0.271076 / 0.275898 (-0.004822) | 0.295565 / 0.323480 (-0.027914) | 0.004393 / 0.007986 (-0.003592) | 0.002881 / 0.004328 (-0.001447) | 0.048032 / 0.004250 (0.043782) | 0.040430 / 0.037052 (0.003378) | 0.281631 / 0.258489 (0.023142) | 0.317964 / 0.293841 (0.024124) | 0.032318 / 0.128546 (-0.096228) | 0.012348 / 0.075646 (-0.063298) | 0.060336 / 0.419271 (-0.358936) | 0.034148 / 0.043533 (-0.009385) | 0.273803 / 0.255139 (0.018664) | 0.292068 / 0.283200 (0.008868) | 0.018693 / 0.141683 (-0.122990) | 1.155704 / 1.452155 (-0.296451) | 1.192245 / 1.492716 (-0.300472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097588 / 0.018006 (0.079582) | 0.311760 / 0.000490 (0.311270) | 0.000232 / 0.000200 (0.000032) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022825 / 0.037411 (-0.014586) | 0.077698 / 0.014526 (0.063172) | 0.088567 / 0.176557 (-0.087989) | 0.129689 / 0.737135 (-0.607446) | 0.090626 / 0.296338 (-0.205712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299791 / 0.215209 (0.084582) | 2.978558 / 2.077655 (0.900903) | 1.594095 / 1.504120 (0.089975) | 1.468476 / 1.541195 (-0.072719) | 1.482880 / 1.468490 (0.014390) | 0.717553 / 4.584777 (-3.867224) | 0.977501 / 3.745712 (-2.768211) | 2.954289 / 5.269862 (-2.315572) | 1.895473 / 4.565676 (-2.670203) | 0.078452 / 0.424275 (-0.345824) | 0.005508 / 0.007607 (-0.002099) | 0.350882 / 0.226044 (0.124837) | 3.480878 / 2.268929 (1.211949) | 1.965240 / 55.444624 (-53.479385) | 1.672448 / 6.876477 (-5.204029) | 1.674319 / 2.142072 (-0.467753) | 0.789049 / 4.805227 (-4.016178) | 0.132715 / 6.500664 (-6.367949) | 0.041081 / 0.075469 (-0.034388) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022953 / 1.841788 (-0.818834) | 12.123349 / 8.074308 (4.049041) | 10.336115 / 10.191392 (0.144723) | 0.142233 / 0.680424 (-0.538191) | 0.015416 / 0.534201 (-0.518785) | 0.303088 / 0.579283 (-0.276195) | 0.124942 / 0.434364 (-0.309422) | 0.338454 / 0.540337 (-0.201883) | 0.460039 / 1.386936 (-0.926897) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3813ce846e52824b38e53895810682f0a496a2e3 \"CML watermark\")\n" ]
"2024-08-22T14:45:03"
"2024-08-22T14:54:56"
"2024-08-22T14:49:06"
MEMBER
null
fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7121/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7121/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7121.diff", "html_url": "https://github.com/huggingface/datasets/pull/7121", "merged_at": "2024-08-22T14:49:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/7121.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7121" }
true
https://api.github.com/repos/huggingface/datasets/issues/7120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7120/comments
https://api.github.com/repos/huggingface/datasets/issues/7120/events
https://github.com/huggingface/datasets/pull/7120
2,480,674,237
PR_kwDODunzps55HrBy
7,120
don't mention the script if trust_remote_code=False
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7120). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Note that in this case, we could even expect this kind of message:\r\n\r\n```\r\nDataFilesNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv'\r\n```\r\n\r\nWe generally return `DataFilesNotFoundError` for this case (data files passed as an argument), not sure why it does not occur with this dataset.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005484 / 0.011353 (-0.005869) | 0.003932 / 0.011008 (-0.007077) | 0.063177 / 0.038508 (0.024669) | 0.031311 / 0.023109 (0.008202) | 0.254881 / 0.275898 (-0.021017) | 0.273818 / 0.323480 (-0.049662) | 0.003312 / 0.007986 (-0.004674) | 0.003251 / 0.004328 (-0.001078) | 0.049307 / 0.004250 (0.045057) | 0.046189 / 0.037052 (0.009137) | 0.268182 / 0.258489 (0.009693) | 0.303659 / 0.293841 (0.009818) | 0.029312 / 0.128546 (-0.099234) | 0.013649 / 0.075646 (-0.061997) | 0.204240 / 0.419271 (-0.215032) | 0.036607 / 0.043533 (-0.006926) | 0.252232 / 0.255139 (-0.002907) | 0.271960 / 0.283200 (-0.011239) | 0.018043 / 0.141683 (-0.123640) | 1.148601 / 1.452155 (-0.303553) | 1.212313 / 1.492716 (-0.280403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096354 / 0.018006 (0.078348) | 0.302575 / 0.000490 (0.302085) | 0.000246 / 0.000200 (0.000046) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019023 / 0.037411 (-0.018389) | 0.064821 / 0.014526 (0.050295) | 0.077046 / 0.176557 (-0.099510) | 0.122896 / 0.737135 (-0.614239) | 0.078300 / 0.296338 (-0.218038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283681 / 0.215209 (0.068472) | 2.801473 / 2.077655 (0.723818) | 1.505611 / 1.504120 (0.001491) | 1.385832 / 1.541195 (-0.155363) | 1.430284 / 1.468490 (-0.038206) | 0.752041 / 4.584777 (-3.832736) | 2.406138 / 3.745712 (-1.339574) | 2.941370 / 5.269862 (-2.328492) | 1.887681 / 4.565676 (-2.677996) | 0.078693 / 0.424275 (-0.345582) | 0.005266 / 0.007607 (-0.002341) | 0.336484 / 0.226044 (0.110440) | 3.372262 / 2.268929 (1.103334) | 1.861541 / 55.444624 (-53.583084) | 1.572782 / 6.876477 (-5.303694) | 1.592387 / 2.142072 (-0.549685) | 0.796557 / 4.805227 (-4.008670) | 0.134923 / 6.500664 (-6.365741) | 0.043007 / 0.075469 (-0.032462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982690 / 1.841788 (-0.859097) | 11.700213 / 8.074308 (3.625905) | 9.122642 / 10.191392 (-1.068750) | 0.141430 / 0.680424 (-0.538994) | 0.014971 / 0.534201 (-0.519230) | 0.300938 / 0.579283 (-0.278345) | 0.268315 / 0.434364 (-0.166049) | 0.339891 / 0.540337 (-0.200447) | 0.428302 / 1.386936 (-0.958634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005732 / 0.011353 (-0.005621) | 0.003905 / 0.011008 (-0.007103) | 0.049900 / 0.038508 (0.011392) | 0.032255 / 0.023109 (0.009145) | 0.267929 / 0.275898 (-0.007969) | 0.295595 / 0.323480 (-0.027885) | 0.004437 / 0.007986 (-0.003549) | 0.003008 / 0.004328 (-0.001321) | 0.048357 / 0.004250 (0.044107) | 0.040118 / 0.037052 (0.003066) | 0.282859 / 0.258489 (0.024370) | 0.319243 / 0.293841 (0.025402) | 0.032793 / 0.128546 (-0.095754) | 0.012091 / 0.075646 (-0.063555) | 0.060082 / 0.419271 (-0.359189) | 0.034426 / 0.043533 
(-0.009107) | 0.273668 / 0.255139 (0.018529) | 0.292110 / 0.283200 (0.008910) | 0.019002 / 0.141683 (-0.122680) | 1.165850 / 1.452155 (-0.286304) | 1.209195 / 1.492716 (-0.283521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099267 / 0.018006 (0.081261) | 0.316746 / 0.000490 (0.316256) | 0.000267 / 0.000200 (0.000067) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023117 / 0.037411 (-0.014294) | 0.076691 / 0.014526 (0.062165) | 0.092190 / 0.176557 (-0.084367) | 0.130620 / 0.737135 (-0.606515) | 0.091068 / 0.296338 (-0.205271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296419 / 0.215209 (0.081210) | 2.933964 / 2.077655 (0.856309) | 1.595015 / 1.504120 (0.090895) | 1.467610 / 1.541195 (-0.073585) | 1.487386 / 1.468490 (0.018896) | 0.730927 / 4.584777 (-3.853850) | 0.971276 / 3.745712 (-2.774436) | 2.969735 / 5.269862 (-2.300127) | 1.916126 / 4.565676 (-2.649550) | 0.078863 / 0.424275 (-0.345412) | 0.005506 / 0.007607 (-0.002101) | 0.345191 / 0.226044 (0.119147) | 3.407481 / 2.268929 (1.138553) | 1.955966 / 55.444624 (-53.488659) | 1.677365 / 6.876477 (-5.199112) | 1.716052 / 2.142072 (-0.426020) | 0.797208 / 4.805227 (-4.008020) | 0.132853 / 6.500664 (-6.367811) | 0.041691 / 0.075469 (-0.033778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.042331 / 1.841788 (-0.799456) | 12.186080 / 8.074308 (4.111772) | 10.288961 / 10.191392 (0.097569) | 0.141897 / 0.680424 (-0.538526) | 0.015321 / 0.534201 (-0.518880) | 0.308302 / 0.579283 (-0.270981) | 0.123292 / 0.434364 (-0.311072) | 0.348515 / 0.540337 (-0.191823) | 0.473045 / 1.386936 (-0.913891) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cedffa52879ebc5e4df43f0bcf8660ee7229f0dc \"CML watermark\")\n" ]
"2024-08-22T12:32:32"
"2024-08-22T14:39:52"
"2024-08-22T14:33:52"
CONTRIBUTOR
null
See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is:

```
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't find 'Omega02gdfdd/bioclip-demo-zero-shot-mistakes' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```

The issue there is that a `configs` parameter is set in the README, while the mentioned data file (`data.csv`) does not exist.
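A minimal reproduction of the behavior this PR targets might look like the following (a sketch: the dataset name is the one from the linked Hub repo, and the exact exception type depends on the `datasets` version):

```py
from datasets import load_dataset

# With trust_remote_code unset/False, loading a dataset whose README
# `configs` section points at a missing data file should now report the
# missing file rather than a missing loading script.
load_dataset("Omega02gdfdd/bioclip-demo-zero-shot-mistakes")
# expected: FileNotFoundError / DataFilesNotFoundError mentioning data.csv,
# with no reference to a dataset script
```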
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7120/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7120.diff", "html_url": "https://github.com/huggingface/datasets/pull/7120", "merged_at": "2024-08-22T14:33:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7120.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7120" }
true
https://api.github.com/repos/huggingface/datasets/issues/7119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7119/comments
https://api.github.com/repos/huggingface/datasets/issues/7119/events
https://github.com/huggingface/datasets/pull/7119
2,477,766,493
PR_kwDODunzps54-GjY
7,119
Install transformers with numpy-2 CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005156 / 0.011353 (-0.006197) | 0.003365 / 0.011008 (-0.007643) | 0.063451 / 0.038508 (0.024943) | 0.029510 / 0.023109 (0.006401) | 0.244825 / 0.275898 (-0.031074) | 0.265157 / 0.323480 (-0.058323) | 0.004239 / 0.007986 (-0.003747) | 0.002732 / 0.004328 (-0.001596) | 0.050412 / 0.004250 (0.046162) | 0.043608 / 0.037052 (0.006556) | 0.256635 / 0.258489 (-0.001854) | 0.277472 / 0.293841 (-0.016369) | 0.029329 / 0.128546 (-0.099217) | 0.012318 / 0.075646 (-0.063329) | 0.204751 / 0.419271 (-0.214520) | 0.036468 / 0.043533 (-0.007065) | 0.246773 / 0.255139 (-0.008366) | 0.263932 / 0.283200 (-0.019268) | 0.017053 / 0.141683 (-0.124629) | 1.173249 / 1.452155 (-0.278905) | 1.234186 / 1.492716 (-0.258531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092398 / 0.018006 (0.074391) | 0.309473 / 0.000490 (0.308983) | 0.000220 / 0.000200 (0.000020) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018553 / 0.037411 (-0.018858) | 0.062546 / 0.014526 (0.048020) | 0.073943 / 0.176557 (-0.102613) | 0.120498 / 0.737135 (-0.616638) | 0.075185 / 0.296338 (-0.221153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296899 / 0.215209 (0.081690) | 2.919088 / 2.077655 (0.841433) | 1.533146 / 1.504120 (0.029026) | 1.395441 / 1.541195 (-0.145754) | 1.399089 / 1.468490 (-0.069401) | 0.742750 / 4.584777 (-3.842027) | 2.390317 / 3.745712 (-1.355395) | 2.883166 / 5.269862 (-2.386695) | 1.854003 / 4.565676 (-2.711674) | 0.077140 / 0.424275 (-0.347136) | 0.005176 / 0.007607 (-0.002432) | 0.349391 / 0.226044 (0.123347) | 3.466043 / 2.268929 (1.197114) | 1.870619 / 55.444624 (-53.574005) | 1.559173 / 6.876477 (-5.317303) | 1.605480 / 2.142072 (-0.536592) | 0.786753 / 4.805227 (-4.018474) | 0.134869 / 6.500664 (-6.365795) | 0.042176 / 0.075469 (-0.033293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954256 / 1.841788 (-0.887532) | 11.194758 / 8.074308 (3.120449) | 9.129670 / 10.191392 (-1.061722) | 0.138318 / 0.680424 (-0.542106) | 0.014299 / 0.534201 (-0.519902) | 0.303704 / 0.579283 (-0.275579) | 0.262513 / 0.434364 (-0.171851) | 0.346539 / 0.540337 (-0.193798) | 0.429524 / 1.386936 (-0.957412) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005692 / 0.011353 (-0.005661) | 0.003423 / 0.011008 (-0.007586) | 0.050618 / 0.038508 (0.012110) | 0.031053 / 0.023109 (0.007944) | 0.275901 / 0.275898 (0.000003) | 0.294404 / 0.323480 (-0.029076) | 0.004303 / 0.007986 (-0.003682) | 0.002728 / 0.004328 (-0.001600) | 0.049757 / 0.004250 (0.045507) | 0.039997 / 0.037052 (0.002945) | 0.287291 / 0.258489 (0.028802) | 0.319186 / 0.293841 (0.025345) | 0.032558 / 0.128546 (-0.095988) | 0.012088 / 0.075646 (-0.063558) | 0.060746 / 0.419271 (-0.358525) | 0.034046 / 0.043533 (-0.009486) | 0.276170 / 0.255139 (0.021031) | 0.293673 / 0.283200 (0.010474) | 0.018018 / 0.141683 (-0.123665) | 1.158453 / 1.452155 (-0.293701) | 1.198599 / 1.492716 (-0.294118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093134 / 0.018006 (0.075127) | 0.304511 / 0.000490 (0.304021) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022991 / 0.037411 (-0.014421) | 0.077548 / 0.014526 (0.063022) | 0.087887 / 0.176557 (-0.088670) | 0.131786 / 0.737135 (-0.605349) | 0.088747 / 0.296338 (-0.207591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302811 / 0.215209 (0.087602) | 2.959276 / 2.077655 (0.881621) | 1.591348 / 1.504120 (0.087229) | 1.464731 / 1.541195 (-0.076464) | 1.474112 / 1.468490 (0.005622) | 0.741573 / 4.584777 (-3.843204) | 0.959229 / 3.745712 (-2.786483) | 2.895750 / 5.269862 (-2.374111) | 1.896051 / 4.565676 (-2.669625) | 0.079012 / 0.424275 (-0.345264) | 0.005494 / 0.007607 (-0.002113) | 0.355699 / 0.226044 (0.129655) | 3.524833 / 2.268929 (1.255905) | 1.972358 / 55.444624 (-53.472266) | 1.667249 / 6.876477 (-5.209228) | 1.658635 / 2.142072 (-0.483438) | 0.813184 / 4.805227 (-3.992044) | 0.134226 / 6.500664 (-6.366438) | 0.041087 / 0.075469 (-0.034382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038963 / 1.841788 (-0.802824) | 11.785835 / 8.074308 (3.711526) | 10.397027 / 10.191392 (0.205635) | 0.141748 / 0.680424 (-0.538676) | 0.014738 / 0.534201 (-0.519463) | 0.300056 / 0.579283 (-0.279227) | 0.127442 / 0.434364 (-0.306922) | 0.345013 / 0.540337 (-0.195324) | 0.449598 / 1.386936 (-0.937338) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70bac27ef861b2b11f581a291a6b76adeee24f98 \"CML watermark\")\n" ]
"2024-08-21T11:14:59"
"2024-08-21T11:42:35"
"2024-08-21T11:36:50"
MEMBER
null
Install transformers with numpy-2 CI.

Note that transformers no longer pins numpy < 2 since transformers-4.43.0:
- https://github.com/huggingface/transformers/pull/32018
- https://github.com/huggingface/transformers/releases/tag/v4.43.0
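A quick sanity check for the resulting CI environment could look like this (a sketch, not taken from the PR):

```py
import numpy
import transformers

# On the numpy-2 CI job, numpy should resolve to a 2.x release and
# transformers to >= 4.43.0, which dropped the numpy < 2 pin.
assert int(numpy.__version__.split(".")[0]) >= 2
print(numpy.__version__, transformers.__version__)
```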
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7119/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7119/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7119.diff", "html_url": "https://github.com/huggingface/datasets/pull/7119", "merged_at": "2024-08-21T11:36:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/7119.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7119" }
true
https://api.github.com/repos/huggingface/datasets/issues/7118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7118/comments
https://api.github.com/repos/huggingface/datasets/issues/7118/events
https://github.com/huggingface/datasets/pull/7118
2,477,676,893
PR_kwDODunzps549yu4
7,118
Allow numpy-2.1 and test it without audio extra
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7118). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005674 / 0.011353 (-0.005679) | 0.003919 / 0.011008 (-0.007089) | 0.062665 / 0.038508 (0.024157) | 0.031750 / 0.023109 (0.008641) | 0.234809 / 0.275898 (-0.041089) | 0.264454 / 0.323480 (-0.059026) | 0.004265 / 0.007986 (-0.003720) | 0.002757 / 0.004328 (-0.001572) | 0.048921 / 0.004250 (0.044671) | 0.050765 / 0.037052 (0.013713) | 0.246185 / 0.258489 (-0.012305) | 0.287011 / 0.293841 (-0.006829) | 0.030754 / 0.128546 (-0.097792) | 0.012368 / 0.075646 (-0.063278) | 0.203841 / 0.419271 (-0.215431) | 0.037579 / 0.043533 (-0.005953) | 0.238165 / 0.255139 (-0.016974) | 0.264375 / 0.283200 (-0.018824) | 0.018663 / 0.141683 (-0.123020) | 1.143897 / 1.452155 (-0.308258) | 1.218130 / 1.492716 (-0.274586) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102112 / 0.018006 (0.084106) | 0.303214 / 0.000490 (0.302724) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019401 / 0.037411 (-0.018010) | 0.062444 / 0.014526 (0.047919) | 0.076497 / 0.176557 (-0.100060) | 0.122309 / 0.737135 (-0.614826) | 0.077178 / 0.296338 (-0.219160) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282931 / 0.215209 (0.067722) | 2.783587 / 2.077655 (0.705932) | 1.464076 / 1.504120 (-0.040044) | 1.333912 / 1.541195 (-0.207282) | 1.367391 / 1.468490 (-0.101099) | 0.736702 / 4.584777 (-3.848075) | 2.413625 / 3.745712 (-1.332087) | 2.949549 / 5.269862 (-2.320313) | 1.910308 / 4.565676 (-2.655369) | 0.077419 / 0.424275 (-0.346856) | 0.005159 / 0.007607 (-0.002448) | 0.345595 / 0.226044 (0.119551) | 3.433205 / 2.268929 (1.164277) | 1.844443 / 55.444624 (-53.600181) | 1.527475 / 6.876477 (-5.349002) | 1.544315 / 2.142072 (-0.597758) | 0.803942 / 4.805227 (-4.001285) | 0.134131 / 6.500664 (-6.366533) | 0.042638 / 0.075469 (-0.032831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975158 / 1.841788 (-0.866629) | 11.726187 / 8.074308 (3.651879) | 9.403347 / 10.191392 (-0.788045) | 0.131583 / 0.680424 (-0.548840) | 0.014358 / 0.534201 (-0.519843) | 0.301360 / 0.579283 (-0.277923) | 0.266529 / 0.434364 (-0.167835) | 0.341669 / 0.540337 (-0.198668) | 0.425751 / 1.386936 (-0.961186) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005911 / 0.011353 (-0.005442) | 0.004093 / 0.011008 (-0.006915) | 0.049936 / 0.038508 (0.011428) | 0.031828 / 0.023109 (0.008719) | 0.273874 / 0.275898 (-0.002025) | 0.296871 / 0.323480 (-0.026609) | 0.004470 / 0.007986 (-0.003516) | 0.002902 / 0.004328 (-0.001426) | 0.048848 / 0.004250 (0.044597) | 0.042320 / 0.037052 (0.005268) | 0.287957 / 0.258489 (0.029468) | 0.321033 / 0.293841 (0.027192) | 0.032996 / 0.128546 (-0.095550) | 0.012244 / 0.075646 (-0.063403) | 0.060493 / 0.419271 (-0.358779) | 0.034630 / 0.043533 (-0.008902) | 0.277254 / 0.255139 (0.022115) | 0.292822 / 0.283200 (0.009623) | 0.017966 / 0.141683 (-0.123717) | 1.167432 / 1.452155 (-0.284723) | 1.231837 / 1.492716 (-0.260880) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099970 / 0.018006 (0.081964) | 0.313240 / 0.000490 (0.312750) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022928 / 0.037411 (-0.014483) | 0.077058 / 0.014526 (0.062532) | 0.090147 / 0.176557 (-0.086409) | 0.129416 / 0.737135 (-0.607720) | 0.091021 / 0.296338 (-0.205318) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300697 / 0.215209 (0.085488) | 2.944649 / 2.077655 (0.866995) | 1.609106 / 1.504120 (0.104986) | 1.483762 / 1.541195 (-0.057433) | 1.519433 / 1.468490 (0.050943) | 0.714129 / 4.584777 (-3.870648) | 0.991848 / 3.745712 (-2.753864) | 2.966340 / 5.269862 (-2.303521) | 1.905427 / 4.565676 (-2.660249) | 0.079041 / 0.424275 (-0.345234) | 0.005671 / 0.007607 (-0.001936) | 0.356037 / 0.226044 (0.129993) | 3.504599 / 2.268929 (1.235670) | 1.979207 / 55.444624 (-53.465417) | 1.695030 / 6.876477 (-5.181447) | 1.703978 / 2.142072 (-0.438095) | 0.800871 / 4.805227 (-4.004357) | 0.134414 / 6.500664 (-6.366250) | 0.041743 / 0.075469 (-0.033726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029879 / 1.841788 (-0.811909) | 12.132252 / 8.074308 (4.057944) | 10.596576 / 10.191392 (0.405184) | 0.132237 / 0.680424 (-0.548187) | 0.016239 / 0.534201 (-0.517962) | 0.301831 / 0.579283 (-0.277452) | 0.127966 / 0.434364 (-0.306398) | 0.341081 / 0.540337 (-0.199256) | 0.448996 / 1.386936 (-0.937940) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0a0fa48a68c3502edfa50273b881f909e4e6e70c \"CML watermark\")\n" ]
"2024-08-21T10:29:35"
"2024-08-21T11:05:03"
"2024-08-21T10:58:15"
MEMBER
null
Allow numpy-2.1 and test it without audio extra.

This PR reverts:
- #7114

Note that audio extra tests can be included again with numpy-2.1 once the next numba version (0.61.0) is released.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7118/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7118/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7118.diff", "html_url": "https://github.com/huggingface/datasets/pull/7118", "merged_at": "2024-08-21T10:58:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/7118.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7118" }
true
https://api.github.com/repos/huggingface/datasets/issues/7117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7117/comments
https://api.github.com/repos/huggingface/datasets/issues/7117/events
https://github.com/huggingface/datasets/issues/7117
2,476,555,659
I_kwDODunzps6TnT2L
7,117
Audio dataset loads everything in RAM and is very slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4", "events_url": "https://api.github.com/users/Jourdelune/events{/privacy}", "followers_url": "https://api.github.com/users/Jourdelune/followers", "following_url": "https://api.github.com/users/Jourdelune/following{/other_user}", "gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jourdelune", "id": 64205064, "login": "Jourdelune", "node_id": "MDQ6VXNlcjY0MjA1MDY0", "organizations_url": "https://api.github.com/users/Jourdelune/orgs", "received_events_url": "https://api.github.com/users/Jourdelune/received_events", "repos_url": "https://api.github.com/users/Jourdelune/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions", "type": "User", "url": "https://api.github.com/users/Jourdelune" }
[]
open
false
null
[]
null
[ "Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\r\n return {\"transcribed\": True}\r\n```\r\n\r\nPS: no need to iter on the dataset to trigger the `map` function on a `Dataset` - `map` runs directly when it's called (contrary to `IterableDataset` taht you can get when streaming, which are lazy)", "No, that doesn't change anything, I manage to solve this problem by setting with_indices=True in the map function and directly retrieving the audio corresponding to the index.\r\n```py\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\nds = load_dataset(\"WaveGenAI/audios2\", split=\"train[:50]\")\r\n\r\n\r\n# map the dataset\r\ndef transcribe_audio(row, idx):\r\n audio = ds[idx][\"audio\"] # get the audio but do nothing with it\r\n row[\"transcribed\"] = True\r\n return row\r\n\r\n\r\ntime1 = time.time()\r\nds = ds.map(\r\n transcribe_audio, with_indices=True\r\n) # set low writer_batch_size to avoid memory issues\r\n\r\nfor row in ds:\r\n pass # do nothing, just iterate to trigger the map function\r\n\r\nprint(f\"Time taken: {time.time() - time1:.2f} seconds\")\r\n```", "Hmm maybe accessing `row[\"audio\"]` makes `map()` reencode what's inside `row[\"audio\"]` in case there are in-place modifications" ]
"2024-08-20T21:18:12"
"2024-08-26T13:11:55"
null
NONE
null
Hello, I'm working with an audio dataset. I want to transcribe the audio the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage gets too high, the program crashes. To work around this I set `writer_batch_size` to 10, but then mapping the dataset is extremely slow. To illustrate: on 50 examples, with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset; without it, processing takes about ten seconds, but then the process hangs (I assume it is writing the dataset and therefore suffers from the same problem as with `writer_batch_size`).

### Steps to reproduce the bug

High RAM usage but fast (though actually slow when saving the dataset):

```py
from datasets import load_dataset
import time

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")


# map the dataset
def transcribe_audio(row):
    audio = row["audio"]  # get the audio but do nothing with it
    row["transcribed"] = True
    return row


time1 = time.time()
ds = ds.map(transcribe_audio)

for row in ds:
    pass  # do nothing, just iterate to trigger the map function

print(f"Time taken: {time.time() - time1:.2f} seconds")
```

Low RAM usage but very, very slow:

```py
from datasets import load_dataset
import time

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")


# map the dataset
def transcribe_audio(row):
    audio = row["audio"]  # get the audio but do nothing with it
    row["transcribed"] = True
    return row


time1 = time.time()
ds = ds.map(
    transcribe_audio, writer_batch_size=10
)  # set a low writer_batch_size to avoid memory issues

for row in ds:
    pass  # do nothing, just iterate to trigger the map function

print(f"Time taken: {time.time() - time1:.2f} seconds")
```

### Expected behavior

The processing should be much faster: on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio).

### Environment info

- `datasets` version: 2.21.0
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.10.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2024.6.1

# Extra

The dataset has been generated using `audiofolder`, so I don't think anything specific in my code is causing this problem.

```py
import argparse

from datasets import load_dataset

parser = argparse.ArgumentParser()
parser.add_argument("--folder", help="folder path", default="/media/works/test/")
args = parser.parse_args()

dataset = load_dataset("audiofolder", data_dir=args.folder)

# push the dataset to hub
dataset.push_to_hub("WaveGenAI/audios")
```

Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` that causes problems: `row["transcribed"] = True` alone does nothing, and `audio = row["audio"]` alone sometimes causes problems, sometimes not.
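One workaround worth noting alongside this report (a sketch, not from the issue itself): since `map()` re-encodes decoded audio when writing examples back, disabling automatic decoding with `Audio(decode=False)` avoids both the RAM spike and the slow re-encode, as long as the map function does not need the decoded arrays:

```py
from datasets import Audio, load_dataset

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")

# Keep the audio column as raw bytes/paths instead of decoded arrays,
# so map() does not have to re-encode it when writing the new dataset.
ds = ds.cast_column("audio", Audio(decode=False))

def transcribe_audio(row):
    row["transcribed"] = True
    return row

ds = ds.map(transcribe_audio)
```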
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7117/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7116/comments
https://api.github.com/repos/huggingface/datasets/issues/7116/events
https://github.com/huggingface/datasets/issues/7116
2,475,522,721
I_kwDODunzps6TjXqh
7,116
datasets cannot handle nested JSON if features are given
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420" }
[]
open
false
null
[]
null
[ "Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```" ]
"2024-08-20T12:27:49"
"2024-08-22T15:00:16"
null
NONE
null
### Describe the bug

I have a JSON file named temp.json:

```json
{"ref1": "ABC", "ref2": "DEF", "cuts": [{"cut1": 3, "cut2": 5}]}
```

I want to load it:

```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
    'ref1': datasets.Value('string'),
    'ref2': datasets.Value('string'),
    'cuts': datasets.Sequence({
        "cut1": datasets.Value("uint16"),
        "cut2": datasets.Value("uint16")
    })
}))
```

The above code does not work. However, I can load it without giving features:

```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```

Is it possible to load the integers as uint16 to save some memory?

### Steps to reproduce the bug

As in the bug description.

### Expected behavior

The data are loaded and the integers are uint16.

### Environment info

- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
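Following the list-of-dicts form suggested in the comment above, a complete sketch that also verifies the resulting dtype might look like this (the file path and field names are taken from the report):

```python
import datasets

features = datasets.Features({
    "ref1": datasets.Value("string"),
    "ref2": datasets.Value("string"),
    # a plain list instead of Sequence avoids the dict-of-lists conversion
    "cuts": [{
        "cut1": datasets.Value("uint16"),
        "cut2": datasets.Value("uint16"),
    }],
})

ds = datasets.load_dataset("json", data_files="./temp.json", features=features)

# The nested integers are stored as uint16, which answers the memory question.
print(ds["train"].features["cuts"])
```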
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7116/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7115/comments
https://api.github.com/repos/huggingface/datasets/issues/7115/events
https://github.com/huggingface/datasets/issues/7115
2,475,363,142
I_kwDODunzps6TiwtG
7,115
module 'pyarrow.lib' has no attribute 'ListViewType'
{ "avatar_url": "https://avatars.githubusercontent.com/u/175128880?v=4", "events_url": "https://api.github.com/users/neurafusionai/events{/privacy}", "followers_url": "https://api.github.com/users/neurafusionai/followers", "following_url": "https://api.github.com/users/neurafusionai/following{/other_user}", "gists_url": "https://api.github.com/users/neurafusionai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neurafusionai", "id": 175128880, "login": "neurafusionai", "node_id": "U_kgDOCnBBMA", "organizations_url": "https://api.github.com/users/neurafusionai/orgs", "received_events_url": "https://api.github.com/users/neurafusionai/received_events", "repos_url": "https://api.github.com/users/neurafusionai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neurafusionai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neurafusionai/subscriptions", "type": "User", "url": "https://api.github.com/users/neurafusionai" }
[]
open
false
null
[]
null
[ "https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked" ]
"2024-08-20T11:05:44"
"2024-08-20T12:06:20"
null
NONE
null
### Describe the bug

Code:

```python
!pip uninstall -y pyarrow
!pip install --no-cache-dir pyarrow
!pip uninstall -y pyarrow
!pip install pyarrow --no-cache-dir
!pip install --upgrade datasets transformers pyarrow
!pip install pyarrow.parquet
!pip install pyarrow-core libparquet
!pip install pyarrow --no-cache-dir
!pip install pyarrow
!pip install transformers
!pip install --upgrade datasets
!pip install datasets
!pip install pyarrow
!pip install pyarrow.lib
!pip install pyarrow.parquet
!pip install transformers

import pyarrow as pa
print(pa.__version__)

from datasets import load_dataset
import pyarrow.parquet as pq
import pyarrow.lib as lib
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
from transformers import AutoTokenizer

!pip install pyarrow-core libparquet

# Load the dataset for content moderation
dataset = load_dataset("PolyAI/banking77")  # Example dataset for customer support

# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# Tokenize the dataset
def tokenize_function(examples):
    return tokenizer(examples['text'], padding="max_length", truncation=True)

# Apply tokenization to the entire dataset
tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Check the first few tokenized samples
print(tokenized_datasets['train'][0])

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Load the model
model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    eval_strategy="epoch",
    # save_strategy="epoch",
    logging_dir="./logs",
    learning_rate=2e-5,
)

# Initialize the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)

# Train the model
trainer.train()

# Evaluate the model
trainer.evaluate()
```

This fails with:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-23-60bed3143a93> in <cell line: 22>()
     20
     21
---> 22 from datasets import load_dataset
     23 import pyarrow.parquet as pq
     24 import pyarrow.lib as lib

5 frames
/usr/local/lib/python3.10/dist-packages/datasets/__init__.py in <module>
     15 __version__ = "2.21.0"
     16
---> 17 from .arrow_dataset import Dataset
     18 from .arrow_reader import ReadInstruction
     19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder

/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py in <module>
     74
     75 from . import config
---> 76 from .arrow_reader import ArrowReader
     77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
     78 from .data_files import sanitize_patterns

/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py in <module>
     27
     28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
     30 from tqdm.contrib.concurrent import thread_map
     31

/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py in <module>
     18 # flake8: noqa
     19
---> 20 from .core import *

/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py in <module>
     31
     32 try:
---> 33     import pyarrow._parquet as _parquet
     34 except ImportError as exc:
     35     raise ImportError(

/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()

AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
```

### Steps to reproduce the bug

https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing

### Expected behavior

Looks like there is an issue with datasets and pyarrow.

### Environment info

Google Colab, Python, Hugging Face:

```
Found existing installation: pyarrow 17.0.0
Uninstalling pyarrow-17.0.0:
  Successfully uninstalled pyarrow-17.0.0
Collecting pyarrow
  Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4)
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB)
   39.9/39.9 MB 188.9 MB/s eta 0:00:00
Installing collected packages: pyarrow
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
Successfully installed pyarrow-17.0.0
WARNING: The following packages were previously imported in this runtime: [pyarrow]
You must restart the runtime in order to use newly installed versions.
```
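As the comment above notes, restarting the runtime resolved this: once pyarrow's C extensions have been imported, reinstalling the wheel has no effect on the running kernel. A hedged sketch of forcing that restart from a notebook cell — this is a common Colab idiom, not anything specific to `datasets`, and the hard kill is an assumption about what the reader finds acceptable:

```python
# After reinstalling pyarrow, the old C extensions are still loaded in this
# process; only a fresh interpreter picks up the new wheel.
import os

os.kill(os.getpid(), 9)  # Colab detects the dead kernel and restarts the runtime
```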
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7115/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7114/comments
https://api.github.com/repos/huggingface/datasets/issues/7114/events
https://github.com/huggingface/datasets/pull/7114
2,475,062,252
PR_kwDODunzps5404mO
7,114
Temporarily pin numpy<2.1 to fix CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7114). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005381 / 0.011353 (-0.005972) | 0.003929 / 0.011008 (-0.007079) | 0.062505 / 0.038508 (0.023997) | 0.031048 / 0.023109 (0.007938) | 0.244794 / 0.275898 (-0.031104) | 0.270997 / 0.323480 (-0.052483) | 0.003186 / 0.007986 (-0.004799) | 0.002750 / 0.004328 (-0.001579) | 0.048289 / 0.004250 (0.044039) | 0.042617 / 0.037052 (0.005565) | 0.262607 / 0.258489 (0.004118) | 0.281778 / 0.293841 (-0.012063) | 0.029426 / 0.128546 (-0.099120) | 0.012466 / 0.075646 (-0.063181) | 0.205221 / 0.419271 (-0.214051) | 0.035535 / 0.043533 (-0.007998) | 0.247866 / 0.255139 (-0.007273) | 0.269121 / 0.283200 (-0.014079) | 0.018557 / 0.141683 (-0.123125) | 1.147982 / 1.452155 (-0.304173) | 1.188998 / 1.492716 (-0.303718) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096550 / 0.018006 (0.078544) | 0.300497 / 0.000490 (0.300007) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019150 / 0.037411 (-0.018261) | 0.063518 / 0.014526 (0.048993) | 0.076643 / 0.176557 (-0.099914) | 0.122958 / 0.737135 (-0.614177) | 0.078511 / 0.296338 (-0.217828) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278163 / 0.215209 (0.062953) | 2.733514 / 2.077655 (0.655859) | 1.434335 / 1.504120 (-0.069785) | 1.318976 / 1.541195 (-0.222219) | 1.352498 / 1.468490 (-0.115992) | 0.717326 / 4.584777 (-3.867450) | 2.403683 / 3.745712 (-1.342029) | 2.930366 / 5.269862 (-2.339495) | 1.879938 / 4.565676 (-2.685739) | 0.079016 / 0.424275 (-0.345259) | 0.005156 / 0.007607 (-0.002451) | 0.331099 / 0.226044 (0.105055) | 3.305878 / 2.268929 (1.036949) | 1.804185 / 55.444624 (-53.640439) | 1.508785 / 6.876477 (-5.367692) | 1.570102 / 2.142072 (-0.571970) | 0.796348 / 4.805227 (-4.008879) | 0.135737 / 6.500664 (-6.364927) | 0.042902 / 0.075469 (-0.032567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979923 / 1.841788 (-0.861865) | 11.656257 / 8.074308 (3.581949) | 9.745611 / 10.191392 (-0.445781) | 0.144497 / 0.680424 (-0.535927) | 0.022457 / 0.534201 (-0.511744) | 0.317251 / 0.579283 (-0.262032) | 0.264956 / 0.434364 (-0.169408) | 0.341873 / 0.540337 (-0.198464) | 0.439734 / 1.386936 (-0.947202) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006137 / 0.011353 (-0.005216) | 0.003999 / 0.011008 (-0.007009) | 0.049994 / 0.038508 (0.011486) | 0.032401 / 0.023109 (0.009292) | 0.272210 / 0.275898 (-0.003688) | 0.296038 / 0.323480 (-0.027442) | 0.004429 / 0.007986 (-0.003557) | 0.002894 / 0.004328 (-0.001434) | 0.049296 / 0.004250 (0.045045) | 0.041390 / 0.037052 (0.004337) | 0.288951 / 0.258489 (0.030462) | 0.321733 / 0.293841 (0.027892) | 0.033553 / 0.128546 (-0.094994) | 0.012122 / 0.075646 (-0.063524) | 0.060661 / 0.419271 (-0.358610) | 0.034752 / 0.043533 (-0.008781) | 0.272866 / 0.255139 (0.017727) | 0.292436 / 0.283200 (0.009237) | 0.018822 / 0.141683 (-0.122861) | 1.167758 / 1.452155 (-0.284397) | 1.207977 / 1.492716 (-0.284739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095862 / 0.018006 (0.077855) | 0.313746 / 0.000490 (0.313256) | 0.000219 / 0.000200 (0.000020) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022940 / 0.037411 (-0.014472) | 0.076833 / 0.014526 (0.062307) | 0.088209 / 0.176557 (-0.088348) | 0.130154 / 0.737135 (-0.606981) | 0.089948 / 0.296338 (-0.206390) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305393 / 0.215209 (0.090184) | 3.001629 / 2.077655 (0.923975) | 1.629378 / 1.504120 (0.125258) | 1.496022 / 1.541195 (-0.045173) | 1.542937 / 1.468490 (0.074447) | 0.734249 / 4.584777 (-3.850528) | 0.966226 / 3.745712 (-2.779486) | 3.051986 / 5.269862 (-2.217876) | 1.954694 / 4.565676 (-2.610982) | 0.081538 / 0.424275 (-0.342737) | 0.005198 / 0.007607 (-0.002409) | 0.355837 / 0.226044 (0.129793) | 3.537454 / 2.268929 (1.268525) | 2.036157 / 55.444624 (-53.408467) | 1.719255 / 6.876477 (-5.157222) | 1.744899 / 2.142072 (-0.397174) | 0.816034 / 4.805227 (-3.989193) | 0.135650 / 6.500664 (-6.365014) | 0.042206 / 0.075469 (-0.033263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.055518 / 1.841788 (-0.786269) | 12.654622 / 8.074308 (4.580313) | 10.450807 / 10.191392 (0.259415) | 0.153567 / 0.680424 (-0.526857) | 0.016114 / 0.534201 (-0.518087) | 0.301182 / 0.579283 (-0.278101) | 0.130043 / 0.434364 (-0.304321) | 0.341289 / 0.540337 (-0.199048) | 0.434573 / 1.386936 (-0.952363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fb8ae4d2c3dda8c770fe48a40195775a7b517b6b \"CML watermark\")\n" ]
"2024-08-20T08:42:57"
"2024-08-20T09:09:27"
"2024-08-20T09:02:35"
MEMBER
null
Temporarily pin numpy<2.1 to fix CI. Fix #7111.
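For readers following along, the pin amounts to adding an upper bound to the NumPy requirement. A hedged sketch of what such a constraint looks like — the exact requirement string and the list it lives in are assumptions, not the repo's verbatim diff:

```python
# Illustrative only; the real change lives in the project's setup configuration.
install_requires = [
    "pyarrow>=15.0.0",
    "numpy>=1.17,<2.1",  # temporary upper bound: numba does not support numpy 2.1 yet
]
```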
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7114/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7114/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7114.diff", "html_url": "https://github.com/huggingface/datasets/pull/7114", "merged_at": "2024-08-20T09:02:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/7114.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7114" }
true
https://api.github.com/repos/huggingface/datasets/issues/7113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7113/comments
https://api.github.com/repos/huggingface/datasets/issues/7113/events
https://github.com/huggingface/datasets/issues/7113
2,475,029,640
I_kwDODunzps6ThfSI
7,113
Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "events_url": "https://api.github.com/users/memray/events{/privacy}", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/memray", "id": 4197249, "login": "memray", "node_id": "MDQ6VXNlcjQxOTcyNDk=", "organizations_url": "https://api.github.com/users/memray/orgs", "received_events_url": "https://api.github.com/users/memray/received_events", "repos_url": "https://api.github.com/users/memray/repos", "site_admin": false, "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "type": "User", "url": "https://api.github.com/users/memray" }
[]
closed
false
null
[]
null
[ "That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```" ]
"2024-08-20T08:26:40"
"2024-08-26T04:24:11"
"2024-08-26T04:24:10"
NONE
null
### Describe the bug

Hi there, I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of the datasets can vary (from 100-ish to 100k-ish rows). I use `dataset.map()` with a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1, but this problem shows up after I upgraded to datasets-2.19.2, and with 2.21.0 the problem remains.

Please see the code below to reproduce the problem. The dataset iterates correctly if we set either `streaming=False` or `drop_last_batch=False`. I have to use `drop_last_batch=True` since it's for distributed training.

### Steps to reproduce the bug

```python
# datasets==2.21.0
import datasets


def data_prepare(examples):
    print(examples["sentence1"][0])
    return examples


batch_size = 101  # the size of the dataset is 100

# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False
dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: data_prepare(x), drop_last_batch=True,
                      batched=True, batch_size=batch_size)

for ex in dataset:
    print(ex)
```

### Expected behavior

The dataset iterates regardless of the batch size.

### Environment info

- `datasets` version: 2.21.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
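As the maintainer notes above, this matches `torch`'s `drop_last` semantics: with `drop_last_batch=True`, a batch larger than the dataset leaves zero complete batches. A minimal sketch of the workaround — hedged, since the right cap depends on the smallest dataset being interleaved:

```python
import datasets

dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)

# Keep batch_size at or below the dataset size (100 here) so at least one full
# batch survives drop_last_batch=True and iteration yields examples again.
dataset = dataset.map(lambda x: x, batched=True, batch_size=100, drop_last_batch=True)

for ex in dataset:
    print(ex)
    break
```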
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7113/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7113/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7112/comments
https://api.github.com/repos/huggingface/datasets/issues/7112/events
https://github.com/huggingface/datasets/issues/7112
2,475,004,644
I_kwDODunzps6ThZLk
7,112
cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/174590283?v=4", "events_url": "https://api.github.com/users/SoumyaMB10/events{/privacy}", "followers_url": "https://api.github.com/users/SoumyaMB10/followers", "following_url": "https://api.github.com/users/SoumyaMB10/following{/other_user}", "gists_url": "https://api.github.com/users/SoumyaMB10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SoumyaMB10", "id": 174590283, "login": "SoumyaMB10", "node_id": "U_kgDOCmgJSw", "organizations_url": "https://api.github.com/users/SoumyaMB10/orgs", "received_events_url": "https://api.github.com/users/SoumyaMB10/received_events", "repos_url": "https://api.github.com/users/SoumyaMB10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SoumyaMB10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoumyaMB10/subscriptions", "type": "User", "url": "https://api.github.com/users/SoumyaMB10" }
[]
open
false
null
[]
null
[ "@sayakpaul please advice " ]
"2024-08-20T08:13:55"
"2024-08-20T08:14:25"
null
NONE
null
### Describe the bug

```
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
```

fails with:

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
```

To solve the above error:

```
!pip install pyarrow==14.0.1
```

which in turn fails with:

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible.
```

### Steps to reproduce the bug

```
!pip install datasets>=2.19.1
```

### Expected behavior

Run without dependency errors.

### Environment info

- Diffusers version: 0.31.0.dev0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.10.12
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.26
- Huggingface_hub version: 0.23.5
- Transformers version: 4.42.4
- Accelerate version: 0.32.1
- PEFT version: 0.7.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.4
- xFormers version: not installed
- Accelerator: Tesla T4, 15360 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
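The conflict comes from Colab's preinstalled `cudf-cu12` and `ibis-framework`, which pin older pyarrow releases. One way out, sketched below under the assumption that those preinstalled stacks are not actually needed in the notebook, is to remove the packages carrying the old pins rather than downgrading pyarrow:

```python
# Assumption: cudf-cu12 and ibis-framework are unused in this notebook.
# Uninstalling them clears the conflicting pins, so datasets can keep pyarrow>=15.
!pip uninstall -y cudf-cu12 ibis-framework
!pip install "datasets>=2.21.0" "pyarrow>=15.0.0"

import pyarrow
import datasets

print(pyarrow.__version__, datasets.__version__)
```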
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7112/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7112/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7111/comments
https://api.github.com/repos/huggingface/datasets/issues/7111/events
https://github.com/huggingface/datasets/issues/7111
2,474,915,845
I_kwDODunzps6ThDgF
7,111
CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Note that the CI before was using:\r\n- llvmlite: 0.43.0\r\n- numba: 0.60.0\r\n\r\nNow it tries to use:\r\n- llvmlite: 0.34.0\r\n- numba: 0.51.2", "The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion in their repo:\r\n- https://github.com/numba/numba/issues/9708\r\n\r\nLatest numpy-2.1.0 will be supported by the next numba-0.61.0 release in September.\r\n\r\nNote that our CI requires numba with the \"audio\" extra:\r\n- librosa > numba" ]
"2024-08-20T07:27:28"
"2024-08-21T05:05:36"
"2024-08-20T09:02:36"
MEMBER
null
CI is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269

```
Run uv pip install --system "datasets[tests_numpy2] @ ."
Resolved 150 packages in 4.42s
error: Failed to prepare distributions
  Caused by: Failed to fetch wheel: llvmlite==0.34.0
  Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1

--- stdout:
running bdist_wheel
/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py
LLVM version...

--- stderr:
Traceback (most recent call last):
  File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix
    out = subprocess.check_output([llvm_config, '--version'])
  File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module>
    main()
  File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main
    main_posix('linux', '.so')
  File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix
    raise RuntimeError("%s failed executing, please point LLVM_CONFIG "
RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config

error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7111/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7111/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7110/comments
https://api.github.com/repos/huggingface/datasets/issues/7110/events
https://github.com/huggingface/datasets/pull/7110
2,474,747,695
PR_kwDODunzps54zz3r
7,110
Fix ConnectionError for gated datasets and unauthenticated users
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7110). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Note that the CI error is unrelated to this PR and should be addressed in another PR. See:\r\n- #7111", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005354 / 0.011353 (-0.005999) | 0.004031 / 0.011008 (-0.006977) | 0.062470 / 0.038508 (0.023962) | 0.030882 / 0.023109 (0.007773) | 0.244816 / 0.275898 (-0.031082) | 0.264324 / 0.323480 (-0.059156) | 0.004164 / 0.007986 (-0.003822) | 0.002858 / 0.004328 (-0.001471) | 0.049008 / 0.004250 (0.044758) | 0.042139 / 0.037052 (0.005086) | 0.279496 / 0.258489 (0.021007) | 0.279408 / 0.293841 (-0.014433) | 0.029701 / 0.128546 (-0.098845) | 0.012501 / 0.075646 (-0.063145) | 0.203267 / 0.419271 (-0.216004) | 0.035964 / 0.043533 (-0.007569) | 0.239361 / 0.255139 (-0.015778) | 0.258942 / 0.283200 (-0.024257) | 0.017956 / 0.141683 (-0.123727) | 1.160468 / 1.452155 (-0.291687) | 1.203475 / 1.492716 (-0.289242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004639 / 0.018006 (-0.013367) | 0.298020 / 0.000490 (0.297530) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019371 / 0.037411 (-0.018040) | 0.063311 / 0.014526 (0.048785) | 0.076412 / 0.176557 (-0.100145) | 0.122574 / 0.737135 (-0.614561) | 0.078076 / 0.296338 (-0.218263) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275381 / 0.215209 (0.060172) | 2.713220 / 2.077655 (0.635565) | 1.441940 / 1.504120 (-0.062179) | 1.325545 / 1.541195 (-0.215650) | 1.363859 / 1.468490 (-0.104631) | 0.715147 / 4.584777 (-3.869630) | 2.356482 / 3.745712 (-1.389230) | 2.882792 / 5.269862 (-2.387069) | 1.833399 / 4.565676 (-2.732278) | 0.077872 / 0.424275 (-0.346403) | 0.005172 / 0.007607 (-0.002435) | 0.326361 / 0.226044 (0.100316) | 3.239202 / 2.268929 (0.970273) | 1.837745 / 55.444624 (-53.606879) | 1.517299 / 6.876477 (-5.359178) | 1.552938 / 2.142072 (-0.589134) | 0.801496 / 4.805227 (-4.003731) | 0.133351 / 6.500664 (-6.367314) | 0.042052 / 0.075469 (-0.033418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957887 / 1.841788 (-0.883901) | 11.625291 / 8.074308 (3.550983) | 9.679413 / 10.191392 (-0.511979) | 0.140271 / 0.680424 (-0.540153) | 0.013991 / 0.534201 (-0.520210) | 0.299874 / 0.579283 (-0.279409) | 0.267164 / 0.434364 (-0.167200) | 0.338143 / 0.540337 (-0.202194) | 0.434105 / 1.386936 (-0.952831) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005833 / 0.011353 (-0.005520) | 0.003761 / 0.011008 (-0.007247) | 0.049699 / 0.038508 (0.011191) | 0.032786 / 0.023109 (0.009677) | 0.265100 / 0.275898 (-0.010798) | 0.291045 / 0.323480 (-0.032435) | 0.004281 / 0.007986 (-0.003705) | 0.002737 / 0.004328 (-0.001591) | 0.048524 / 0.004250 (0.044274) | 0.040783 / 0.037052 (0.003731) | 0.281122 / 0.258489 (0.022633) | 0.311349 / 0.293841 (0.017508) | 0.032143 / 0.128546 (-0.096403) | 0.011747 / 0.075646 (-0.063899) | 0.059432 / 0.419271 (-0.359840) | 0.034362 / 0.043533 (-0.009171) | 0.261061 / 0.255139 (0.005922) | 0.279536 / 0.283200 (-0.003663) | 0.019172 / 0.141683 (-0.122510) | 1.160069 / 1.452155 (-0.292086) | 1.224160 / 1.492716 (-0.268556) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093596 / 0.018006 (0.075590) | 0.302862 / 0.000490 (0.302372) | 0.000208 / 0.000200 (0.000008) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022785 / 0.037411 (-0.014626) | 0.079263 / 0.014526 (0.064737) | 0.091340 / 0.176557 (-0.085216) | 0.129453 / 0.737135 (-0.607682) | 0.091349 / 0.296338 (-0.204989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298166 / 0.215209 (0.082957) | 3.003146 / 2.077655 (0.925491) | 1.575903 / 1.504120 (0.071783) | 1.445231 / 1.541195 (-0.095963) | 1.477116 / 1.468490 (0.008625) | 0.726496 / 4.584777 (-3.858281) | 0.959827 / 3.745712 (-2.785885) | 2.941142 / 5.269862 (-2.328720) | 1.878581 / 4.565676 (-2.687096) | 0.078475 / 0.424275 (-0.345800) | 0.005137 / 0.007607 (-0.002470) | 0.352078 / 0.226044 (0.126034) | 3.486113 / 2.268929 (1.217184) | 1.965024 / 55.444624 (-53.479600) | 1.667223 / 6.876477 (-5.209254) | 1.665254 / 2.142072 (-0.476819) | 0.803543 / 4.805227 (-4.001684) | 0.133003 / 6.500664 (-6.367661) | 0.041462 / 0.075469 (-0.034008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.045534 / 1.841788 (-0.796254) | 12.124988 / 8.074308 (4.050680) | 10.418723 / 10.191392 (0.227331) | 0.142453 / 0.680424 (-0.537971) | 0.015686 / 0.534201 (-0.518515) | 0.300557 / 0.579283 (-0.278726) | 0.119851 / 0.434364 (-0.314512) | 0.342297 / 0.540337 (-0.198040) | 0.441263 / 1.386936 (-0.945673) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#90b1d94ef419cb26f0bb24d982897dca39aa8a46 \"CML watermark\")\n", "lgtm!" ]
"2024-08-20T05:26:54"
"2024-08-20T15:11:35"
"2024-08-20T09:14:35"
MEMBER
null
Fix `ConnectionError` for gated datasets and unauthenticated users. See:
- https://github.com/huggingface/dataset-viewer/issues/3025

Note that a recent change in the Hub returns dataset info for gated datasets and unauthenticated users, instead of raising a `GatedRepoError` as before. See:
- https://github.com/huggingface/huggingface_hub/issues/2457

This PR adds an additional check (/auth-check) for gated datasets and raises `DatasetNotFoundError` for unauthenticated users, as it was the case before the change in the Hub.
- Fix suggested by @Pierrci (thanks @Wauplin for pointing it out).

Fix #7109.
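A hedged sketch of the shape of such a check follows; the `/auth-check` endpoint path comes from the PR description above, but the request details and error message are assumptions, not the merged code:

```python
import requests
from datasets.exceptions import DatasetNotFoundError


def check_gated_access(repo_id: str, token: str = None) -> None:
    # /auth-check is expected to answer 200 when the caller may read the
    # gated dataset, and 401/403 otherwise (assumed behavior).
    url = f"https://huggingface.co/api/datasets/{repo_id}/auth-check"
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.get(url, headers=headers)
    if response.status_code in (401, 403):
        # Mirror the pre-change behavior: unauthenticated users see "not found".
        raise DatasetNotFoundError(
            f"Dataset '{repo_id}' doesn't exist or requires authentication."
        )
```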
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7110/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7110/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7110.diff", "html_url": "https://github.com/huggingface/datasets/pull/7110", "merged_at": "2024-08-20T09:14:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/7110.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7110" }
true
https://api.github.com/repos/huggingface/datasets/issues/7109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7109/comments
https://api.github.com/repos/huggingface/datasets/issues/7109/events
https://github.com/huggingface/datasets/issues/7109
2,473,367,848
I_kwDODunzps6TbJko
7,109
ConnectionError for gated datasets and unauthenticated users
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2024-08-19T13:27:45"
"2024-08-20T09:14:36"
"2024-08-20T09:14:35"
MEMBER
null
Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852

We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before). See:
- https://github.com/huggingface/dataset-viewer/issues/3025
- https://github.com/huggingface/huggingface_hub/issues/2457
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7109/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7108/comments
https://api.github.com/repos/huggingface/datasets/issues/7108/events
https://github.com/huggingface/datasets/issues/7108
2,470,665,327
I_kwDODunzps6TQ1xv
7,108
website broken: Create a new dataset repository, doesn't create a new repo in Firefox
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye" }
[]
closed
false
null
[]
null
[ "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?", "I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.", "I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.", "maybe an issue with the cookie. cc @Wauplin @coyotte508 " ]
"2024-08-16T17:23:00"
"2024-08-19T13:21:12"
"2024-08-19T06:52:48"
NONE
null
### Describe the bug

This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644

This page is broken: https://huggingface.co/new-dataset

I fill in the form with my text, and click `Create dataset`.

![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6)

Then the form gets wiped, and no repo gets created. No error message is visible in the developer console.

![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3)

# Idea for improvement

For better UX, if the repo cannot be created, show an error message that something went wrong.

# Workaround that works for me

```python
from huggingface_hub import HfApi, HfFolder

repo_id = 'simon-arc-solve-fractal-v3'

api = HfApi()
username = api.whoami()['name']

repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset")
```

### Steps to reproduce the bug

Go to https://huggingface.co/new-dataset, fill in the form, and click `Create dataset`. Now the form is cleared, and the page doesn't jump anywhere.

### Expected behavior

The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo.

### Environment info

Firefox 128.0.3 (64-bit), macOS Sonoma 14.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7108/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7107/comments
https://api.github.com/repos/huggingface/datasets/issues/7107/events
https://github.com/huggingface/datasets/issues/7107
2,470,444,732
I_kwDODunzps6TP_68
7,107
load_dataset broken in 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4", "events_url": "https://api.github.com/users/anjor/events{/privacy}", "followers_url": "https://api.github.com/users/anjor/followers", "following_url": "https://api.github.com/users/anjor/following{/other_user}", "gists_url": "https://api.github.com/users/anjor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anjor", "id": 1911631, "login": "anjor", "node_id": "MDQ6VXNlcjE5MTE2MzE=", "organizations_url": "https://api.github.com/users/anjor/orgs", "received_events_url": "https://api.github.com/users/anjor/received_events", "repos_url": "https://api.github.com/users/anjor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anjor/subscriptions", "type": "User", "url": "https://api.github.com/users/anjor" }
[]
closed
false
null
[]
null
[ "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now", "+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.", "I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ", "There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset." ]
"2024-08-16T14:59:51"
"2024-08-18T09:28:43"
"2024-08-18T09:27:12"
NONE
null
### Describe the bug

`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work up to 2.20.0 but doesn't work in 2.21.0.

In 2.20.0:

![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9)

In 2.21.0:

![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f)

### Steps to reproduce the bug

1. Spin up a new Google Colab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. An error is thrown.

### Expected behavior

Try steps 1-5 again but replace the datasets version with 2.20.0; it will work.

### Environment info

- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
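Two resolutions surfaced in the comments above: the dataset script was fixed upstream (so a plain retry now works), and pinning the previous release also sidesteps the problem. A minimal sketch of the pin, assuming a fresh environment:

```python
# Workaround from the comments: pin the previous release before importing.
#   pip install "datasets==2.20.0"
import datasets

eval_set = datasets.load_dataset(
    "tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True
)
```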
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7107/timeline
null
completed
null
null
false