| Column | Type | Lengths / values |
|---|---|---|
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 1.02B–1.56B |
| node_id | stringlengths | 18–19 |
| number | int64 | 3.04k–5.48k |
| title | stringlengths | 1–165 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 2–36.2k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
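A schema like this can be inspected programmatically once the rows are exported; below is a minimal sketch, assuming the sample rows have been saved to a local `issues.jsonl` file (the file name is an assumption, not part of this dump):

```python
from datasets import load_dataset

# Assumed local export of the rows shown below; adjust the path to your copy.
issues = load_dataset("json", data_files="issues.jsonl", split="train")

print(issues.features)       # column names and types, matching the table above
print(issues[0]["title"])    # e.g. "Wrong URL for the_pile dataset"
```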
https://api.github.com/repos/huggingface/datasets/issues/5381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5381/comments
https://api.github.com/repos/huggingface/datasets/issues/5381/events
https://github.com/huggingface/datasets/issues/5381
1,504,498,387
I_kwDODunzps5ZrNLT
5,381
Wrong URL for the_pile dataset
{ "login": "LeoGrin", "id": 45738728, "node_id": "MDQ6VXNlcjQ1NzM4NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/45738728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeoGrin", "html_url": "https://github.com/LeoGrin", "followers_url": "https://api.github.com/users/LeoGrin/followers", "following_url": "https://api.github.com/users/LeoGrin/following{/other_user}", "gists_url": "https://api.github.com/users/LeoGrin/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeoGrin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeoGrin/subscriptions", "organizations_url": "https://api.github.com/users/LeoGrin/orgs", "repos_url": "https://api.github.com/users/LeoGrin/repos", "events_url": "https://api.github.com/users/LeoGrin/events{/privacy}", "received_events_url": "https://api.github.com/users/LeoGrin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! This error can happen if there is a local file/folder with the same name as the requested dataset. And to avoid it, rename the local file/folder.\r\n\r\nSoon, it will be possible to explicitly request a Hub dataset as follows:https://github.com/huggingface/datasets/issues/5228#issuecomment-1313494020" ]
2022-12-20T12:40:14
2022-12-20T14:26:52
null
NONE
null
### Describe the bug

When trying to load the `the_pile` dataset from the library, I get a `FileNotFound` error.

### Steps to reproduce the bug

Run:

```
from datasets import load_dataset
dataset = load_dataset("the_pile")
```

I get the output:

"name": "FileNotFoundError", "message": "Unable to resolve any data file that matches '['**']' at /storage/store/work/lgrinszt/memorization/the_pile with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']"

### Expected behavior

The `the_pile` dataset should be downloaded.

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.27
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5381/timeline
null
null
null
null
false
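To make the workaround from the maintainer's reply above concrete, here is a minimal sketch, assuming the colliding folder lives in the current working directory (the backup name is purely illustrative):

```python
import os
from datasets import load_dataset

# A local folder named "the_pile" shadows the Hub dataset of the same name,
# which is why load_dataset("the_pile") looks for data files inside it and fails.
if os.path.isdir("the_pile"):
    os.rename("the_pile", "the_pile_local_backup")  # move the colliding folder aside

dataset = load_dataset("the_pile")  # now resolves to the Hub dataset
```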
https://api.github.com/repos/huggingface/datasets/issues/5380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5380/comments
https://api.github.com/repos/huggingface/datasets/issues/5380/events
https://github.com/huggingface/datasets/issues/5380
1,504,404,043
I_kwDODunzps5Zq2JL
5,380
Improve dataset `.skip()` speed in streaming mode
{ "login": "versae", "id": 173537, "node_id": "MDQ6VXNlcjE3MzUzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/versae", "html_url": "https://github.com/versae", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "organizations_url": "https://api.github.com/users/versae/orgs", "repos_url": "https://api.github.com/users/versae/repos", "events_url": "https://api.github.com/users/versae/events{/privacy}", "received_events_url": "https://api.github.com/users/versae/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! I agree `skip` can be inefficient to use in the current state.\r\n\r\nTo make it fast, we could use \"statistics\" stored in Parquet metadata and read only the chunks needed to form a dataset. \r\n\r\nAnd thanks to the \"datasets-server\" project, which aims to store the Parquet versions of the Hub datasets (only the smaller datasets are covered currently), this solution can also be applied to datasets stored in formats other than Parquet. (cc @severo)", "@mariosasko do the current parquet files created by the datasets-server already have the required \"statistics\"? If not, please open an issue on https://github.com/huggingface/datasets-server with some details to make sure we implement it.", "Yes, nothing has to be changed on the datasets-server side. What I mean by \"statistics\" is that we can use the \"row_group\" metadata embedded in a Parquet file (by default) to fetch the requested rows more efficiently.", "Glad to see the feature could be of interest. \r\n\r\nI'm sure there are many possible ways to implement this feature. I don't know enough about the datasets-server, but I guess that it is not instantaneous, in the sense that user-owned private datasets might need hours or days until they are ported to the datasets-server (if at all), which could be cumbersome. Having optionally that information in the `dataset_infos.json` file would make it easier for users to control the skip process a bit.", "re: statistics:\r\n\r\n- https://arrow.apache.org/docs/python/generated/pyarrow.parquet.FileMetaData.html\r\n- https://arrow.apache.org/docs/python/generated/pyarrow.parquet.RowGroupMetaData.html\r\n\r\n```python\r\n>>> import pyarrow.parquet as pq\r\n>>> import hffs\r\n>>> fs = hffs.HfFileSystem(\"glue\", repo_type=\"dataset\", revision=\"refs/convert/parquet\")\r\n>>> metadata = pq.read_metadata(\"ax/glue-test.parquet\", filesystem=fs)\r\n>>> metadata\r\n<pyarrow._parquet.FileMetaData object at 0x7f4537cec400>\r\n created_by: parquet-cpp-arrow version 7.0.0\r\n num_columns: 4\r\n num_rows: 1104\r\n num_row_groups: 2\r\n format_version: 1.0\r\n serialized_size: 2902\r\n>>> metadata.row_group(0)\r\n<pyarrow._parquet.RowGroupMetaData object at 0x7f45564bcbd0>\r\n num_columns: 4\r\n num_rows: 1000\r\n total_byte_size: 164474\r\n>>> metadata.row_group(1)\r\n<pyarrow._parquet.RowGroupMetaData object at 0x7f455005c400>\r\n num_columns: 4\r\n num_rows: 104\r\n total_byte_size: 13064\r\n```", "> user-owned private datasets might need hours or days until they are ported to the datasets-server (if at all)\r\n\r\nprivate datasets are not supported yet (https://github.com/huggingface/datasets-server/issues/39)", "@versae `Dataset.push_to_hub` writes shards in Parquet, so this solution would also work for such datasets (immediately after the push). ", "@mariosasko that is right. However, there are still a good amount of datasets for which the shards are created manually. In our very specific case, we create medium-sized datasets (rarely over 100-200GB) of both text and audio, we prepare the shards by hand and then upload then. It would be great to have immediate access to this download skipping feature for them too." ]
2022-12-20T11:25:23
2023-01-17T08:44:56
null
CONTRIBUTOR
null
### Feature request

Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to avoid downloading a shard entirely in streaming mode, which AFAICT should speed up the skipping process.

### Motivation

When resuming from a checkpoint after a crashed run, using `dataset.skip()` is very convenient to recover the exact state of the data and avoid training again on the same examples (assuming the same seed and no shuffling). However, I have noticed that for audio datasets in streaming mode this is very costly in time, as shards need to be downloaded every time before skipping the right number of examples.

### Your contribution

I already took a look at the code, but a change like this goes deeper than I am able to manage, as it touches the library in several places. I could give it a try but might need some guidance on the internals.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5380/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5380/timeline
null
null
null
null
false
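The row-group idea discussed in the comments above can be sketched with plain `pyarrow`; this is only an illustration of the approach (the shard path is an assumption), not the `datasets` implementation:

```python
import pyarrow.parquet as pq

def iter_rows_from(path, skip):
    """Yield rows starting at index `skip`, reading only the row groups that are needed."""
    pf = pq.ParquetFile(path)
    offset = 0
    for i in range(pf.num_row_groups):
        n = pf.metadata.row_group(i).num_rows  # row count from Parquet metadata, no data read
        if offset + n <= skip:                 # whole group lies before the offset: skip it
            offset += n
            continue
        table = pf.read_row_group(i)           # only now is the group actually fetched
        start = max(skip - offset, 0)          # partial skip inside the first kept group
        yield from table.slice(start).to_pylist()
        offset += n

# Usage sketch: resume from the 1,000th example without materializing earlier row groups.
# for example in iter_rows_from("shard-00000.parquet", skip=1000):
#     ...
```

Skipping then costs one metadata read per shard plus only the row groups that overlap the requested offset, instead of downloading every earlier example.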
https://api.github.com/repos/huggingface/datasets/issues/5379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5379/comments
https://api.github.com/repos/huggingface/datasets/issues/5379/events
https://github.com/huggingface/datasets/pull/5379
1,504,010,639
PR_kwDODunzps5F1r2k
5,379
feat: depth estimation dataset guide.
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the changes, looks good to me!", "@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review? ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008325 / 0.011353 (-0.003028) | 0.004432 / 0.011008 (-0.006576) | 0.099794 / 0.038508 (0.061286) | 0.029469 / 0.023109 (0.006360) | 0.306554 / 0.275898 (0.030656) | 0.367373 / 0.323480 (0.043893) | 0.007532 / 0.007986 (-0.000454) | 0.003310 / 0.004328 (-0.001018) | 0.077453 / 0.004250 (0.073203) | 0.034836 / 0.037052 (-0.002216) | 0.311696 / 0.258489 (0.053207) | 0.349683 / 0.293841 (0.055842) | 0.033089 / 0.128546 (-0.095457) | 0.011339 / 0.075646 (-0.064307) | 0.321699 / 0.419271 (-0.097573) | 0.040213 / 0.043533 (-0.003320) | 0.304741 / 0.255139 (0.049602) | 0.331569 / 0.283200 (0.048369) | 0.090397 / 0.141683 (-0.051285) | 1.526001 / 1.452155 (0.073847) | 1.558863 / 1.492716 (0.066146) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179446 / 0.018006 (0.161440) | 0.416308 / 0.000490 (0.415818) | 0.002390 / 0.000200 (0.002190) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023641 / 0.037411 (-0.013770) | 0.096672 / 0.014526 (0.082147) | 0.104330 / 0.176557 (-0.072227) | 0.146338 / 0.737135 (-0.590797) | 0.108278 / 0.296338 (-0.188060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420194 / 0.215209 (0.204985) | 4.196981 / 2.077655 (2.119326) | 1.861206 / 1.504120 (0.357086) | 1.658748 / 1.541195 (0.117554) | 1.704309 / 1.468490 (0.235819) | 0.691639 / 4.584777 (-3.893138) | 3.346303 / 3.745712 (-0.399409) | 1.932962 / 5.269862 (-3.336900) | 1.299395 / 4.565676 (-3.266281) | 0.081869 / 0.424275 (-0.342406) | 0.012415 / 0.007607 (0.004808) | 0.530805 / 0.226044 (0.304761) | 5.293486 / 2.268929 (3.024558) | 2.328327 / 55.444624 (-53.116297) | 1.964956 / 6.876477 (-4.911521) | 2.002793 / 2.142072 (-0.139280) | 0.813380 / 4.805227 (-3.991847) | 0.150030 / 6.500664 (-6.350634) | 0.065194 / 0.075469 (-0.010275) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259421 / 1.841788 (-0.582367) | 13.667796 / 8.074308 (5.593488) | 13.819121 / 10.191392 (3.627729) | 0.136718 / 0.680424 (-0.543706) | 0.028510 / 0.534201 (-0.505691) | 0.402246 / 0.579283 (-0.177037) | 0.405279 / 0.434364 (-0.029085) | 0.467185 / 0.540337 (-0.073153) | 0.554213 / 1.386936 (-0.832723) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006738 / 0.011353 (-0.004615) | 0.004616 / 0.011008 (-0.006393) | 0.096978 / 0.038508 (0.058470) | 0.027750 / 0.023109 (0.004640) | 0.411505 / 0.275898 (0.135607) | 0.441796 / 0.323480 (0.118316) | 0.005073 / 0.007986 (-0.002913) | 0.003360 / 0.004328 (-0.000968) | 0.074445 / 0.004250 (0.070194) | 0.040654 / 0.037052 (0.003602) | 0.414277 / 0.258489 (0.155788) | 0.448665 / 0.293841 (0.154824) | 0.032346 / 0.128546 (-0.096200) | 0.011533 / 0.075646 (-0.064114) | 0.317349 / 0.419271 (-0.101923) | 0.041934 / 0.043533 (-0.001599) | 0.409102 / 0.255139 (0.153963) | 0.429977 / 0.283200 (0.146777) | 0.089459 / 0.141683 (-0.052224) | 1.518127 / 1.452155 (0.065973) | 1.569902 / 1.492716 (0.077186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / 
old (diff) | 0.232648 / 0.018006 (0.214642) | 0.413751 / 0.000490 (0.413261) | 0.000404 / 0.000200 (0.000204) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025468 / 0.037411 (-0.011943) | 0.098195 / 0.014526 (0.083669) | 0.108882 / 0.176557 (-0.067674) | 0.150059 / 0.737135 (-0.587076) | 0.110742 / 0.296338 (-0.185597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445326 / 0.215209 (0.230117) | 4.449200 / 2.077655 (2.371545) | 2.098939 / 1.504120 (0.594819) | 1.861207 / 1.541195 (0.320012) | 1.901385 / 1.468490 (0.432894) | 0.695287 / 4.584777 (-3.889490) | 3.461775 / 3.745712 (-0.283938) | 2.998566 / 5.269862 (-2.271296) | 1.555036 / 4.565676 (-3.010641) | 0.082789 / 0.424275 (-0.341486) | 0.012772 / 0.007607 (0.005165) | 0.564855 / 0.226044 (0.338811) | 5.631049 / 2.268929 (3.362120) | 2.543771 / 55.444624 (-52.900854) | 2.194378 / 6.876477 (-4.682099) | 2.267168 / 2.142072 (0.125095) | 0.803330 / 4.805227 (-4.001898) | 0.151336 / 6.500664 (-6.349328) | 0.067015 / 0.075469 (-0.008454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298422 / 1.841788 (-0.543366) | 13.933637 / 8.074308 (5.859329) | 13.570848 / 10.191392 (3.379456) | 0.150787 / 0.680424 (-0.529637) | 0.016911 / 0.534201 (-0.517290) | 0.384771 / 0.579283 (-0.194512) | 0.397505 / 0.434364 (-0.036858) | 0.450931 / 0.540337 (-0.089406) | 0.534501 / 1.386936 (-0.852435) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n", "@lhoestq @nateraw made some changes as per the comments. PTAL and approve as necessary. 
", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009037 / 0.011353 (-0.002316) | 0.004970 / 0.011008 (-0.006038) | 0.099223 / 0.038508 (0.060715) | 0.034935 / 0.023109 (0.011826) | 0.297027 / 0.275898 (0.021129) | 0.352861 / 0.323480 (0.029382) | 0.007558 / 0.007986 (-0.000427) | 0.003903 / 0.004328 (-0.000425) | 0.075663 / 0.004250 (0.071413) | 0.042577 / 0.037052 (0.005524) | 0.307182 / 0.258489 (0.048693) | 0.344237 / 0.293841 (0.050396) | 0.041438 / 0.128546 (-0.087108) | 0.012159 / 0.075646 (-0.063487) | 0.333771 / 0.419271 (-0.085501) | 0.047847 / 0.043533 (0.004314) | 0.290797 / 0.255139 (0.035658) | 0.320517 / 0.283200 (0.037318) | 0.098334 / 0.141683 (-0.043349) | 1.446187 / 1.452155 (-0.005968) | 1.495506 / 1.492716 (0.002789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203704 / 0.018006 (0.185698) | 0.441325 / 0.000490 (0.440835) | 0.001173 / 0.000200 (0.000973) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026694 / 0.037411 (-0.010718) | 0.103819 / 0.014526 (0.089294) | 0.116377 / 0.176557 (-0.060179) | 0.158280 / 0.737135 (-0.578856) | 0.119797 / 0.296338 (-0.176541) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405723 / 0.215209 (0.190514) | 4.047633 / 2.077655 (1.969979) | 1.805652 / 1.504120 (0.301532) | 1.611382 / 1.541195 (0.070187) | 1.663117 / 1.468490 
(0.194627) | 0.692589 / 4.584777 (-3.892188) | 3.689970 / 3.745712 (-0.055742) | 2.089760 / 5.269862 (-3.180101) | 1.450576 / 4.565676 (-3.115101) | 0.085276 / 0.424275 (-0.338999) | 0.012042 / 0.007607 (0.004434) | 0.513159 / 0.226044 (0.287115) | 5.123235 / 2.268929 (2.854306) | 2.281864 / 55.444624 (-53.162761) | 1.926170 / 6.876477 (-4.950307) | 2.035093 / 2.142072 (-0.106979) | 0.857457 / 4.805227 (-3.947770) | 0.166088 / 6.500664 (-6.334576) | 0.062115 / 0.075469 (-0.013354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197776 / 1.841788 (-0.644012) | 14.674452 / 8.074308 (6.600144) | 14.275990 / 10.191392 (4.084598) | 0.170848 / 0.680424 (-0.509576) | 0.028613 / 0.534201 (-0.505588) | 0.438650 / 0.579283 (-0.140633) | 0.439323 / 0.434364 (0.004959) | 0.515090 / 0.540337 (-0.025247) | 0.614216 / 1.386936 (-0.772720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.005142 / 0.011008 (-0.005866) | 0.096953 / 0.038508 (0.058445) | 0.033036 / 0.023109 (0.009927) | 0.391790 / 0.275898 (0.115892) | 0.427120 / 0.323480 (0.103640) | 0.005691 / 0.007986 (-0.002294) | 0.004848 / 0.004328 (0.000519) | 0.072258 / 0.004250 (0.068008) | 0.049017 / 0.037052 (0.011965) | 0.387267 / 0.258489 (0.128778) | 0.437112 / 0.293841 (0.143272) | 0.036360 / 0.128546 (-0.092186) | 0.012249 / 0.075646 (-0.063397) | 0.336246 / 0.419271 (-0.083025) | 0.048777 / 0.043533 (0.005244) | 0.397872 / 0.255139 (0.142733) | 0.399768 / 0.283200 (0.116568) | 0.101283 / 0.141683 (-0.040400) | 1.443999 / 1.452155 (-0.008156) | 1.575496 / 1.492716 (0.082779) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220952 / 0.018006 (0.202946) | 0.442220 / 0.000490 (0.441730) | 0.000406 / 0.000200 (0.000206) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028626 / 0.037411 (-0.008786) | 0.109929 / 0.014526 (0.095403) | 0.120989 / 0.176557 (-0.055568) | 0.157377 / 0.737135 (-0.579758) | 0.125522 / 0.296338 (-0.170816) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436565 / 0.215209 (0.221356) | 4.380771 / 2.077655 (2.303117) | 2.200003 / 1.504120 (0.695883) | 2.013289 / 1.541195 (0.472094) | 2.052658 / 1.468490 (0.584168) | 0.703706 / 4.584777 (-3.881071) | 3.823289 / 3.745712 (0.077577) | 2.064882 / 5.269862 (-3.204980) | 1.330834 / 4.565676 (-3.234842) | 0.085945 / 0.424275 (-0.338330) | 0.012511 / 0.007607 (0.004904) | 0.544171 / 0.226044 (0.318127) | 5.476059 / 2.268929 (3.207130) | 2.695586 / 55.444624 (-52.749039) | 2.330239 / 6.876477 (-4.546238) | 2.429290 / 2.142072 (0.287218) | 0.843154 / 4.805227 (-3.962073) | 0.169334 / 6.500664 (-6.331330) | 0.064261 / 0.075469 (-0.011209) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268344 / 1.841788 (-0.573444) | 14.934342 / 8.074308 (6.860034) | 13.555389 / 10.191392 (3.363997) | 0.142725 / 0.680424 (-0.537699) | 0.017891 / 0.534201 (-0.516310) | 0.424833 / 0.579283 (-0.154450) | 0.420035 / 0.434364 (-0.014329) | 0.491009 / 0.540337 (-0.049329) | 0.586953 / 1.386936 (-0.799983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n", "Merging this PR with approvals from @stevhliu @lhoestq. 
", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.004659 / 0.011008 (-0.006350) | 0.100343 / 0.038508 (0.061835) | 0.029861 / 0.023109 (0.006751) | 0.301090 / 0.275898 (0.025192) | 0.369528 / 0.323480 (0.046048) | 0.006920 / 0.007986 (-0.001065) | 0.003513 / 0.004328 (-0.000815) | 0.078514 / 0.004250 (0.074263) | 0.035285 / 0.037052 (-0.001767) | 0.311257 / 0.258489 (0.052768) | 0.353995 / 0.293841 (0.060154) | 0.033733 / 0.128546 (-0.094813) | 0.011489 / 0.075646 (-0.064157) | 0.323095 / 0.419271 (-0.096176) | 0.040808 / 0.043533 (-0.002725) | 0.301779 / 0.255139 (0.046640) | 0.348517 / 0.283200 (0.065318) | 0.086962 / 0.141683 (-0.054721) | 1.496270 / 1.452155 (0.044115) | 1.514260 / 1.492716 (0.021544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189502 / 0.018006 (0.171496) | 0.419326 / 0.000490 (0.418837) | 0.002160 / 0.000200 (0.001960) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023669 / 0.037411 (-0.013742) | 0.096574 / 0.014526 (0.082048) | 0.105970 / 0.176557 (-0.070587) | 0.148531 / 0.737135 (-0.588605) | 0.109948 / 0.296338 (-0.186391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424968 / 0.215209 (0.209759) | 4.246292 / 2.077655 (2.168637) | 1.911062 / 1.504120 (0.406943) | 1.700733 / 1.541195 (0.159538) | 1.760756 / 
1.468490 (0.292266) | 0.696966 / 4.584777 (-3.887811) | 3.372320 / 3.745712 (-0.373392) | 2.886281 / 5.269862 (-2.383581) | 1.553082 / 4.565676 (-3.012594) | 0.082835 / 0.424275 (-0.341440) | 0.012688 / 0.007607 (0.005081) | 0.536352 / 0.226044 (0.310308) | 5.382510 / 2.268929 (3.113582) | 2.365664 / 55.444624 (-53.078960) | 1.995631 / 6.876477 (-4.880845) | 2.073865 / 2.142072 (-0.068207) | 0.819109 / 4.805227 (-3.986118) | 0.150278 / 6.500664 (-6.350386) | 0.065201 / 0.075469 (-0.010268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239835 / 1.841788 (-0.601953) | 13.911847 / 8.074308 (5.837539) | 13.500433 / 10.191392 (3.309041) | 0.137153 / 0.680424 (-0.543271) | 0.028451 / 0.534201 (-0.505750) | 0.394659 / 0.579283 (-0.184625) | 0.404915 / 0.434364 (-0.029449) | 0.458944 / 0.540337 (-0.081394) | 0.542288 / 1.386936 (-0.844648) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006791 / 0.011353 (-0.004562) | 0.004590 / 0.011008 (-0.006419) | 0.098697 / 0.038508 (0.060189) | 0.027634 / 0.023109 (0.004525) | 0.344383 / 0.275898 (0.068485) | 0.385607 / 0.323480 (0.062127) | 0.005413 / 0.007986 (-0.002573) | 0.003447 / 0.004328 (-0.000881) | 0.077268 / 0.004250 (0.073018) | 0.041823 / 0.037052 (0.004770) | 0.342904 / 0.258489 (0.084414) | 0.399371 / 0.293841 (0.105530) | 0.032668 / 0.128546 (-0.095879) | 0.011598 / 0.075646 (-0.064048) | 0.319973 / 0.419271 (-0.099299) | 0.041760 / 0.043533 (-0.001773) | 0.340510 / 0.255139 (0.085371) | 0.377929 / 0.283200 (0.094730) | 0.090889 / 0.141683 (-0.050793) | 1.496068 / 1.452155 (0.043913) | 1.574884 / 1.492716 (0.082168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230489 / 0.018006 (0.212483) | 0.425234 / 0.000490 (0.424745) | 0.000406 / 0.000200 (0.000206) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024650 / 0.037411 (-0.012761) | 0.102706 / 0.014526 (0.088180) | 0.108017 / 0.176557 (-0.068539) | 0.143645 / 0.737135 (-0.593490) | 0.110556 / 0.296338 (-0.185782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468038 / 0.215209 (0.252829) | 4.670514 / 2.077655 (2.592860) | 2.446620 / 1.504120 (0.942500) | 2.241255 / 1.541195 (0.700060) | 2.286409 / 1.468490 (0.817919) | 0.698923 / 4.584777 (-3.885854) | 3.401121 / 3.745712 (-0.344592) | 1.892399 / 5.269862 (-3.377462) | 1.163101 / 4.565676 (-3.402575) | 0.082567 / 0.424275 (-0.341708) | 0.012662 / 0.007607 (0.005055) | 0.571262 / 0.226044 (0.345218) | 5.731740 / 2.268929 (3.462812) | 2.879649 / 55.444624 (-52.564975) | 2.533846 / 6.876477 (-4.342631) | 2.654789 / 2.142072 (0.512717) | 0.811345 / 4.805227 (-3.993882) | 0.152495 / 6.500664 (-6.348169) | 0.067748 / 0.075469 (-0.007721) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267852 / 1.841788 (-0.573935) | 14.114920 / 8.074308 (6.040612) | 14.355403 / 10.191392 (4.164011) | 0.150393 / 0.680424 (-0.530031) | 0.016855 / 0.534201 (-0.517346) | 0.378710 / 0.579283 (-0.200573) | 0.385380 / 0.434364 (-0.048984) | 0.439054 / 0.540337 (-0.101284) | 0.524343 / 1.386936 (-0.862593) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n" ]
2022-12-20T05:32:11
2023-01-13T12:30:31
2023-01-13T12:23:34
MEMBER
null
This PR adds a guide for prepping datasets for depth estimation. PR to add documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5379/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5379/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5379", "html_url": "https://github.com/huggingface/datasets/pull/5379", "diff_url": "https://github.com/huggingface/datasets/pull/5379.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5379.patch", "merged_at": "2023-01-13T12:23:34" }
true
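As a rough illustration of the kind of preparation such a guide covers (the repository id and column names below are hypothetical, not taken from the guide):

```python
from datasets import load_dataset, Image

# Hypothetical depth-estimation dataset with an RGB image and a depth map per example.
ds = load_dataset("user/depth-dataset", split="train")

# Decode both columns as images so each example yields PIL objects.
ds = ds.cast_column("image", Image())
ds = ds.cast_column("depth_map", Image())

sample = ds[0]
print(sample["image"].size, sample["depth_map"].size)
```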
https://api.github.com/repos/huggingface/datasets/issues/5378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5378/comments
https://api.github.com/repos/huggingface/datasets/issues/5378/events
https://github.com/huggingface/datasets/issues/5378
1,503,887,508
I_kwDODunzps5Zo4CU
5,378
The dataset "the_pile", subset "enron_emails" , load_dataset() failure
{ "login": "shaoyuta", "id": 52023469, "node_id": "MDQ6VXNlcjUyMDIzNDY5", "avatar_url": "https://avatars.githubusercontent.com/u/52023469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaoyuta", "html_url": "https://github.com/shaoyuta", "followers_url": "https://api.github.com/users/shaoyuta/followers", "following_url": "https://api.github.com/users/shaoyuta/following{/other_user}", "gists_url": "https://api.github.com/users/shaoyuta/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaoyuta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaoyuta/subscriptions", "organizations_url": "https://api.github.com/users/shaoyuta/orgs", "repos_url": "https://api.github.com/users/shaoyuta/repos", "events_url": "https://api.github.com/users/shaoyuta/events{/privacy}", "received_events_url": "https://api.github.com/users/shaoyuta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting @shaoyuta. We are investigating it.\r\n\r\nWe are transferring the issue to \"the_pile\" Community tab on the Hub: https://huggingface.co/datasets/the_pile/discussions/4" ]
2022-12-20T02:19:13
2022-12-20T07:52:54
2022-12-20T07:52:54
NONE
null
### Describe the bug

Running `datasets.load_dataset("the_pile", "enron_emails")` fails. ![image](https://user-images.githubusercontent.com/52023469/208565302-cfab7b89-0b97-4fa6-a5ba-c11b0b629b1a.png)

### Steps to reproduce the bug

Run the code below in the Python CLI:

>>> import datasets
>>> datasets.load_dataset("the_pile", "enron_emails")

### Expected behavior

The dataset "the_pile", subset "enron_emails" loads successfully.

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5378/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5377/comments
https://api.github.com/repos/huggingface/datasets/issues/5377/events
https://github.com/huggingface/datasets/pull/5377
1,503,477,833
PR_kwDODunzps5Fz5lw
5,377
Add a parallel implementation of to_tf_dataset()
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Failing because the test server uses Py3.7 but the `SharedMemory` features require Py3.8! I forgot we still support 3.7 for another couple of months. I'm not sure exactly how to proceed, whether I should leave this PR until then, or just gate the feature behind a version check and skip the tests until the Python version catches up.", "I haven't played with `NumpyMultiprocessingGenerator` so I can't really help here, but this sounds promising :) Otherwise I think it's also fine to allow `num_workers` only for py>=3.8 for now. You can skip the test on 3.7 and make sure to raise an informative error if someone wants to use `num_workers` with 3.7", "Lots of comments here - I'll reply to the specific code comments underneath them, but in response to the general comments:\r\n\r\n@gante: I think this approach is much more performant than a `multiprocessing.Pool`. The reason is that when results are returned from a process `Pool`, the returned Python objects are pickled by the child processes, sent down a pipe and unpickled by the parent process. This creates a huge single-process bottleneck as the parent has to unpickle lots of large NumPy arrays, which is quite slow.\r\n\r\nWhen you use a `SharedMemory` approach, the data is just **there** for the parent process - the child and the parent are writing to exactly the same array in memory, and no pickling or unpickling occurs. This means the parent can just immediately copy the array (which is much faster than unpickling) and yield it to `tf.data`. We're taking advantage of the fact that we know the data is just big NumPy arrays and we don't need the full generality of `pickle`.\r\n\r\n@lhoestq: Sounds good! I'll add a clear error and skip the tests on Py<=3.7.", "Also, an extra technicality, just for information in case anyone looks at this PR later: Recent versions of Python allow [pickled objects to store out-of-band data](https://peps.python.org/pep-0574/). This allows for very efficient zero-copy unpickling of objects like NumPy arrays, with the unpickled object having a view on the same memory as the original. \r\n\r\nHowever, this explicitly does **not** work when the object is unpickled by a different process than the one that created it. For this to work you must explicitly allocate shared memory and create the array there, which pickle cannot handle for you. 
As a result, if you just benchmark unpickling vs copying of NumPy arrays it can seem like unpickling is very fast - but this is only true when the pickle was created in the unpickling process!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008666 / 0.011353 (-0.002687) | 0.004624 / 0.011008 (-0.006384) | 0.099247 / 0.038508 (0.060739) | 0.029766 / 0.023109 (0.006657) | 0.303347 / 0.275898 (0.027449) | 0.370022 / 0.323480 (0.046542) | 0.007128 / 0.007986 (-0.000857) | 0.003446 / 0.004328 (-0.000883) | 0.076670 / 0.004250 (0.072420) | 0.038892 / 0.037052 (0.001840) | 0.313035 / 0.258489 (0.054546) | 0.350503 / 0.293841 (0.056662) | 0.033732 / 0.128546 (-0.094815) | 0.011644 / 0.075646 (-0.064003) | 0.323295 / 0.419271 (-0.095977) | 0.040336 / 0.043533 (-0.003196) | 0.302253 / 0.255139 (0.047114) | 0.337199 / 0.283200 (0.053999) | 0.089454 / 0.141683 (-0.052229) | 1.624906 / 1.452155 (0.172752) | 1.546187 / 1.492716 (0.053470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184614 / 0.018006 (0.166608) | 0.427397 / 0.000490 (0.426907) | 0.003342 / 0.000200 (0.003142) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023684 / 0.037411 (-0.013727) | 0.100095 / 0.014526 (0.085569) | 0.104996 / 0.176557 (-0.071560) | 0.144719 / 0.737135 (-0.592416) | 0.110759 / 0.296338 (-0.185579) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421108 / 0.215209 (0.205899) | 4.214094 / 2.077655 (2.136440) | 1.906231 / 1.504120 (0.402111) | 1.698000 / 1.541195 (0.156806) | 1.744856 / 1.468490 (0.276366) | 0.693671 / 4.584777 (-3.891106) | 3.362522 / 3.745712 (-0.383190) | 1.878470 / 5.269862 (-3.391392) | 1.167563 / 4.565676 (-3.398113) | 0.082455 / 0.424275 (-0.341820) | 0.012261 / 0.007607 (0.004654) | 0.525196 / 0.226044 (0.299152) | 5.257553 / 2.268929 (2.988624) | 2.298286 / 55.444624 (-53.146339) | 1.956106 / 6.876477 (-4.920371) | 2.006308 / 2.142072 (-0.135764) | 0.811069 / 4.805227 (-3.994158) | 0.150368 / 6.500664 (-6.350296) | 0.065699 / 0.075469 (-0.009771) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224516 / 1.841788 (-0.617272) | 13.619084 / 8.074308 (5.544776) | 14.096666 / 10.191392 (3.905274) | 0.151068 / 0.680424 (-0.529356) | 0.028819 / 0.534201 (-0.505382) | 0.402071 / 0.579283 (-0.177212) | 0.408647 / 0.434364 (-0.025717) | 0.466605 / 0.540337 (-0.073733) | 0.547094 / 1.386936 (-0.839842) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006935 / 0.011353 (-0.004418) | 0.004590 / 0.011008 (-0.006419) | 0.099398 / 0.038508 (0.060890) | 0.028145 / 0.023109 (0.005036) | 0.426582 / 0.275898 (0.150684) | 0.465712 / 0.323480 (0.142233) | 0.005254 / 0.007986 (-0.002731) | 0.004956 / 0.004328 (0.000627) | 0.075616 / 0.004250 (0.071365) | 0.039871 / 0.037052 (0.002819) | 0.428859 / 0.258489 (0.170370) | 0.470839 / 0.293841 (0.176998) | 0.032150 / 0.128546 (-0.096396) | 0.011778 / 0.075646 (-0.063868) | 0.322358 / 0.419271 (-0.096913) | 0.041974 / 0.043533 (-0.001559) | 0.427459 / 0.255139 (0.172320) | 0.446685 / 0.283200 (0.163485) | 0.092000 / 0.141683 (-0.049683) | 1.509231 / 1.452155 (0.057076) | 1.578950 / 1.492716 (0.086234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.168047 / 0.018006 (0.150041) | 0.418993 / 0.000490 (0.418503) | 0.002855 / 0.000200 (0.002655) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025652 / 0.037411 (-0.011759) | 0.100141 / 0.014526 (0.085616) | 0.107293 / 0.176557 (-0.069264) | 0.142857 / 0.737135 (-0.594278) | 0.110933 / 0.296338 (-0.185406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477556 / 0.215209 (0.262347) | 4.777951 / 2.077655 (2.700296) | 2.461885 / 1.504120 (0.957765) | 2.252307 / 1.541195 (0.711112) | 2.307983 / 1.468490 (0.839493) | 0.697570 / 4.584777 (-3.887207) | 3.370323 / 3.745712 (-0.375389) | 3.131333 / 5.269862 (-2.138529) | 1.594839 / 4.565676 (-2.970838) | 0.082333 / 0.424275 (-0.341942) | 0.012574 / 0.007607 (0.004967) | 0.583704 / 0.226044 (0.357660) | 5.817675 / 2.268929 (3.548746) | 2.927054 / 55.444624 (-52.517570) | 2.582929 / 6.876477 (-4.293548) | 2.634275 / 2.142072 (0.492202) | 0.806407 / 4.805227 (-3.998821) | 0.151438 / 6.500664 (-6.349226) | 0.067429 / 0.075469 (-0.008040) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267011 / 1.841788 (-0.574776) | 13.989515 / 8.074308 (5.915207) | 14.087968 / 10.191392 (3.896576) | 0.142130 / 0.680424 (-0.538293) | 0.017201 / 0.534201 (-0.517000) | 0.383394 / 0.579283 (-0.195889) | 0.381921 / 0.434364 (-0.052443) | 0.439169 / 0.540337 (-0.101168) | 0.524215 / 1.386936 (-0.862721) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be2ebc8f3cfeb532c933be2443094603bafcab04 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008489 / 0.011353 (-0.002864) | 0.004617 / 0.011008 (-0.006391) | 0.102035 / 0.038508 (0.063527) | 0.029850 / 0.023109 (0.006741) | 0.296789 / 0.275898 (0.020891) | 0.367270 / 0.323480 (0.043790) | 0.006934 / 0.007986 (-0.001052) | 0.004923 / 0.004328 (0.000595) | 0.079150 / 0.004250 (0.074900) | 0.036884 / 0.037052 (-0.000169) | 0.305747 / 0.258489 (0.047258) | 0.348510 / 0.293841 (0.054669) | 0.034074 / 0.128546 (-0.094472) | 0.011650 / 0.075646 (-0.063997) | 0.324226 / 0.419271 (-0.095045) | 0.041763 / 0.043533 (-0.001770) | 0.300887 / 0.255139 (0.045748) | 0.333393 / 0.283200 (0.050193) | 0.093838 / 0.141683 (-0.047844) | 1.499801 / 1.452155 (0.047646) | 1.505988 / 1.492716 (0.013272) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198610 / 0.018006 (0.180604) | 0.407380 / 0.000490 (0.406891) | 0.000367 / 0.000200 (0.000167) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022858 / 0.037411 (-0.014554) | 0.095727 / 0.014526 (0.081202) | 0.104014 / 0.176557 (-0.072543) | 0.138764 / 0.737135 (-0.598371) | 0.105860 / 0.296338 (-0.190478) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416352 / 0.215209 (0.201143) | 4.150007 / 2.077655 (2.072352) | 1.878727 / 1.504120 (0.374607) | 1.678978 / 1.541195 (0.137783) | 1.711990 / 1.468490 (0.243500) | 0.691722 / 4.584777 (-3.893055) | 3.386466 / 3.745712 (-0.359246) | 1.835730 / 5.269862 (-3.434132) | 1.149975 / 4.565676 (-3.415702) | 0.081914 / 0.424275 (-0.342362) | 0.012238 / 0.007607 (0.004631) | 0.522945 / 0.226044 (0.296900) | 5.251793 / 2.268929 (2.982864) | 2.306907 / 55.444624 (-53.137717) | 1.968400 / 6.876477 (-4.908076) | 1.981154 / 2.142072 (-0.160919) | 0.810126 / 4.805227 (-3.995101) | 0.147876 / 6.500664 (-6.352788) | 0.064042 / 0.075469 (-0.011428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199150 / 1.841788 (-0.642637) | 13.913473 / 8.074308 (5.839165) | 14.079132 / 10.191392 (3.887740) 
| 0.137387 / 0.680424 (-0.543037) | 0.028456 / 0.534201 (-0.505745) | 0.394162 / 0.579283 (-0.185122) | 0.402051 / 0.434364 (-0.032313) | 0.461944 / 0.540337 (-0.078394) | 0.542648 / 1.386936 (-0.844288) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006393 / 0.011353 (-0.004960) | 0.004599 / 0.011008 (-0.006409) | 0.097389 / 0.038508 (0.058881) | 0.027719 / 0.023109 (0.004610) | 0.341060 / 0.275898 (0.065162) | 0.379604 / 0.323480 (0.056124) | 0.004955 / 0.007986 (-0.003030) | 0.003369 / 0.004328 (-0.000959) | 0.075390 / 0.004250 (0.071139) | 0.038518 / 0.037052 (0.001466) | 0.347085 / 0.258489 (0.088596) | 0.393468 / 0.293841 (0.099627) | 0.031482 / 0.128546 (-0.097064) | 0.011585 / 0.075646 (-0.064061) | 0.317969 / 0.419271 (-0.101302) | 0.041389 / 0.043533 (-0.002144) | 0.343812 / 0.255139 (0.088673) | 0.371047 / 0.283200 (0.087848) | 0.090020 / 0.141683 (-0.051663) | 1.461690 / 1.452155 (0.009536) | 1.552458 / 1.492716 (0.059741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188691 / 0.018006 (0.170684) | 0.415635 / 0.000490 (0.415145) | 0.005285 / 0.000200 (0.005085) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024695 / 0.037411 (-0.012716) | 0.098939 / 0.014526 (0.084413) | 0.108472 / 0.176557 (-0.068085) | 0.152635 / 0.737135 (-0.584501) | 0.109947 / 0.296338 (-0.186391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.471975 / 0.215209 (0.256766) | 4.716437 / 2.077655 (2.638782) | 2.420148 / 1.504120 (0.916028) | 2.219864 / 1.541195 (0.678669) | 2.238647 / 1.468490 (0.770157) | 0.697628 / 4.584777 (-3.887149) | 3.530720 / 3.745712 (-0.214993) | 3.327354 / 5.269862 (-1.942508) | 1.665877 / 4.565676 (-2.899800) | 0.082650 / 0.424275 (-0.341625) | 0.012593 / 0.007607 (0.004986) | 0.576109 / 0.226044 (0.350065) | 5.744691 / 2.268929 (3.475762) | 2.863473 / 55.444624 (-52.581152) | 2.529616 / 6.876477 (-4.346861) | 2.562802 / 2.142072 (0.420730) | 0.805631 / 4.805227 (-3.999597) | 0.150788 / 6.500664 (-6.349876) | 0.065743 / 0.075469 (-0.009726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295134 / 1.841788 (-0.546654) | 14.096046 / 8.074308 (6.021738) | 13.901399 / 10.191392 (3.710007) | 0.127481 / 0.680424 (-0.552943) | 0.016666 / 0.534201 (-0.517535) | 0.381819 / 0.579283 (-0.197464) | 0.382629 / 0.434364 (-0.051735) | 0.439354 / 0.540337 (-0.100984) | 0.527662 / 1.386936 (-0.859274) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0fe2ad43f59e65d39f2f2ce7442c76990493deb7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008509 / 0.011353 (-0.002844) | 0.004523 / 0.011008 (-0.006485) | 0.100616 / 0.038508 (0.062108) | 0.029573 / 0.023109 (0.006464) | 0.306414 / 0.275898 (0.030516) | 0.377034 / 0.323480 (0.053554) | 0.007621 / 0.007986 (-0.000365) | 0.003335 / 0.004328 (-0.000993) | 0.078598 / 0.004250 (0.074348) | 0.036902 / 0.037052 (-0.000150) | 0.318146 / 0.258489 (0.059657) | 0.355626 / 0.293841 (0.061785) | 0.033441 / 0.128546 (-0.095105) | 0.011552 / 0.075646 (-0.064094) | 0.322973 / 0.419271 (-0.096299) | 0.040564 / 0.043533 (-0.002968) | 0.306451 / 0.255139 (0.051312) | 0.337591 / 0.283200 (0.054392) | 0.086822 / 0.141683 (-0.054861) | 1.484601 / 1.452155 (0.032447) | 1.542777 / 1.492716 (0.050061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201711 / 0.018006 (0.183705) | 0.418387 / 0.000490 (0.417898) | 0.002753 / 0.000200 (0.002553) | 0.000263 / 0.000054 (0.000209) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023016 / 0.037411 (-0.014395) | 0.097313 / 0.014526 (0.082787) | 0.103435 / 0.176557 (-0.073122) | 0.142665 / 0.737135 (-0.594470) | 0.107397 / 0.296338 (-0.188942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422739 / 0.215209 (0.207530) | 4.220126 / 2.077655 (2.142471) | 1.865447 / 1.504120 (0.361327) | 1.649647 / 1.541195 (0.108453) | 1.711655 / 1.468490 (0.243165) | 0.704269 / 4.584777 (-3.880508) | 3.407390 / 3.745712 (-0.338322) | 1.929224 / 5.269862 (-3.340638) | 1.281225 / 4.565676 (-3.284452) | 0.082924 / 0.424275 (-0.341351) | 0.012588 / 0.007607 (0.004981) | 0.531025 / 0.226044 (0.304980) | 5.339441 / 2.268929 (3.070512) | 2.298969 / 55.444624 (-53.145656) | 1.952145 / 6.876477 (-4.924332) | 2.034754 / 2.142072 (-0.107318) | 0.823672 / 4.805227 (-3.981555) | 0.151465 / 6.500664 (-6.349199) | 0.066663 / 0.075469 (-0.008807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258981 / 1.841788 (-0.582807) | 13.791640 / 8.074308 (5.717332) | 14.001514 / 10.191392 (3.810122) | 0.149805 / 0.680424 (-0.530619) | 0.028614 / 0.534201 (-0.505587) | 0.400266 / 0.579283 (-0.179017) | 0.405891 / 0.434364 (-0.028473) | 0.471903 / 0.540337 (-0.068435) | 0.563656 / 1.386936 (-0.823280) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | 
read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006751 / 0.011353 (-0.004601) | 0.004665 / 0.011008 (-0.006343) | 0.098362 / 0.038508 (0.059854) | 0.027451 / 0.023109 (0.004342) | 0.421859 / 0.275898 (0.145961) | 0.458089 / 0.323480 (0.134609) | 0.004885 / 0.007986 (-0.003101) | 0.003459 / 0.004328 (-0.000870) | 0.075871 / 0.004250 (0.071621) | 0.036591 / 0.037052 (-0.000462) | 0.423307 / 0.258489 (0.164818) | 0.467040 / 0.293841 (0.173199) | 0.031837 / 0.128546 (-0.096710) | 0.011604 / 0.075646 (-0.064042) | 0.321132 / 0.419271 (-0.098140) | 0.041806 / 0.043533 (-0.001727) | 0.421653 / 0.255139 (0.166514) | 0.445896 / 0.283200 (0.162696) | 0.087998 / 0.141683 (-0.053685) | 1.475818 / 1.452155 (0.023664) | 1.559487 / 1.492716 (0.066770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203096 / 0.018006 (0.185090) | 0.401381 / 0.000490 (0.400892) | 0.004037 / 0.000200 (0.003837) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023757 / 0.037411 (-0.013654) | 0.099919 / 0.014526 (0.085393) | 0.108384 / 0.176557 (-0.068173) | 0.143780 / 0.737135 (-0.593355) | 0.111528 / 0.296338 (-0.184811) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475896 / 0.215209 (0.260686) | 4.754567 / 2.077655 (2.676912) | 2.444986 / 1.504120 (0.940866) | 2.231055 / 1.541195 (0.689860) | 2.283646 / 1.468490 (0.815156) | 0.701303 / 4.584777 (-3.883474) | 3.381597 / 3.745712 (-0.364115) | 1.878714 / 5.269862 (-3.391148) | 1.171566 / 4.565676 (-3.394111) | 0.083106 / 0.424275 (-0.341169) | 0.012575 / 0.007607 (0.004967) | 0.582570 / 0.226044 (0.356526) | 5.813677 / 2.268929 (3.544748) | 2.908578 / 55.444624 (-52.536046) | 2.548459 / 6.876477 (-4.328017) | 2.581211 / 2.142072 (0.439139) | 0.807925 / 4.805227 (-3.997302) | 0.153516 / 6.500664 (-6.347148) | 0.068763 / 0.075469 (-0.006706) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249595 / 1.841788 (-0.592193) | 14.208573 / 8.074308 (6.134265) | 14.179174 / 10.191392 (3.987781) | 0.156005 / 0.680424 (-0.524419) | 0.017045 / 0.534201 (-0.517156) | 0.377414 / 0.579283 
(-0.201869) | 0.395291 / 0.434364 (-0.039073) | 0.444642 / 0.540337 (-0.095695) | 0.531626 / 1.386936 (-0.855311) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#52888645daa6854928474df6308bd997c8878ced \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008871 / 0.011353 (-0.002482) | 0.004616 / 0.011008 (-0.006392) | 0.100910 / 0.038508 (0.062402) | 0.030381 / 0.023109 (0.007272) | 0.304636 / 0.275898 (0.028737) | 0.384258 / 0.323480 (0.060778) | 0.007019 / 0.007986 (-0.000966) | 0.004262 / 0.004328 (-0.000066) | 0.077082 / 0.004250 (0.072832) | 0.035235 / 0.037052 (-0.001817) | 0.318293 / 0.258489 (0.059804) | 0.356578 / 0.293841 (0.062737) | 0.033568 / 0.128546 (-0.094978) | 0.011583 / 0.075646 (-0.064063) | 0.322442 / 0.419271 (-0.096830) | 0.041941 / 0.043533 (-0.001592) | 0.310469 / 0.255139 (0.055330) | 0.335626 / 0.283200 (0.052427) | 0.088195 / 0.141683 (-0.053487) | 1.466778 / 1.452155 (0.014623) | 1.512459 / 1.492716 (0.019743) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184126 / 0.018006 (0.166120) | 0.413392 / 0.000490 (0.412902) | 0.002191 / 0.000200 (0.001992) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023426 / 0.037411 (-0.013985) | 0.096240 / 0.014526 (0.081715) | 0.105908 / 0.176557 (-0.070648) | 0.146331 / 0.737135 (-0.590804) | 0.107441 / 0.296338 (-0.188898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420018 / 0.215209 (0.204809) | 4.198129 / 2.077655 (2.120474) | 1.998726 / 1.504120 (0.494606) | 1.870410 / 1.541195 (0.329215) | 1.925160 / 1.468490 (0.456670) | 0.688790 / 4.584777 (-3.895987) | 3.430629 / 3.745712 (-0.315083) | 2.875616 / 5.269862 (-2.394246) | 1.566269 / 4.565676 (-2.999408) | 0.082431 / 0.424275 (-0.341844) | 0.012409 / 0.007607 (0.004802) | 0.536178 / 0.226044 (0.310134) | 5.342918 / 2.268929 (3.073989) | 2.410814 / 55.444624 (-53.033811) | 2.056518 / 6.876477 (-4.819958) | 2.240148 / 2.142072 (0.098075) | 0.804848 / 4.805227 (-4.000379) | 0.147325 / 6.500664 (-6.353340) | 0.064217 / 0.075469 (-0.011252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285725 / 1.841788 (-0.556063) | 13.909739 / 8.074308 (5.835431) | 14.025774 / 10.191392 (3.834382) | 0.142413 / 0.680424 (-0.538011) | 0.028390 / 0.534201 (-0.505811) | 0.402345 / 0.579283 (-0.176939) | 0.404341 / 0.434364 (-0.030023) | 0.463055 / 0.540337 (-0.077282) | 0.556811 / 1.386936 (-0.830125) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006557 / 0.011353 (-0.004795) | 0.004668 / 0.011008 (-0.006340) | 0.098839 / 0.038508 (0.060331) | 0.027618 / 0.023109 (0.004508) | 0.409338 / 0.275898 (0.133440) | 0.444048 / 0.323480 (0.120568) | 0.004881 / 0.007986 (-0.003105) | 0.003434 / 0.004328 (-0.000895) | 0.076497 / 0.004250 (0.072247) | 0.038932 / 0.037052 (0.001880) | 0.411419 / 0.258489 (0.152930) | 0.451167 / 0.293841 (0.157326) | 0.031649 / 0.128546 (-0.096897) | 0.011691 / 0.075646 (-0.063955) | 0.321586 / 0.419271 (-0.097685) | 0.041984 / 0.043533 (-0.001549) | 0.407717 / 0.255139 (0.152578) | 0.434687 / 0.283200 (0.151487) | 0.086419 / 0.141683 (-0.055264) | 1.491755 / 1.452155 (0.039601) | 1.569081 / 1.492716 (0.076364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.231746 / 0.018006 (0.213739) | 0.412271 / 0.000490 (0.411781) | 0.000403 / 0.000200 (0.000203) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024264 / 0.037411 (-0.013147) | 0.100478 / 0.014526 (0.085952) | 0.107065 / 0.176557 (-0.069491) | 0.140724 / 0.737135 (-0.596412) | 0.110631 / 0.296338 (-0.185707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472476 / 0.215209 (0.257267) | 4.738919 / 2.077655 (2.661265) | 2.438049 / 1.504120 (0.933929) | 2.237855 / 1.541195 (0.696660) | 2.282885 / 1.468490 (0.814395) | 0.690420 / 4.584777 (-3.894357) | 3.426487 / 3.745712 (-0.319225) | 1.842443 / 5.269862 (-3.427418) | 1.154466 / 4.565676 (-3.411210) | 0.082166 / 0.424275 (-0.342109) | 0.012309 / 0.007607 (0.004701) | 0.574730 / 0.226044 (0.348686) | 5.737566 / 2.268929 (3.468638) | 2.882405 / 55.444624 (-52.562220) | 2.540276 / 6.876477 (-4.336201) | 2.552356 / 2.142072 (0.410283) | 0.796413 / 4.805227 (-4.008815) | 0.152705 / 6.500664 (-6.347959) | 0.068273 / 0.075469 (-0.007196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244423 / 1.841788 (-0.597365) | 13.827750 / 8.074308 (5.753442) | 14.074083 / 10.191392 (3.882691) | 0.140291 / 0.680424 (-0.540133) | 0.017337 / 0.534201 (-0.516864) | 0.389314 / 0.579283 (-0.189969) | 0.390914 / 0.434364 (-0.043450) | 0.450333 / 0.540337 (-0.090004) | 0.543860 / 1.386936 (-0.843076) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2cdcddc51d3cda24c2d79ad137af9e55d0a38044 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009490 / 0.011353 (-0.001863) | 0.005211 / 0.011008 (-0.005798) | 0.100884 / 0.038508 (0.062376) | 0.035834 / 0.023109 (0.012725) | 0.293623 / 0.275898 (0.017724) | 0.378118 / 0.323480 (0.054638) | 0.008106 / 0.007986 (0.000120) | 0.005339 / 0.004328 (0.001010) | 0.076311 / 0.004250 (0.072061) | 0.045954 / 0.037052 (0.008902) | 0.308163 / 0.258489 (0.049674) | 0.353470 / 0.293841 (0.059629) | 0.038539 / 0.128546 (-0.090008) | 0.012174 / 0.075646 (-0.063472) | 0.334875 / 0.419271 (-0.084396) | 0.048602 / 0.043533 (0.005069) | 0.295803 / 0.255139 (0.040664) | 0.318894 / 0.283200 (0.035695) | 0.105487 / 0.141683 (-0.036195) | 1.433628 / 1.452155 (-0.018526) | 1.466843 / 1.492716 (-0.025873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203426 / 0.018006 (0.185419) | 0.456877 / 0.000490 (0.456387) | 0.001452 / 0.000200 (0.001252) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028308 / 0.037411 (-0.009103) | 0.108965 / 0.014526 (0.094439) | 0.119552 / 0.176557 (-0.057005) | 0.156371 / 0.737135 (-0.580765) | 0.124141 / 0.296338 (-0.172197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400183 / 0.215209 (0.184973) | 3.990983 / 2.077655 (1.913329) | 1.806729 / 1.504120 (0.302609) | 1.611944 / 1.541195 (0.070750) | 1.740019 / 1.468490 (0.271529) | 0.699600 / 4.584777 (-3.885177) | 3.868711 / 3.745712 (0.122999) | 3.249758 / 5.269862 (-2.020103) | 1.832213 / 4.565676 (-2.733463) | 0.085282 / 0.424275 (-0.338993) | 0.012726 / 0.007607 (0.005119) | 0.509385 / 0.226044 (0.283341) | 5.066913 / 2.268929 (2.797984) | 2.325710 / 55.444624 (-53.118914) | 1.962238 / 6.876477 (-4.914239) | 2.017576 / 2.142072 (-0.124496) | 0.839444 / 4.805227 (-3.965783) | 0.166936 / 6.500664 (-6.333728) | 0.064546 / 0.075469 (-0.010923) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196396 / 1.841788 (-0.645392) | 15.077063 / 8.074308 (7.002755) | 14.268103 / 10.191392 (4.076711) | 
0.163782 / 0.680424 (-0.516642) | 0.028794 / 0.534201 (-0.505407) | 0.440564 / 0.579283 (-0.138719) | 0.439826 / 0.434364 (0.005463) | 0.514786 / 0.540337 (-0.025551) | 0.603353 / 1.386936 (-0.783583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007874 / 0.011353 (-0.003479) | 0.005347 / 0.011008 (-0.005661) | 0.099461 / 0.038508 (0.060953) | 0.034010 / 0.023109 (0.010901) | 0.384650 / 0.275898 (0.108752) | 0.423827 / 0.323480 (0.100347) | 0.006201 / 0.007986 (-0.001784) | 0.004212 / 0.004328 (-0.000117) | 0.074354 / 0.004250 (0.070104) | 0.051675 / 0.037052 (0.014623) | 0.392488 / 0.258489 (0.133999) | 0.425828 / 0.293841 (0.131987) | 0.037444 / 0.128546 (-0.091103) | 0.012388 / 0.075646 (-0.063258) | 0.334482 / 0.419271 (-0.084789) | 0.050715 / 0.043533 (0.007182) | 0.378323 / 0.255139 (0.123184) | 0.395450 / 0.283200 (0.112250) | 0.108403 / 0.141683 (-0.033280) | 1.426803 / 1.452155 (-0.025352) | 1.532417 / 1.492716 (0.039701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219989 / 0.018006 (0.201982) | 0.454101 / 0.000490 (0.453611) | 0.000407 / 0.000200 (0.000207) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030590 / 0.037411 (-0.006822) | 0.113483 / 0.014526 (0.098957) | 0.122603 / 0.176557 (-0.053954) | 0.161031 / 0.737135 (-0.576104) | 0.128039 / 0.296338 (-0.168300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new 
/ old (diff) | 0.430458 / 0.215209 (0.215249) | 4.286594 / 2.077655 (2.208940) | 2.056666 / 1.504120 (0.552546) | 1.861142 / 1.541195 (0.319948) | 1.937185 / 1.468490 (0.468695) | 0.701881 / 4.584777 (-3.882896) | 3.970144 / 3.745712 (0.224432) | 2.107118 / 5.269862 (-3.162744) | 1.351561 / 4.565676 (-3.214115) | 0.085470 / 0.424275 (-0.338805) | 0.012366 / 0.007607 (0.004759) | 0.525212 / 0.226044 (0.299168) | 5.301553 / 2.268929 (3.032625) | 2.593862 / 55.444624 (-52.850763) | 2.287315 / 6.876477 (-4.589161) | 2.368249 / 2.142072 (0.226176) | 0.855656 / 4.805227 (-3.949571) | 0.167846 / 6.500664 (-6.332818) | 0.064521 / 0.075469 (-0.010948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237008 / 1.841788 (-0.604779) | 15.784303 / 8.074308 (7.709995) | 14.613081 / 10.191392 (4.421689) | 0.161012 / 0.680424 (-0.519412) | 0.017928 / 0.534201 (-0.516273) | 0.423905 / 0.579283 (-0.155378) | 0.428316 / 0.434364 (-0.006048) | 0.500226 / 0.540337 (-0.040112) | 0.606725 / 1.386936 (-0.780211) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#08473e2ee66acb7e6f82d3591bb9b03924a661ed \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008874 / 0.011353 (-0.002479) | 0.004581 / 0.011008 (-0.006428) | 0.100180 / 0.038508 (0.061672) | 0.029990 / 0.023109 (0.006880) | 0.301616 / 0.275898 (0.025718) | 0.343662 / 0.323480 (0.020183) | 0.007111 / 0.007986 (-0.000875) | 0.003428 / 0.004328 (-0.000900) | 0.078031 / 0.004250 (0.073780) | 0.037332 / 0.037052 (0.000279) | 0.301977 / 0.258489 (0.043488) | 0.345581 / 0.293841 (0.051740) | 0.034305 / 0.128546 (-0.094241) | 0.011660 / 0.075646 (-0.063986) | 0.322289 / 0.419271 (-0.096982) | 0.041488 / 0.043533 (-0.002045) | 0.301612 / 0.255139 (0.046473) | 0.328174 / 0.283200 (0.044974) | 0.085561 / 0.141683 (-0.056122) | 1.482114 / 1.452155 (0.029959) | 1.556194 / 1.492716 (0.063478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186989 / 0.018006 (0.168983) | 0.421499 / 0.000490 (0.421009) | 0.001193 / 0.000200 (0.000993) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023551 / 0.037411 (-0.013861) | 0.099868 / 0.014526 (0.085343) | 0.105233 / 0.176557 (-0.071324) | 0.141628 / 0.737135 (-0.595507) | 0.109004 / 0.296338 (-0.187335) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415189 / 0.215209 (0.199979) | 4.145716 / 2.077655 (2.068061) | 1.837917 / 1.504120 (0.333797) | 1.635043 / 1.541195 (0.093848) | 1.683299 / 1.468490 (0.214809) | 0.688538 / 4.584777 (-3.896239) | 3.412628 / 3.745712 (-0.333084) | 1.877456 / 5.269862 (-3.392405) | 1.154129 / 4.565676 (-3.411547) | 0.081850 / 0.424275 (-0.342425) | 0.012309 / 0.007607 (0.004702) | 0.522830 / 0.226044 (0.296785) | 5.238685 / 2.268929 (2.969756) | 2.277840 / 55.444624 (-53.166784) | 1.941787 / 6.876477 (-4.934690) | 1.999688 / 2.142072 (-0.142385) | 0.807590 / 4.805227 (-3.997637) | 0.148157 / 6.500664 (-6.352507) | 0.064898 / 0.075469 (-0.010571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253859 / 1.841788 (-0.587929) | 13.676097 / 8.074308 (5.601789) | 14.237837 / 10.191392 (4.046444) | 0.137178 / 0.680424 (-0.543246) | 0.028971 / 0.534201 (-0.505230) | 0.400380 / 0.579283 (-0.178903) | 0.409990 / 0.434364 (-0.024374) | 0.462552 / 0.540337 (-0.077786) | 0.552153 / 1.386936 (-0.834783) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | 
read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006831 / 0.011353 (-0.004522) | 0.004627 / 0.011008 (-0.006381) | 0.099883 / 0.038508 (0.061375) | 0.028072 / 0.023109 (0.004962) | 0.343556 / 0.275898 (0.067658) | 0.386792 / 0.323480 (0.063312) | 0.005080 / 0.007986 (-0.002906) | 0.003508 / 0.004328 (-0.000820) | 0.077803 / 0.004250 (0.073552) | 0.040038 / 0.037052 (0.002985) | 0.345089 / 0.258489 (0.086600) | 0.396078 / 0.293841 (0.102238) | 0.032241 / 0.128546 (-0.096305) | 0.011711 / 0.075646 (-0.063935) | 0.320531 / 0.419271 (-0.098740) | 0.043658 / 0.043533 (0.000125) | 0.344696 / 0.255139 (0.089557) | 0.389847 / 0.283200 (0.106648) | 0.092328 / 0.141683 (-0.049355) | 1.477290 / 1.452155 (0.025136) | 1.548698 / 1.492716 (0.055982) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236073 / 0.018006 (0.218067) | 0.422113 / 0.000490 (0.421624) | 0.000431 / 0.000200 (0.000231) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024738 / 0.037411 (-0.012673) | 0.100546 / 0.014526 (0.086020) | 0.107550 / 0.176557 (-0.069006) | 0.146056 / 0.737135 (-0.591079) | 0.112665 / 0.296338 (-0.183674) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.490259 / 0.215209 (0.275050) | 4.907994 / 2.077655 (2.830339) | 2.547175 / 1.504120 (1.043055) | 2.344419 / 1.541195 (0.803224) | 2.403985 / 1.468490 (0.935495) | 0.696011 / 4.584777 (-3.888766) | 3.442426 / 3.745712 (-0.303286) | 1.878702 / 5.269862 (-3.391159) | 1.158280 / 4.565676 (-3.407396) | 0.082300 / 0.424275 (-0.341975) | 0.012513 / 0.007607 (0.004906) | 0.602696 / 0.226044 (0.376651) | 6.014592 / 2.268929 (3.745663) | 3.014466 / 55.444624 (-52.430159) | 2.669376 / 6.876477 (-4.207101) | 2.724485 / 2.142072 (0.582412) | 0.799795 / 4.805227 (-4.005432) | 0.151220 / 6.500664 (-6.349444) | 0.067486 / 0.075469 (-0.007983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281265 / 1.841788 (-0.560523) | 14.362284 / 8.074308 (6.287976) | 14.313690 / 10.191392 (4.122298) | 0.142870 / 0.680424 (-0.537554) | 0.017206 / 0.534201 (-0.516995) | 0.380084 / 0.579283 
(-0.199199) | 0.388161 / 0.434364 (-0.046203) | 0.442617 / 0.540337 (-0.097721) | 0.528487 / 1.386936 (-0.858449) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#452b7f8ae78967dc662f5436e751233d46c62e78 \"CML watermark\")\n", "@lhoestq @amyeroberts @gante I did a substantial rewrite and all tests are passing now (Windows seems to time out or something and I can't figure out why - not sure if that's related to this PR!). I also confirmed tests are passing locally with Py==3.10. \r\n\r\nAside from incorporating everyone's comments, I also made a context manager to create and handle shared memory - this ensures that shared memory is cleaned up even if execution is interrupted. Also, shared memory names include a UUID string now to avoid collisions. Finally, string arrays are now split up into fixed-width character arrays in the workers so that they can be passed through shared memory, and the parent process reconstructs them into string arrays.", "Update: `test_arrow_dataset.py` ran fine in this branch on my Windows machine (Py 3.10), so I have no idea what's up with those tests", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008852 / 0.011353 (-0.002500) | 0.004545 / 0.011008 (-0.006464) | 0.099814 / 0.038508 (0.061306) | 0.030314 / 0.023109 (0.007205) | 0.310426 / 0.275898 (0.034528) | 0.366893 / 0.323480 (0.043413) | 0.007183 / 0.007986 (-0.000802) | 0.003476 / 0.004328 (-0.000853) | 0.077566 / 0.004250 (0.073315) | 0.038269 / 0.037052 (0.001217) | 0.319133 / 0.258489 (0.060644) | 0.352399 / 0.293841 (0.058558) | 0.033847 / 0.128546 (-0.094700) | 0.011568 / 0.075646 (-0.064078) | 0.321355 / 0.419271 (-0.097917) | 0.040719 / 0.043533 (-0.002814) | 0.304812 / 0.255139 (0.049673) | 0.329512 / 0.283200 (0.046312) | 0.088045 / 0.141683 (-0.053638) | 1.514182 / 1.452155 (0.062027) | 1.529459 / 1.492716 (0.036742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216749 / 0.018006 (0.198743) | 0.409909 / 0.000490 (0.409419) | 0.002790 / 0.000200 (0.002590) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023390 / 0.037411 (-0.014021) | 0.095955 / 0.014526 (0.081430) | 0.104749 / 0.176557 (-0.071807) | 0.143414 / 0.737135 (-0.593721) | 0.109011 / 0.296338 (-0.187328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420410 / 0.215209 (0.205201) | 4.185745 / 2.077655 (2.108090) | 1.910207 / 1.504120 (0.406087) | 1.679330 / 1.541195 (0.138135) | 1.727134 / 1.468490 (0.258644) | 0.692379 / 4.584777 (-3.892398) | 3.358731 / 3.745712 (-0.386982) | 2.914657 / 5.269862 (-2.355205) | 1.506083 / 4.565676 (-3.059594) | 0.081922 / 0.424275 (-0.342353) | 0.012691 / 0.007607 (0.005084) | 0.530942 / 0.226044 (0.304897) | 5.357642 / 2.268929 (3.088714) | 2.387347 / 55.444624 (-53.057277) | 2.030001 / 6.876477 (-4.846476) | 2.026405 / 2.142072 (-0.115667) | 0.809406 / 4.805227 (-3.995821) | 0.149003 / 6.500664 (-6.351661) | 0.066910 / 0.075469 (-0.008559) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278160 / 1.841788 (-0.563627) | 13.632742 / 8.074308 (5.558434) | 13.995537 / 10.191392 (3.804145) | 0.136507 / 0.680424 (-0.543917) | 0.028817 / 0.534201 (-0.505384) | 0.394842 / 0.579283 (-0.184441) | 0.399526 / 0.434364 (-0.034838) | 0.459174 / 0.540337 (-0.081163) | 0.536877 / 1.386936 (-0.850059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006814 / 0.011353 (-0.004539) | 0.004456 / 
0.011008 (-0.006552) | 0.098386 / 0.038508 (0.059878) | 0.028124 / 0.023109 (0.005015) | 0.409004 / 0.275898 (0.133106) | 0.446746 / 0.323480 (0.123266) | 0.005108 / 0.007986 (-0.002877) | 0.004807 / 0.004328 (0.000479) | 0.075751 / 0.004250 (0.071500) | 0.039297 / 0.037052 (0.002244) | 0.413198 / 0.258489 (0.154709) | 0.452124 / 0.293841 (0.158283) | 0.032534 / 0.128546 (-0.096012) | 0.011689 / 0.075646 (-0.063957) | 0.325465 / 0.419271 (-0.093806) | 0.041347 / 0.043533 (-0.002185) | 0.411489 / 0.255139 (0.156350) | 0.447120 / 0.283200 (0.163920) | 0.093058 / 0.141683 (-0.048625) | 1.489903 / 1.452155 (0.037748) | 1.580771 / 1.492716 (0.088055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192619 / 0.018006 (0.174613) | 0.399201 / 0.000490 (0.398711) | 0.002894 / 0.000200 (0.002694) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025120 / 0.037411 (-0.012292) | 0.100126 / 0.014526 (0.085600) | 0.108669 / 0.176557 (-0.067887) | 0.148687 / 0.737135 (-0.588448) | 0.112286 / 0.296338 (-0.184052) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438866 / 0.215209 (0.223657) | 4.382418 / 2.077655 (2.304764) | 2.106450 / 1.504120 (0.602330) | 1.885105 / 1.541195 (0.343910) | 1.922948 / 1.468490 (0.454458) | 0.693145 / 4.584777 (-3.891632) | 3.378206 / 3.745712 (-0.367506) | 1.867295 / 5.269862 (-3.402566) | 1.164999 / 4.565676 (-3.400678) | 0.081918 / 0.424275 (-0.342357) | 0.012225 / 0.007607 (0.004618) | 0.547114 / 0.226044 (0.321069) | 5.454208 / 2.268929 (3.185279) | 2.532112 / 55.444624 (-52.912512) | 2.192573 / 6.876477 (-4.683904) | 2.225364 / 2.142072 (0.083291) | 0.797165 / 4.805227 (-4.008062) | 0.151185 / 6.500664 (-6.349480) | 0.067512 / 0.075469 (-0.007957) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303905 / 1.841788 (-0.537883) | 14.107678 / 8.074308 (6.033370) | 14.147630 / 10.191392 (3.956238) | 0.156597 / 0.680424 (-0.523827) | 0.017037 / 0.534201 (-0.517164) | 0.383202 / 0.579283 (-0.196081) | 0.385340 / 0.434364 (-0.049024) | 0.443338 / 0.540337 (-0.097000) | 0.542345 / 1.386936 (-0.844591) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#38228533a03767aab713a3806aac0e8503668c68 \"CML watermark\")\n", "<details>\n<summary>Show 
benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009982 / 0.011353 (-0.001371) | 0.005327 / 0.011008 (-0.005681) | 0.099092 / 0.038508 (0.060584) | 0.035824 / 0.023109 (0.012715) | 0.303258 / 0.275898 (0.027360) | 0.335379 / 0.323480 (0.011899) | 0.008192 / 0.007986 (0.000207) | 0.004242 / 0.004328 (-0.000087) | 0.076277 / 0.004250 (0.072026) | 0.043851 / 0.037052 (0.006799) | 0.307750 / 0.258489 (0.049261) | 0.348459 / 0.293841 (0.054618) | 0.038943 / 0.128546 (-0.089604) | 0.012128 / 0.075646 (-0.063519) | 0.334143 / 0.419271 (-0.085128) | 0.047865 / 0.043533 (0.004332) | 0.300909 / 0.255139 (0.045770) | 0.320879 / 0.283200 (0.037680) | 0.103812 / 0.141683 (-0.037871) | 1.468646 / 1.452155 (0.016491) | 1.557660 / 1.492716 (0.064944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244108 / 0.018006 (0.226102) | 0.554895 / 0.000490 (0.554405) | 0.005311 / 0.000200 (0.005111) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028771 / 0.037411 (-0.008640) | 0.108133 / 0.014526 (0.093608) | 0.120098 / 0.176557 (-0.056458) | 0.159815 / 0.737135 (-0.577320) | 0.125437 / 0.296338 (-0.170901) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397675 / 0.215209 (0.182466) | 3.975839 / 2.077655 (1.898184) | 1.797803 / 1.504120 (0.293683) | 1.612517 / 1.541195 (0.071322) | 1.659086 / 1.468490 (0.190596) | 0.679822 / 
4.584777 (-3.904955) | 3.688321 / 3.745712 (-0.057391) | 2.155285 / 5.269862 (-3.114576) | 1.466453 / 4.565676 (-3.099223) | 0.084102 / 0.424275 (-0.340173) | 0.012074 / 0.007607 (0.004467) | 0.503744 / 0.226044 (0.277699) | 5.075599 / 2.268929 (2.806670) | 2.312149 / 55.444624 (-53.132476) | 1.975028 / 6.876477 (-4.901449) | 2.069554 / 2.142072 (-0.072519) | 0.828329 / 4.805227 (-3.976898) | 0.162816 / 6.500664 (-6.337849) | 0.063813 / 0.075469 (-0.011656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.173327 / 1.841788 (-0.668461) | 15.281584 / 8.074308 (7.207276) | 14.450851 / 10.191392 (4.259459) | 0.165621 / 0.680424 (-0.514802) | 0.028779 / 0.534201 (-0.505422) | 0.438483 / 0.579283 (-0.140800) | 0.438477 / 0.434364 (0.004113) | 0.517703 / 0.540337 (-0.022634) | 0.615119 / 1.386936 (-0.771817) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007013 / 0.011353 (-0.004340) | 0.005272 / 0.011008 (-0.005736) | 0.097203 / 0.038508 (0.058695) | 0.033103 / 0.023109 (0.009994) | 0.380203 / 0.275898 (0.104305) | 0.414868 / 0.323480 (0.091388) | 0.006326 / 0.007986 (-0.001659) | 0.005433 / 0.004328 (0.001104) | 0.074299 / 0.004250 (0.070049) | 0.049418 / 0.037052 (0.012366) | 0.388771 / 0.258489 (0.130282) | 0.435169 / 0.293841 (0.141328) | 0.036170 / 0.128546 (-0.092377) | 0.012452 / 0.075646 (-0.063195) | 0.331215 / 0.419271 (-0.088056) | 0.048577 / 0.043533 (0.005044) | 0.381491 / 0.255139 (0.126352) | 0.396731 / 0.283200 (0.113531) | 0.106435 / 0.141683 (-0.035248) | 1.446437 / 1.452155 (-0.005718) | 1.542337 / 1.492716 (0.049621) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216714 / 0.018006 (0.198707) | 0.562460 / 0.000490 (0.561970) | 0.003636 / 0.000200 (0.003436) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028726 / 0.037411 (-0.008686) | 0.111993 / 0.014526 (0.097467) | 0.125325 / 0.176557 (-0.051232) | 0.157779 / 0.737135 (-0.579356) | 0.130633 / 0.296338 (-0.165705) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440520 / 0.215209 (0.225311) | 4.396283 / 2.077655 (2.318628) | 2.204714 / 1.504120 (0.700594) | 2.011667 / 1.541195 (0.470473) | 2.050518 / 1.468490 (0.582028) | 0.695204 / 4.584777 (-3.889573) | 3.779699 / 3.745712 (0.033987) | 2.096064 / 5.269862 (-3.173798) | 1.325446 / 4.565676 (-3.240230) | 0.085315 / 0.424275 (-0.338960) | 0.012178 / 0.007607 (0.004570) | 0.550478 / 0.226044 (0.324434) | 5.471872 / 2.268929 (3.202943) | 2.687147 / 55.444624 (-52.757478) | 2.348465 / 6.876477 (-4.528011) | 2.409700 / 2.142072 (0.267628) | 0.839468 / 4.805227 (-3.965760) | 0.167030 / 6.500664 (-6.333635) | 0.063243 / 0.075469 (-0.012226) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257347 / 1.841788 (-0.584441) | 15.157821 / 8.074308 (7.083512) | 14.646381 / 10.191392 (4.454989) | 0.185550 / 0.680424 (-0.494874) | 0.018441 / 0.534201 (-0.515760) | 0.423330 / 0.579283 (-0.155954) | 0.426204 / 0.434364 (-0.008160) | 0.498985 / 0.540337 (-0.041352) | 0.608432 / 1.386936 (-0.778504) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0f96e349ec5665e1e4135b5a108ba5db227bd3b1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010856 / 0.011353 (-0.000497) | 0.005897 / 0.011008 (-0.005111) | 0.117826 / 0.038508 (0.079317) | 0.041899 / 0.023109 (0.018790) | 0.353804 / 0.275898 (0.077906) | 0.431021 / 0.323480 (0.107541) | 0.009288 / 0.007986 (0.001303) | 0.004556 / 0.004328 (0.000227) | 0.089344 / 0.004250 (0.085094) | 0.052224 / 0.037052 (0.015172) | 0.373242 / 0.258489 (0.114753) | 0.420667 / 0.293841 (0.126826) | 0.044191 / 0.128546 (-0.084355) | 0.014083 / 0.075646 (-0.061564) | 0.400373 / 0.419271 (-0.018898) | 0.056119 / 0.043533 (0.012586) | 0.363302 / 0.255139 (0.108163) | 0.382073 / 0.283200 (0.098873) | 0.118646 / 0.141683 (-0.023037) | 1.696576 / 1.452155 (0.244422) | 1.756518 / 1.492716 (0.263802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216388 / 0.018006 (0.198382) | 0.485732 / 0.000490 (0.485242) | 0.004012 / 0.000200 (0.003812) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032095 / 0.037411 (-0.005316) | 0.128954 / 0.014526 (0.114429) | 0.137564 / 0.176557 (-0.038993) | 0.184315 / 0.737135 (-0.552820) | 0.144707 / 0.296338 (-0.151631) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472792 / 0.215209 (0.257583) | 4.723044 / 2.077655 (2.645390) | 2.115075 / 1.504120 (0.610955) | 1.898993 / 1.541195 (0.357798) | 1.972894 / 1.468490 (0.504404) | 0.807210 / 4.584777 (-3.777567) | 4.493139 / 3.745712 (0.747427) | 2.501053 / 5.269862 (-2.768808) | 1.686121 / 4.565676 (-2.879556) | 0.099545 / 0.424275 (-0.324730) | 0.014360 / 0.007607 (0.006753) | 0.596235 / 0.226044 (0.370191) | 5.944285 / 2.268929 (3.675357) | 2.654944 / 55.444624 (-52.789681) | 2.281451 / 6.876477 (-4.595026) | 2.448407 / 2.142072 (0.306334) | 1.000512 / 4.805227 (-3.804716) | 0.196413 / 6.500664 (-6.304251) | 0.075810 / 0.075469 (0.000341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.435707 / 1.841788 (-0.406081) | 17.931070 / 8.074308 (9.856762) | 16.635522 / 10.191392 (6.444130) | 0.189119 / 0.680424 (-0.491304) | 0.034392 / 0.534201 (-0.499809) | 0.519041 / 0.579283 (-0.060242) | 0.516159 / 0.434364 (0.081795) | 0.601180 / 0.540337 (0.060843) | 0.713180 / 1.386936 (-0.673756) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008741 / 0.011353 (-0.002612) | 0.006102 / 0.011008 (-0.004906) | 0.114787 / 0.038508 (0.076279) | 0.039610 / 0.023109 (0.016501) | 0.451730 / 0.275898 (0.175832) | 0.488820 / 0.323480 (0.165340) | 0.006979 / 0.007986 (-0.001006) | 0.006458 / 0.004328 (0.002130) | 0.086505 / 0.004250 (0.082254) | 0.057684 / 0.037052 (0.020632) | 0.451354 / 0.258489 (0.192865) | 0.523143 / 0.293841 (0.229302) | 0.043224 / 0.128546 (-0.085323) | 0.014671 / 0.075646 (-0.060975) | 0.398030 / 0.419271 (-0.021241) | 0.063650 / 0.043533 (0.020117) | 0.448324 / 0.255139 (0.193185) | 0.476560 / 0.283200 (0.193361) | 0.125772 / 0.141683 (-0.015911) | 1.801051 / 1.452155 (0.348896) | 1.872736 / 1.492716 (0.380020) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256146 / 0.018006 (0.238139) | 0.486915 / 0.000490 (0.486425) | 0.000513 / 0.000200 (0.000313) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035242 / 0.037411 (-0.002170) | 0.134322 / 0.014526 (0.119797) | 0.144786 / 0.176557 (-0.031770) | 0.188786 / 0.737135 (-0.548349) | 0.151737 / 0.296338 (-0.144602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506047 / 0.215209 (0.290838) | 5.028253 / 2.077655 (2.950598) | 2.393070 / 1.504120 (0.888950) | 2.157847 / 1.541195 (0.616652) | 2.229412 / 1.468490 (0.760922) | 0.828973 / 4.584777 
(-3.755804) | 4.741470 / 3.745712 (0.995758) | 4.048118 / 5.269862 (-1.221744) | 2.573818 / 4.565676 (-1.991859) | 0.101019 / 0.424275 (-0.323256) | 0.014640 / 0.007607 (0.007033) | 0.632591 / 0.226044 (0.406546) | 6.289153 / 2.268929 (4.020224) | 2.977261 / 55.444624 (-52.467363) | 2.554396 / 6.876477 (-4.322081) | 2.619446 / 2.142072 (0.477374) | 0.988376 / 4.805227 (-3.816851) | 0.196895 / 6.500664 (-6.303769) | 0.076355 / 0.075469 (0.000886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.493570 / 1.841788 (-0.348218) | 18.422758 / 8.074308 (10.348449) | 17.007352 / 10.191392 (6.815960) | 0.191903 / 0.680424 (-0.488521) | 0.020974 / 0.534201 (-0.513227) | 0.500573 / 0.579283 (-0.078710) | 0.489381 / 0.434364 (0.055017) | 0.580765 / 0.540337 (0.040428) | 0.698907 / 1.386936 (-0.688029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fa9baa268a6d285ab0a61cc37413392c94cfe2e8 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008979 / 0.011353 (-0.002374) | 0.004497 / 0.011008 (-0.006511) | 0.102227 / 0.038508 (0.063719) | 0.031302 / 0.023109 (0.008193) | 0.298488 / 0.275898 (0.022590) | 0.372589 / 0.323480 (0.049109) | 0.007261 / 0.007986 (-0.000725) | 0.003542 / 0.004328 (-0.000786) | 0.078503 / 0.004250 (0.074253) | 0.039474 / 0.037052 (0.002422) | 0.310991 / 0.258489 (0.052502) | 0.353245 / 0.293841 (0.059404) | 0.033798 / 0.128546 (-0.094749) | 0.011634 / 0.075646 (-0.064012) | 0.321141 / 0.419271 (-0.098131) | 0.041264 / 0.043533 (-0.002268) | 0.300900 / 0.255139 (0.045761) | 0.326255 / 0.283200 (0.043055) | 0.092477 / 0.141683 (-0.049205) | 1.478921 / 1.452155 (0.026766) | 1.514915 / 1.492716 (0.022198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184415 / 0.018006 (0.166408) | 0.428986 / 0.000490 (0.428497) | 0.002590 / 0.000200 (0.002390) | 0.000072 / 0.000054 (0.000018) 
|\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023730 / 0.037411 (-0.013681) | 0.099846 / 0.014526 (0.085320) | 0.107075 / 0.176557 (-0.069482) | 0.147475 / 0.737135 (-0.589661) | 0.111802 / 0.296338 (-0.184537) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413704 / 0.215209 (0.198494) | 4.144498 / 2.077655 (2.066843) | 1.855900 / 1.504120 (0.351780) | 1.647958 / 1.541195 (0.106763) | 1.712437 / 1.468490 (0.243947) | 0.688382 / 4.584777 (-3.896395) | 3.432136 / 3.745712 (-0.313576) | 2.837211 / 5.269862 (-2.432651) | 1.519004 / 4.565676 (-3.046672) | 0.082429 / 0.424275 (-0.341846) | 0.012610 / 0.007607 (0.005003) | 0.525078 / 0.226044 (0.299034) | 5.272932 / 2.268929 (3.004003) | 2.340482 / 55.444624 (-53.104143) | 2.007372 / 6.876477 (-4.869104) | 2.060567 / 2.142072 (-0.081506) | 0.806476 / 4.805227 (-3.998752) | 0.149421 / 6.500664 (-6.351243) | 0.066252 / 0.075469 (-0.009218) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235078 / 1.841788 (-0.606710) | 13.870758 / 8.074308 (5.796450) | 14.104582 / 10.191392 (3.913190) | 0.159375 / 0.680424 (-0.521049) | 0.029233 / 0.534201 (-0.504968) | 0.392184 / 0.579283 (-0.187099) | 0.407909 / 0.434364 (-0.026455) | 0.458757 / 0.540337 (-0.081581) | 0.547681 / 1.386936 (-0.839255) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007194 / 0.011353 (-0.004159) | 0.004578 / 0.011008 (-0.006431) | 0.098936 / 0.038508 (0.060428) | 0.029639 / 0.023109 (0.006530) | 0.347241 / 0.275898 (0.071343) | 0.378838 / 0.323480 (0.055358) | 0.005632 / 0.007986 (-0.002353) | 0.003469 / 0.004328 (-0.000860) | 0.075536 / 0.004250 (0.071285) | 0.043301 / 0.037052 (0.006249) | 0.348091 / 0.258489 (0.089602) | 0.388595 / 0.293841 (0.094754) | 0.033512 / 0.128546 (-0.095034) | 0.011754 / 0.075646 (-0.063892) | 0.321003 / 0.419271 (-0.098268) | 0.044634 / 0.043533 (0.001101) | 0.346688 / 0.255139 (0.091549) | 0.366346 / 0.283200 (0.083147) | 0.093650 / 0.141683 (-0.048033) | 1.509913 / 1.452155 (0.057759) | 1.596414 / 1.492716 (0.103698) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230466 / 0.018006 (0.212459) | 0.417106 / 0.000490 (0.416617) | 0.000959 / 0.000200 (0.000759) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025581 / 0.037411 (-0.011830) | 0.105246 / 0.014526 (0.090720) | 0.108997 / 0.176557 (-0.067560) | 0.144342 / 0.737135 (-0.592794) | 0.113911 / 0.296338 (-0.182427) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479608 / 0.215209 (0.264399) | 4.766081 / 2.077655 (2.688426) | 2.446597 / 1.504120 (0.942477) | 2.228278 / 1.541195 (0.687083) | 2.289943 / 1.468490 (0.821453) | 0.703146 / 4.584777 (-3.881631) | 3.414150 / 3.745712 (-0.331562) | 2.957730 / 5.269862 (-2.312132) | 1.531524 / 4.565676 (-3.034152) | 0.083449 / 0.424275 (-0.340826) | 0.012684 / 0.007607 (0.005077) | 0.587622 / 0.226044 (0.361578) | 5.888791 / 2.268929 (3.619863) | 2.884200 / 55.444624 (-52.560424) | 2.543739 / 6.876477 (-4.332737) | 2.596245 / 2.142072 (0.454173) | 0.813070 / 4.805227 (-3.992157) | 0.152706 / 6.500664 (-6.347958) | 0.069257 / 0.075469 (-0.006212) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.302945 / 1.841788 (-0.538842) | 14.484051 / 8.074308 (6.409743) | 14.216143 / 10.191392 (4.024751) | 0.154537 / 0.680424 (-0.525886) | 0.016909 / 0.534201 (-0.517292) | 0.389433 / 0.579283 (-0.189850) | 0.393280 / 0.434364 (-0.041084) | 0.446884 / 0.540337 (-0.093453) | 0.534394 / 1.386936 (-0.852542) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2bcdeb952c57c5f22643061d49d16014a7b6426a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008822 / 0.011353 (-0.002530) | 0.004826 / 0.011008 (-0.006182) | 0.102710 / 0.038508 (0.064202) | 0.030353 / 0.023109 (0.007244) | 0.297224 / 0.275898 (0.021326) | 0.371861 / 0.323480 (0.048381) | 0.007266 / 0.007986 (-0.000720) | 0.003632 / 0.004328 (-0.000696) | 0.079960 / 0.004250 (0.075710) | 0.036908 / 0.037052 (-0.000144) | 0.309582 / 0.258489 (0.051093) | 0.350108 / 0.293841 (0.056267) | 0.034280 / 0.128546 (-0.094266) | 0.011739 / 0.075646 (-0.063907) | 0.323217 / 0.419271 (-0.096054) | 0.043491 / 0.043533 (-0.000042) | 0.298454 / 0.255139 (0.043315) | 0.326735 / 0.283200 (0.043535) | 0.093955 / 0.141683 (-0.047728) | 1.494313 / 1.452155 (0.042159) | 1.562104 / 1.492716 (0.069388) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182796 / 0.018006 (0.164790) | 0.420133 / 0.000490 (0.419643) | 0.002537 / 0.000200 (0.002337) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023143 / 0.037411 (-0.014269) | 0.098560 / 0.014526 (0.084034) | 0.105060 / 0.176557 (-0.071496) | 0.140269 / 0.737135 (-0.596866) | 0.109120 / 0.296338 (-0.187219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419907 / 0.215209 
(0.204698) | 4.196179 / 2.077655 (2.118524) | 1.887663 / 1.504120 (0.383543) | 1.686232 / 1.541195 (0.145037) | 1.741741 / 1.468490 (0.273251) | 0.696222 / 4.584777 (-3.888555) | 3.400250 / 3.745712 (-0.345462) | 1.875058 / 5.269862 (-3.394803) | 1.159466 / 4.565676 (-3.406211) | 0.082520 / 0.424275 (-0.341755) | 0.012408 / 0.007607 (0.004801) | 0.525212 / 0.226044 (0.299168) | 5.283691 / 2.268929 (3.014762) | 2.314487 / 55.444624 (-53.130138) | 1.966212 / 6.876477 (-4.910265) | 2.023458 / 2.142072 (-0.118615) | 0.808896 / 4.805227 (-3.996331) | 0.148973 / 6.500664 (-6.351691) | 0.065378 / 0.075469 (-0.010091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223833 / 1.841788 (-0.617955) | 14.053651 / 8.074308 (5.979343) | 14.072165 / 10.191392 (3.880773) | 0.156006 / 0.680424 (-0.524418) | 0.028665 / 0.534201 (-0.505536) | 0.392099 / 0.579283 (-0.187184) | 0.401460 / 0.434364 (-0.032904) | 0.462184 / 0.540337 (-0.078153) | 0.540459 / 1.386936 (-0.846477) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006907 / 0.011353 (-0.004446) | 0.004585 / 0.011008 (-0.006423) | 0.099027 / 0.038508 (0.060519) | 0.028317 / 0.023109 (0.005208) | 0.421068 / 0.275898 (0.145170) | 0.450712 / 0.323480 (0.127233) | 0.005229 / 0.007986 (-0.002756) | 0.004873 / 0.004328 (0.000545) | 0.077374 / 0.004250 (0.073124) | 0.042530 / 0.037052 (0.005477) | 0.417392 / 0.258489 (0.158903) | 0.462605 / 0.293841 (0.168764) | 0.032195 / 0.128546 (-0.096351) | 0.011777 / 0.075646 (-0.063870) | 0.321927 / 0.419271 (-0.097344) | 0.041999 / 0.043533 (-0.001533) | 0.419402 / 0.255139 (0.164263) | 0.437179 / 0.283200 (0.153979) | 0.089549 / 0.141683 (-0.052134) | 1.469525 / 1.452155 (0.017370) | 1.586407 / 1.492716 (0.093691) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209533 / 0.018006 (0.191526) | 0.413886 / 0.000490 (0.413396) | 0.003357 / 0.000200 (0.003157) | 0.000121 / 0.000054 
(0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026133 / 0.037411 (-0.011278) | 0.103128 / 0.014526 (0.088602) | 0.110604 / 0.176557 (-0.065952) | 0.153055 / 0.737135 (-0.584080) | 0.112257 / 0.296338 (-0.184081) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471281 / 0.215209 (0.256072) | 4.708361 / 2.077655 (2.630706) | 2.572681 / 1.504120 (1.068561) | 2.370536 / 1.541195 (0.829341) | 2.456010 / 1.468490 (0.987520) | 0.694173 / 4.584777 (-3.890603) | 3.434511 / 3.745712 (-0.311201) | 1.877169 / 5.269862 (-3.392693) | 1.158387 / 4.565676 (-3.407289) | 0.081849 / 0.424275 (-0.342426) | 0.012176 / 0.007607 (0.004569) | 0.581736 / 0.226044 (0.355692) | 5.803173 / 2.268929 (3.534245) | 3.040003 / 55.444624 (-52.404621) | 2.704698 / 6.876477 (-4.171779) | 2.760138 / 2.142072 (0.618065) | 0.802557 / 4.805227 (-4.002671) | 0.151397 / 6.500664 (-6.349268) | 0.068308 / 0.075469 (-0.007161) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304062 / 1.841788 (-0.537725) | 14.364809 / 8.074308 (6.290501) | 14.192131 / 10.191392 (4.000739) | 0.150025 / 0.680424 (-0.530399) | 0.017020 / 0.534201 (-0.517181) | 0.389235 / 0.579283 (-0.190048) | 0.387557 / 0.434364 (-0.046807) | 0.454636 / 0.540337 (-0.085702) | 0.558182 / 1.386936 (-0.828754) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#663e5eddca188abbb37e2f803846f02fe4ca0d9b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008519 / 0.011353 (-0.002834) | 0.004538 / 0.011008 (-0.006470) | 0.102066 / 0.038508 (0.063558) | 0.029700 / 0.023109 (0.006591) | 0.304573 / 0.275898 (0.028675) | 0.366232 / 0.323480 (0.042752) | 0.007154 / 0.007986 (-0.000832) | 0.003497 / 0.004328 (-0.000831) | 0.079119 / 0.004250 (0.074868) | 0.036088 / 0.037052 (-0.000964) | 0.311076 / 0.258489 (0.052587) | 0.352205 / 0.293841 (0.058364) | 0.033706 / 0.128546 (-0.094840) | 0.011657 / 0.075646 (-0.063990) | 0.324024 / 0.419271 (-0.095247) | 0.040777 / 0.043533 (-0.002756) | 0.302661 / 0.255139 (0.047522) | 0.329091 / 0.283200 (0.045891) | 0.086774 / 0.141683 (-0.054909) | 1.485874 / 1.452155 (0.033720) | 1.535726 / 1.492716 (0.043009) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194284 / 0.018006 (0.176277) | 0.412875 / 0.000490 (0.412385) | 0.003348 / 0.000200 (0.003148) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022432 / 0.037411 (-0.014979) | 0.095008 / 0.014526 (0.080482) | 0.103268 / 0.176557 (-0.073288) | 0.140121 / 0.737135 (-0.597014) | 0.106619 / 0.296338 (-0.189719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414786 / 0.215209 (0.199577) | 4.146345 / 2.077655 (2.068690) | 1.873703 / 1.504120 (0.369583) | 1.673498 / 1.541195 (0.132303) | 1.716993 / 1.468490 (0.248502) | 0.692098 / 4.584777 (-3.892679) | 3.380991 / 3.745712 (-0.364721) | 1.846811 / 5.269862 (-3.423050) | 1.159617 / 4.565676 (-3.406059) | 0.081867 / 0.424275 (-0.342408) | 0.012371 / 0.007607 (0.004764) | 0.526228 / 0.226044 (0.300184) | 5.273139 / 2.268929 (3.004211) | 2.327147 / 55.444624 (-53.117477) | 1.968366 / 6.876477 (-4.908111) | 2.018053 / 2.142072 (-0.124019) | 0.816098 / 4.805227 (-3.989130) | 0.149438 / 6.500664 (-6.351226) | 0.065000 / 0.075469 (-0.010469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244408 / 1.841788 (-0.597380) | 13.774354 / 8.074308 (5.700046) | 14.178923 / 10.191392 (3.987531) | 0.150032 / 0.680424 (-0.530392) | 0.029736 / 0.534201 (-0.504465) | 0.399134 / 0.579283 (-0.180149) | 0.404214 / 0.434364 (-0.030150) | 
0.462096 / 0.540337 (-0.078242) | 0.542256 / 1.386936 (-0.844680) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006776 / 0.011353 (-0.004577) | 0.004586 / 0.011008 (-0.006422) | 0.097658 / 0.038508 (0.059150) | 0.027627 / 0.023109 (0.004517) | 0.423794 / 0.275898 (0.147896) | 0.447443 / 0.323480 (0.123963) | 0.005099 / 0.007986 (-0.002886) | 0.004846 / 0.004328 (0.000517) | 0.075135 / 0.004250 (0.070884) | 0.038068 / 0.037052 (0.001016) | 0.420999 / 0.258489 (0.162510) | 0.460368 / 0.293841 (0.166527) | 0.032107 / 0.128546 (-0.096439) | 0.011775 / 0.075646 (-0.063871) | 0.323854 / 0.419271 (-0.095418) | 0.045538 / 0.043533 (0.002005) | 0.420949 / 0.255139 (0.165810) | 0.441906 / 0.283200 (0.158706) | 0.091955 / 0.141683 (-0.049728) | 1.523736 / 1.452155 (0.071581) | 1.587865 / 1.492716 (0.095148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263297 / 0.018006 (0.245290) | 0.416170 / 0.000490 (0.415680) | 0.023161 / 0.000200 (0.022961) | 0.000243 / 0.000054 (0.000188) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024000 / 0.037411 (-0.013412) | 0.097787 / 0.014526 (0.083262) | 0.106884 / 0.176557 (-0.069672) | 0.140861 / 0.737135 (-0.596274) | 0.108228 / 0.296338 (-0.188111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477222 / 0.215209 (0.262013) | 4.774729 / 2.077655 (2.697074) | 2.451575 / 1.504120 (0.947455) | 2.251255 / 1.541195 
(0.710060) | 2.281154 / 1.468490 (0.812664) | 0.699394 / 4.584777 (-3.885383) | 3.421575 / 3.745712 (-0.324137) | 2.704713 / 5.269862 (-2.565148) | 1.508464 / 4.565676 (-3.057212) | 0.082199 / 0.424275 (-0.342076) | 0.012586 / 0.007607 (0.004979) | 0.588783 / 0.226044 (0.362739) | 5.878434 / 2.268929 (3.609505) | 2.927422 / 55.444624 (-52.517202) | 2.574357 / 6.876477 (-4.302120) | 2.603626 / 2.142072 (0.461554) | 0.804706 / 4.805227 (-4.000521) | 0.152919 / 6.500664 (-6.347745) | 0.069316 / 0.075469 (-0.006153) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280025 / 1.841788 (-0.561763) | 13.968407 / 8.074308 (5.894099) | 13.874506 / 10.191392 (3.683114) | 0.154711 / 0.680424 (-0.525713) | 0.016827 / 0.534201 (-0.517374) | 0.377775 / 0.579283 (-0.201508) | 0.393035 / 0.434364 (-0.041329) | 0.439405 / 0.540337 (-0.100932) | 0.528135 / 1.386936 (-0.858801) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#00b27a59b8af9075967b800e3b0f1de8616aa0ce \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009035 / 0.011353 (-0.002318) | 0.004518 / 0.011008 (-0.006490) | 0.102077 / 0.038508 (0.063569) | 0.030169 / 0.023109 (0.007060) | 0.297713 / 0.275898 (0.021815) | 0.364976 / 0.323480 (0.041496) | 0.007079 / 0.007986 (-0.000906) | 0.003438 / 0.004328 (-0.000890) | 0.079667 / 0.004250 (0.075416) | 0.035890 / 0.037052 (-0.001162) | 0.306065 / 0.258489 (0.047576) | 0.352133 / 0.293841 (0.058292) | 0.033800 / 0.128546 (-0.094746) | 0.011613 / 0.075646 (-0.064034) | 0.322917 / 0.419271 (-0.096354) | 0.040973 / 0.043533 (-0.002560) | 0.300896 / 0.255139 (0.045757) | 0.331540 / 0.283200 (0.048341) | 0.089579 / 0.141683 (-0.052103) | 1.466755 / 1.452155 (0.014600) | 1.522120 / 1.492716 (0.029404) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193172 / 0.018006 (0.175166) | 0.408878 / 0.000490 
(0.408389) | 0.001586 / 0.000200 (0.001386) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023496 / 0.037411 (-0.013915) | 0.098046 / 0.014526 (0.083520) | 0.104599 / 0.176557 (-0.071957) | 0.139054 / 0.737135 (-0.598081) | 0.111163 / 0.296338 (-0.185175) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417374 / 0.215209 (0.202165) | 4.145808 / 2.077655 (2.068153) | 1.847101 / 1.504120 (0.342981) | 1.637207 / 1.541195 (0.096012) | 1.676906 / 1.468490 (0.208416) | 0.689851 / 4.584777 (-3.894926) | 3.402099 / 3.745712 (-0.343614) | 1.896808 / 5.269862 (-3.373054) | 1.257876 / 4.565676 (-3.307801) | 0.081744 / 0.424275 (-0.342531) | 0.012206 / 0.007607 (0.004599) | 0.524830 / 0.226044 (0.298786) | 5.251344 / 2.268929 (2.982416) | 2.277907 / 55.444624 (-53.166717) | 1.933985 / 6.876477 (-4.942491) | 2.038500 / 2.142072 (-0.103573) | 0.808696 / 4.805227 (-3.996532) | 0.149488 / 6.500664 (-6.351176) | 0.065323 / 0.075469 (-0.010146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204294 / 1.841788 (-0.637493) | 13.696526 / 8.074308 (5.622218) | 13.947195 / 10.191392 (3.755802) | 0.136812 / 0.680424 (-0.543611) | 0.028625 / 0.534201 (-0.505576) | 0.397662 / 0.579283 (-0.181621) | 0.403423 / 0.434364 (-0.030941) | 0.465288 / 0.540337 (-0.075049) | 0.551919 / 1.386936 (-0.835017) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006467 / 0.011353 (-0.004886) | 0.004562 / 0.011008 (-0.006447) | 0.097514 / 0.038508 (0.059006) | 0.027471 / 0.023109 (0.004362) | 0.425504 / 0.275898 (0.149606) | 0.458856 / 0.323480 (0.135376) | 0.004816 / 0.007986 (-0.003169) | 0.003264 / 0.004328 (-0.001065) | 0.074947 / 0.004250 (0.070697) | 0.037147 / 0.037052 (0.000095) | 0.429513 / 0.258489 (0.171024) | 0.463971 / 0.293841 (0.170130) | 0.031638 / 0.128546 (-0.096908) | 0.011545 / 0.075646 (-0.064101) | 0.320261 / 0.419271 (-0.099010) | 0.041570 / 0.043533 (-0.001963) | 0.424809 / 0.255139 (0.169670) | 0.447158 / 0.283200 (0.163959) | 0.088418 / 0.141683 (-0.053265) | 1.492242 / 1.452155 (0.040087) | 1.545523 / 1.492716 (0.052807) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217865 / 0.018006 (0.199859) | 0.399925 / 0.000490 (0.399436) | 0.004853 / 0.000200 (0.004653) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024275 / 0.037411 (-0.013137) | 0.098249 / 0.014526 (0.083723) | 0.107110 / 0.176557 (-0.069446) | 0.143870 / 0.737135 (-0.593265) | 0.108796 / 0.296338 (-0.187542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470856 / 0.215209 (0.255647) | 4.687921 / 2.077655 (2.610266) | 2.448631 / 1.504120 (0.944511) | 2.247748 / 1.541195 (0.706553) | 2.287713 / 1.468490 (0.819223) | 0.687534 / 4.584777 (-3.897243) | 3.421099 / 3.745712 (-0.324613) | 2.977280 / 5.269862 (-2.292582) | 1.274837 / 4.565676 (-3.290839) | 0.081611 / 0.424275 (-0.342664) | 0.012603 / 0.007607 (0.004996) | 0.574600 / 0.226044 (0.348556) | 5.802826 / 2.268929 (3.533898) | 2.913178 / 55.444624 (-52.531446) | 2.589486 / 6.876477 (-4.286991) | 2.630004 / 2.142072 (0.487932) | 0.790087 / 4.805227 (-4.015140) | 0.150019 / 6.500664 (-6.350645) | 0.067346 / 0.075469 (-0.008123) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266521 / 1.841788 (-0.575267) | 13.818770 / 8.074308 (5.744462) | 13.872277 / 10.191392 (3.680885) | 0.147375 / 0.680424 (-0.533049) | 0.016837 / 0.534201 (-0.517363) | 0.376421 / 0.579283 (-0.202862) | 0.400236 / 0.434364 (-0.034128) | 0.436623 / 0.540337 (-0.103714) | 0.527173 / 1.386936 (-0.859763) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f347cf8443aa35401ba6a4159600b92bc6a156b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009341 / 0.011353 (-0.002012) | 0.005188 / 0.011008 (-0.005820) | 0.101831 / 0.038508 (0.063323) | 0.035141 / 0.023109 (0.012032) | 0.299324 / 0.275898 (0.023426) | 0.334749 / 0.323480 (0.011269) | 0.007958 / 0.007986 (-0.000027) | 0.005482 / 0.004328 (0.001153) | 0.077070 / 0.004250 (0.072820) | 0.044733 / 0.037052 (0.007680) | 0.310398 / 0.258489 (0.051909) | 0.347925 / 0.293841 (0.054084) | 0.038141 / 0.128546 (-0.090405) | 0.012135 / 0.075646 (-0.063512) | 0.333799 / 0.419271 (-0.085472) | 0.048881 / 0.043533 (0.005348) | 0.301336 / 0.255139 (0.046197) | 0.314592 / 0.283200 (0.031393) | 0.103635 / 0.141683 (-0.038048) | 1.437321 / 1.452155 (-0.014833) | 1.598781 / 1.492716 (0.106065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248911 / 0.018006 (0.230905) | 0.528932 / 0.000490 (0.528442) | 0.002495 / 0.000200 (0.002295) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027903 / 0.037411 (-0.009509) | 0.106716 / 0.014526 (0.092190) | 0.122650 / 0.176557 (-0.053907) | 0.162481 / 0.737135 (-0.574654) | 0.126402 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.352819 / 0.215209 
(0.137610) | 3.522761 / 2.077655 (1.445106) | 1.576761 / 1.504120 (0.072641) | 1.411631 / 1.541195 (-0.129563) | 1.449689 / 1.468490 (-0.018801) | 0.608987 / 4.584777 (-3.975790) | 3.705121 / 3.745712 (-0.040592) | 2.085071 / 5.269862 (-3.184790) | 1.308653 / 4.565676 (-3.257024) | 0.083763 / 0.424275 (-0.340512) | 0.011957 / 0.007607 (0.004350) | 0.502182 / 0.226044 (0.276137) | 5.008829 / 2.268929 (2.739900) | 2.244687 / 55.444624 (-53.199937) | 1.891411 / 6.876477 (-4.985065) | 1.940789 / 2.142072 (-0.201284) | 0.825966 / 4.805227 (-3.979261) | 0.165267 / 6.500664 (-6.335397) | 0.063020 / 0.075469 (-0.012449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196707 / 1.841788 (-0.645081) | 14.236877 / 8.074308 (6.162569) | 14.872954 / 10.191392 (4.681562) | 0.168560 / 0.680424 (-0.511864) | 0.029038 / 0.534201 (-0.505163) | 0.440192 / 0.579283 (-0.139091) | 0.437021 / 0.434364 (0.002657) | 0.519612 / 0.540337 (-0.020725) | 0.612013 / 1.386936 (-0.774923) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007170 / 0.011353 (-0.004183) | 0.005303 / 0.011008 (-0.005705) | 0.098503 / 0.038508 (0.059995) | 0.032573 / 0.023109 (0.009463) | 0.398203 / 0.275898 (0.122305) | 0.446075 / 0.323480 (0.122595) | 0.005712 / 0.007986 (-0.002274) | 0.004165 / 0.004328 (-0.000164) | 0.074273 / 0.004250 (0.070023) | 0.049587 / 0.037052 (0.012534) | 0.399458 / 0.258489 (0.140969) | 0.459167 / 0.293841 (0.165327) | 0.036063 / 0.128546 (-0.092483) | 0.012394 / 0.075646 (-0.063253) | 0.332559 / 0.419271 (-0.086713) | 0.048499 / 0.043533 (0.004967) | 0.404044 / 0.255139 (0.148905) | 0.410462 / 0.283200 (0.127262) | 0.104104 / 0.141683 (-0.037579) | 1.488141 / 1.452155 (0.035986) | 1.535517 / 1.492716 (0.042801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292976 / 0.018006 (0.274970) | 0.569139 / 0.000490 (0.568649) | 0.000553 / 0.000200 (0.000353) | 0.000063 / 0.000054 
(0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030144 / 0.037411 (-0.007267) | 0.098699 / 0.014526 (0.084173) | 0.114437 / 0.176557 (-0.062120) | 0.156657 / 0.737135 (-0.580478) | 0.117449 / 0.296338 (-0.178890) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441921 / 0.215209 (0.226712) | 4.413090 / 2.077655 (2.335435) | 2.190458 / 1.504120 (0.686338) | 2.008919 / 1.541195 (0.467724) | 2.049657 / 1.468490 (0.581167) | 0.691751 / 4.584777 (-3.893026) | 3.767524 / 3.745712 (0.021812) | 3.395564 / 5.269862 (-1.874297) | 1.633480 / 4.565676 (-2.932196) | 0.084880 / 0.424275 (-0.339395) | 0.012133 / 0.007607 (0.004526) | 0.555372 / 0.226044 (0.329327) | 5.522820 / 2.268929 (3.253892) | 2.723331 / 55.444624 (-52.721293) | 2.337583 / 6.876477 (-4.538894) | 2.368746 / 2.142072 (0.226674) | 0.830127 / 4.805227 (-3.975100) | 0.166239 / 6.500664 (-6.334425) | 0.064279 / 0.075469 (-0.011190) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.123421 / 1.841788 (-0.718367) | 14.413392 / 8.074308 (6.339084) | 12.865143 / 10.191392 (2.673751) | 0.132198 / 0.680424 (-0.548226) | 0.016138 / 0.534201 (-0.518063) | 0.380760 / 0.579283 (-0.198523) | 0.387223 / 0.434364 (-0.047141) | 0.445574 / 0.540337 (-0.094764) | 0.535658 / 1.386936 (-0.851278) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a89564d3d17b5960db2435662cb9c49f8ad7488a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008316 / 0.011353 (-0.003037) | 0.004503 / 0.011008 (-0.006505) | 0.100565 / 0.038508 (0.062057) | 0.030388 / 0.023109 (0.007279) | 0.304417 / 0.275898 (0.028519) | 0.369655 / 0.323480 (0.046175) | 0.007796 / 0.007986 (-0.000190) | 0.003450 / 0.004328 (-0.000878) | 0.078694 / 0.004250 (0.074443) | 0.038068 / 0.037052 (0.001016) | 0.316353 / 0.258489 (0.057864) | 0.352344 / 0.293841 (0.058503) | 0.033271 / 0.128546 (-0.095276) | 0.011427 / 0.075646 (-0.064220) | 0.322367 / 0.419271 (-0.096904) | 0.041497 / 0.043533 (-0.002036) | 0.305876 / 0.255139 (0.050737) | 0.332279 / 0.283200 (0.049079) | 0.086719 / 0.141683 (-0.054964) | 1.488367 / 1.452155 (0.036212) | 1.528943 / 1.492716 (0.036227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.171072 / 0.018006 (0.153066) | 0.421048 / 0.000490 (0.420558) | 0.003622 / 0.000200 (0.003422) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022632 / 0.037411 (-0.014779) | 0.095304 / 0.014526 (0.080778) | 0.106254 / 0.176557 (-0.070302) | 0.138437 / 0.737135 (-0.598698) | 0.107258 / 0.296338 (-0.189080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423201 / 0.215209 (0.207992) | 4.208397 / 2.077655 (2.130742) | 1.899800 / 1.504120 (0.395680) | 1.682782 / 1.541195 (0.141587) | 1.708840 / 1.468490 (0.240350) | 0.694492 / 4.584777 (-3.890285) | 3.380369 / 3.745712 (-0.365344) | 1.851731 / 5.269862 (-3.418130) | 1.151615 / 4.565676 (-3.414061) | 0.082446 / 0.424275 (-0.341829) | 0.012483 / 0.007607 (0.004876) | 0.533688 / 0.226044 (0.307643) | 5.373434 / 2.268929 (3.104505) | 2.346403 / 55.444624 (-53.098221) | 1.978505 / 6.876477 (-4.897971) | 2.005875 / 2.142072 (-0.136198) | 0.820785 / 4.805227 (-3.984442) | 0.150728 / 6.500664 (-6.349936) | 0.065761 / 0.075469 (-0.009708) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244550 / 1.841788 (-0.597237) | 13.219096 / 8.074308 (5.144788) | 13.960463 / 10.191392 (3.769071) | 0.135572 / 0.680424 (-0.544852) | 0.028746 / 0.534201 (-0.505455) | 0.393082 / 0.579283 (-0.186201) | 0.402852 / 0.434364 (-0.031512) | 
0.461191 / 0.540337 (-0.079147) | 0.543500 / 1.386936 (-0.843436) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006316 / 0.011353 (-0.005037) | 0.004394 / 0.011008 (-0.006615) | 0.096478 / 0.038508 (0.057970) | 0.026965 / 0.023109 (0.003855) | 0.340371 / 0.275898 (0.064473) | 0.368334 / 0.323480 (0.044854) | 0.004744 / 0.007986 (-0.003242) | 0.004652 / 0.004328 (0.000324) | 0.074479 / 0.004250 (0.070228) | 0.036358 / 0.037052 (-0.000694) | 0.342968 / 0.258489 (0.084479) | 0.383675 / 0.293841 (0.089834) | 0.031439 / 0.128546 (-0.097107) | 0.011529 / 0.075646 (-0.064117) | 0.319560 / 0.419271 (-0.099711) | 0.041370 / 0.043533 (-0.002163) | 0.342594 / 0.255139 (0.087455) | 0.363237 / 0.283200 (0.080038) | 0.087316 / 0.141683 (-0.054367) | 1.468690 / 1.452155 (0.016535) | 1.553974 / 1.492716 (0.061257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198366 / 0.018006 (0.180360) | 0.401581 / 0.000490 (0.401091) | 0.000400 / 0.000200 (0.000200) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023150 / 0.037411 (-0.014261) | 0.097797 / 0.014526 (0.083271) | 0.106198 / 0.176557 (-0.070359) | 0.139599 / 0.737135 (-0.597536) | 0.108361 / 0.296338 (-0.187978) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472962 / 0.215209 (0.257753) | 4.702688 / 2.077655 (2.625033) | 2.401002 / 1.504120 (0.896882) | 2.193857 / 1.541195 
(0.652663) | 2.219188 / 1.468490 (0.750697) | 0.689993 / 4.584777 (-3.894784) | 3.369409 / 3.745712 (-0.376304) | 1.824801 / 5.269862 (-3.445061) | 1.150815 / 4.565676 (-3.414862) | 0.082197 / 0.424275 (-0.342078) | 0.012287 / 0.007607 (0.004679) | 0.581963 / 0.226044 (0.355918) | 5.786943 / 2.268929 (3.518015) | 2.871235 / 55.444624 (-52.573389) | 2.516009 / 6.876477 (-4.360468) | 2.535669 / 2.142072 (0.393597) | 0.804733 / 4.805227 (-4.000494) | 0.150545 / 6.500664 (-6.350119) | 0.066964 / 0.075469 (-0.008505) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285431 / 1.841788 (-0.556356) | 14.097108 / 8.074308 (6.022800) | 13.821497 / 10.191392 (3.630105) | 0.141922 / 0.680424 (-0.538502) | 0.016964 / 0.534201 (-0.517237) | 0.374784 / 0.579283 (-0.204500) | 0.381034 / 0.434364 (-0.053330) | 0.435487 / 0.540337 (-0.104850) | 0.521894 / 1.386936 (-0.865042) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#462000c2b12a11f1fc26853e842d3f6e40287737 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009486 / 0.011353 (-0.001867) | 0.005363 / 0.011008 (-0.005645) | 0.101008 / 0.038508 (0.062500) | 0.036355 / 0.023109 (0.013246) | 0.290575 / 0.275898 (0.014677) | 0.391634 / 0.323480 (0.068154) | 0.009085 / 0.007986 (0.001099) | 0.005780 / 0.004328 (0.001451) | 0.077848 / 0.004250 (0.073598) | 0.049062 / 0.037052 (0.012009) | 0.310900 / 0.258489 (0.052411) | 0.358224 / 0.293841 (0.064383) | 0.038838 / 0.128546 (-0.089708) | 0.012244 / 0.075646 (-0.063402) | 0.333701 / 0.419271 (-0.085570) | 0.048021 / 0.043533 (0.004488) | 0.289584 / 0.255139 (0.034445) | 0.317556 / 0.283200 (0.034356) | 0.109807 / 0.141683 (-0.031876) | 1.465966 / 1.452155 (0.013811) | 1.526341 / 1.492716 (0.033625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246221 / 0.018006 (0.228215) | 0.580659 / 0.000490 (0.580169) 
| 0.000627 / 0.000200 (0.000427) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028352 / 0.037411 (-0.009059) | 0.110569 / 0.014526 (0.096043) | 0.126456 / 0.176557 (-0.050100) | 0.163633 / 0.737135 (-0.573503) | 0.128252 / 0.296338 (-0.168087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397271 / 0.215209 (0.182062) | 3.975336 / 2.077655 (1.897682) | 1.786957 / 1.504120 (0.282837) | 1.598468 / 1.541195 (0.057273) | 1.645299 / 1.468490 (0.176809) | 0.686221 / 4.584777 (-3.898556) | 3.753184 / 3.745712 (0.007472) | 2.089505 / 5.269862 (-3.180356) | 1.325799 / 4.565676 (-3.239878) | 0.084608 / 0.424275 (-0.339667) | 0.012343 / 0.007607 (0.004736) | 0.509951 / 0.226044 (0.283907) | 5.092102 / 2.268929 (2.823174) | 2.297551 / 55.444624 (-53.147073) | 1.938177 / 6.876477 (-4.938300) | 2.012448 / 2.142072 (-0.129625) | 0.835206 / 4.805227 (-3.970021) | 0.166373 / 6.500664 (-6.334291) | 0.063996 / 0.075469 (-0.011473) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212936 / 1.841788 (-0.628851) | 15.067370 / 8.074308 (6.993062) | 14.165214 / 10.191392 (3.973822) | 0.157041 / 0.680424 (-0.523383) | 0.029612 / 0.534201 (-0.504589) | 0.440006 / 0.579283 (-0.139277) | 0.439165 / 0.434364 (0.004801) | 0.524970 / 0.540337 (-0.015368) | 0.608305 / 1.386936 (-0.778631) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007433 / 0.011353 (-0.003920) | 0.005310 / 0.011008 (-0.005698) | 0.097194 / 0.038508 (0.058686) | 0.033265 / 0.023109 (0.010156) | 0.369908 / 0.275898 (0.094010) | 0.411508 / 0.323480 (0.088028) | 0.006000 / 0.007986 (-0.001986) | 0.005647 / 0.004328 (0.001319) | 0.075597 / 0.004250 (0.071347) | 0.051951 / 0.037052 (0.014899) | 0.378469 / 0.258489 (0.119980) | 0.424849 / 0.293841 (0.131008) | 0.036700 / 0.128546 (-0.091846) | 0.012535 / 0.075646 (-0.063111) | 0.333197 / 0.419271 (-0.086074) | 0.049046 / 0.043533 (0.005513) | 0.381845 / 0.255139 (0.126706) | 0.397846 / 0.283200 (0.114646) | 0.109152 / 0.141683 (-0.032531) | 1.432407 / 1.452155 (-0.019748) | 1.555509 / 1.492716 (0.062793) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265433 / 0.018006 (0.247427) | 0.559590 / 0.000490 (0.559100) | 0.000492 / 0.000200 (0.000292) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029748 / 0.037411 (-0.007663) | 0.110490 / 0.014526 (0.095964) | 0.124125 / 0.176557 (-0.052431) | 0.160089 / 0.737135 (-0.577046) | 0.128755 / 0.296338 (-0.167583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443976 / 0.215209 (0.228767) | 4.416960 / 2.077655 (2.339305) | 2.239408 / 1.504120 (0.735288) | 2.055341 / 1.541195 (0.514147) | 2.093479 / 1.468490 (0.624988) | 0.688846 / 4.584777 (-3.895930) | 3.797526 / 3.745712 (0.051814) | 3.578137 / 5.269862 (-1.691725) | 2.015073 / 4.565676 (-2.550603) | 0.084126 / 0.424275 (-0.340149) | 0.012581 / 0.007607 (0.004974) | 0.549774 / 0.226044 (0.323730) | 5.492185 / 2.268929 (3.223256) | 2.739851 / 55.444624 (-52.704773) | 2.371091 / 6.876477 (-4.505386) | 2.400178 / 2.142072 (0.258105) | 0.831227 / 4.805227 (-3.974001) | 0.166156 / 6.500664 (-6.334508) | 0.063901 / 0.075469 (-0.011568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236127 / 1.841788 (-0.605660) | 15.236884 / 8.074308 (7.162576) | 14.434351 / 10.191392 (4.242959) | 0.163725 / 0.680424 (-0.516699) | 0.018009 / 0.534201 (-0.516192) | 0.430612 / 0.579283 (-0.148671) | 0.420426 / 0.434364 (-0.013938) | 0.497062 / 0.540337 (-0.043275) | 0.590924 / 1.386936 (-0.796012) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#63377dc53fc94f19bc2b0bbfb118a90d01a1d020 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010862 / 0.011353 (-0.000491) | 0.005741 / 0.011008 (-0.005267) | 0.111911 / 0.038508 (0.073403) | 0.042316 / 0.023109 (0.019207) | 0.347665 / 0.275898 (0.071767) | 0.377335 / 0.323480 (0.053855) | 0.009400 / 0.007986 (0.001414) | 0.006814 / 0.004328 (0.002486) | 0.087194 / 0.004250 (0.082943) | 0.046878 / 0.037052 (0.009826) | 0.348920 / 0.258489 (0.090430) | 0.393347 / 0.293841 (0.099507) | 0.044212 / 0.128546 (-0.084334) | 0.013925 / 0.075646 (-0.061722) | 0.386076 / 0.419271 (-0.033195) | 0.054195 / 0.043533 (0.010662) | 0.358486 / 0.255139 (0.103347) | 0.360132 / 0.283200 (0.076932) | 0.109783 / 0.141683 (-0.031900) | 1.679875 / 1.452155 (0.227720) | 1.794379 / 1.492716 (0.301663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221927 / 0.018006 (0.203921) | 0.487352 / 0.000490 (0.486863) | 0.003494 / 0.000200 (0.003294) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032201 / 0.037411 (-0.005210) | 0.125861 / 0.014526 (0.111335) | 0.133905 / 0.176557 (-0.042652) | 0.183319 / 0.737135 (-0.553817) | 0.142646 / 0.296338 (-0.153693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442720 / 0.215209 
(0.227511) | 4.602619 / 2.077655 (2.524964) | 2.050214 / 1.504120 (0.546094) | 1.837968 / 1.541195 (0.296773) | 1.961199 / 1.468490 (0.492709) | 0.793426 / 4.584777 (-3.791351) | 4.472078 / 3.745712 (0.726366) | 2.364903 / 5.269862 (-2.904959) | 1.515076 / 4.565676 (-3.050600) | 0.103087 / 0.424275 (-0.321188) | 0.014676 / 0.007607 (0.007068) | 0.576887 / 0.226044 (0.350843) | 5.785525 / 2.268929 (3.516596) | 2.765231 / 55.444624 (-52.679393) | 2.365364 / 6.876477 (-4.511113) | 2.448335 / 2.142072 (0.306262) | 0.978726 / 4.805227 (-3.826501) | 0.191417 / 6.500664 (-6.309247) | 0.073295 / 0.075469 (-0.002174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.378995 / 1.841788 (-0.462792) | 16.583655 / 8.074308 (8.509347) | 14.944731 / 10.191392 (4.753339) | 0.168916 / 0.680424 (-0.511508) | 0.035272 / 0.534201 (-0.498928) | 0.489729 / 0.579283 (-0.089554) | 0.496231 / 0.434364 (0.061867) | 0.576218 / 0.540337 (0.035880) | 0.673558 / 1.386936 (-0.713378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008104 / 0.011353 (-0.003249) | 0.005179 / 0.011008 (-0.005829) | 0.103908 / 0.038508 (0.065400) | 0.034661 / 0.023109 (0.011552) | 0.398119 / 0.275898 (0.122221) | 0.411765 / 0.323480 (0.088286) | 0.006016 / 0.007986 (-0.001970) | 0.005637 / 0.004328 (0.001308) | 0.073662 / 0.004250 (0.069412) | 0.052411 / 0.037052 (0.015359) | 0.391826 / 0.258489 (0.133337) | 0.455217 / 0.293841 (0.161376) | 0.039924 / 0.128546 (-0.088622) | 0.013390 / 0.075646 (-0.062256) | 0.390319 / 0.419271 (-0.028953) | 0.054312 / 0.043533 (0.010779) | 0.395492 / 0.255139 (0.140353) | 0.446324 / 0.283200 (0.163124) | 0.116461 / 0.141683 (-0.025222) | 1.502163 / 1.452155 (0.050008) | 1.731541 / 1.492716 (0.238825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282612 / 0.018006 (0.264606) | 0.503170 / 0.000490 (0.502680) | 0.005307 / 0.000200 (0.005107) | 0.000100 / 0.000054 
(0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029071 / 0.037411 (-0.008340) | 0.123831 / 0.014526 (0.109306) | 0.133284 / 0.176557 (-0.043272) | 0.172029 / 0.737135 (-0.565106) | 0.140639 / 0.296338 (-0.155700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.496812 / 0.215209 (0.281603) | 4.958915 / 2.077655 (2.881260) | 2.559188 / 1.504120 (1.055068) | 2.262434 / 1.541195 (0.721240) | 2.371126 / 1.468490 (0.902636) | 0.780150 / 4.584777 (-3.804627) | 4.417060 / 3.745712 (0.671348) | 2.401909 / 5.269862 (-2.867953) | 1.527943 / 4.565676 (-3.037733) | 0.100074 / 0.424275 (-0.324201) | 0.014853 / 0.007607 (0.007246) | 0.630192 / 0.226044 (0.404147) | 6.409685 / 2.268929 (4.140757) | 3.224718 / 55.444624 (-52.219906) | 2.795301 / 6.876477 (-4.081176) | 2.927205 / 2.142072 (0.785132) | 0.989537 / 4.805227 (-3.815690) | 0.199775 / 6.500664 (-6.300889) | 0.076725 / 0.075469 (0.001256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.433504 / 1.841788 (-0.408284) | 17.117134 / 8.074308 (9.042825) | 16.606367 / 10.191392 (6.414975) | 0.165653 / 0.680424 (-0.514771) | 0.020818 / 0.534201 (-0.513383) | 0.496782 / 0.579283 (-0.082501) | 0.473895 / 0.434364 (0.039531) | 0.576796 / 0.540337 (0.036459) | 0.703272 / 1.386936 (-0.683664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6627fb6f2639ac3b1435b3386545612db038a42e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012501 / 0.011353 (0.001148) | 0.006437 / 0.011008 (-0.004571) | 0.129387 / 0.038508 (0.090878) | 0.035847 / 0.023109 (0.012737) | 0.339243 / 0.275898 (0.063345) | 0.423274 / 0.323480 (0.099794) | 0.008489 / 0.007986 (0.000503) | 0.004596 / 0.004328 (0.000268) | 0.103322 / 0.004250 (0.099071) | 0.043570 / 0.037052 (0.006517) | 0.357004 / 0.258489 (0.098515) | 0.426511 / 0.293841 (0.132670) | 0.062923 / 0.128546 (-0.065623) | 0.021168 / 0.075646 (-0.054478) | 0.387485 / 0.419271 (-0.031787) | 0.059745 / 0.043533 (0.016213) | 0.341101 / 0.255139 (0.085962) | 0.365530 / 0.283200 (0.082331) | 0.102110 / 0.141683 (-0.039573) | 1.729408 / 1.452155 (0.277253) | 1.759510 / 1.492716 (0.266794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187065 / 0.018006 (0.169059) | 0.499685 / 0.000490 (0.499196) | 0.004677 / 0.000200 (0.004478) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025827 / 0.037411 (-0.011584) | 0.113780 / 0.014526 (0.099255) | 0.146060 / 0.176557 (-0.030496) | 0.158169 / 0.737135 (-0.578966) | 0.136133 / 0.296338 (-0.160206) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.608421 / 0.215209 (0.393211) | 5.907395 / 2.077655 (3.829741) | 2.193140 / 1.504120 (0.689021) | 1.870315 / 1.541195 (0.329120) | 1.885660 / 1.468490 (0.417170) | 1.227637 / 4.584777 (-3.357140) | 5.319242 / 3.745712 (1.573530) | 2.991595 / 5.269862 (-2.278267) | 2.043906 / 4.565676 (-2.521771) | 0.151829 / 0.424275 (-0.272447) | 0.018974 / 0.007607 (0.011367) | 0.778035 / 0.226044 (0.551991) | 7.705796 / 2.268929 (5.436868) | 2.990156 / 55.444624 (-52.454468) | 2.372643 / 6.876477 (-4.503834) | 2.240847 / 2.142072 (0.098775) | 1.407209 / 4.805227 (-3.398018) | 0.242336 / 6.500664 (-6.258328) | 0.069847 / 0.075469 (-0.005622) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445817 / 1.841788 (-0.395970) | 16.059632 / 8.074308 (7.985324) | 18.541971 / 10.191392 (8.350579) | 0.237830 / 0.680424 (-0.442594) | 0.041060 / 0.534201 (-0.493141) | 0.496765 / 0.579283 (-0.082518) | 0.609666 / 0.434364 (0.175302) | 0.584614 
/ 0.540337 (0.044277) | 0.680858 / 1.386936 (-0.706078) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009037 / 0.011353 (-0.002315) | 0.005961 / 0.011008 (-0.005047) | 0.127204 / 0.038508 (0.088696) | 0.030664 / 0.023109 (0.007555) | 0.417968 / 0.275898 (0.142070) | 0.515316 / 0.323480 (0.191836) | 0.006549 / 0.007986 (-0.001436) | 0.004456 / 0.004328 (0.000128) | 0.083715 / 0.004250 (0.079464) | 0.043701 / 0.037052 (0.006648) | 0.521153 / 0.258489 (0.262664) | 0.565456 / 0.293841 (0.271615) | 0.055298 / 0.128546 (-0.073248) | 0.018103 / 0.075646 (-0.057544) | 0.403990 / 0.419271 (-0.015282) | 0.060162 / 0.043533 (0.016629) | 0.486383 / 0.255139 (0.231244) | 0.470342 / 0.283200 (0.187142) | 0.102269 / 0.141683 (-0.039414) | 1.643241 / 1.452155 (0.191086) | 1.763850 / 1.492716 (0.271133) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185602 / 0.018006 (0.167596) | 0.489163 / 0.000490 (0.488674) | 0.000426 / 0.000200 (0.000226) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026689 / 0.037411 (-0.010722) | 0.111520 / 0.014526 (0.096994) | 0.119838 / 0.176557 (-0.056719) | 0.153698 / 0.737135 (-0.583437) | 0.130969 / 0.296338 (-0.165370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616170 / 0.215209 (0.400961) | 6.219702 / 2.077655 (4.142048) | 2.533554 / 1.504120 (1.029434) | 2.256009 / 1.541195 (0.714815) | 
2.217617 / 1.468490 (0.749127) | 1.156920 / 4.584777 (-3.427857) | 5.175759 / 3.745712 (1.430046) | 2.848419 / 5.269862 (-2.421442) | 1.943864 / 4.565676 (-2.621813) | 0.138342 / 0.424275 (-0.285933) | 0.013140 / 0.007607 (0.005533) | 0.782105 / 0.226044 (0.556060) | 7.602003 / 2.268929 (5.333075) | 3.629577 / 55.444624 (-51.815047) | 2.713849 / 6.876477 (-4.162628) | 2.663888 / 2.142072 (0.521816) | 1.418381 / 4.805227 (-3.386847) | 0.250649 / 6.500664 (-6.250015) | 0.073564 / 0.075469 (-0.001905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.483739 / 1.841788 (-0.358049) | 16.386204 / 8.074308 (8.311896) | 20.685262 / 10.191392 (10.493870) | 0.237084 / 0.680424 (-0.443340) | 0.039097 / 0.534201 (-0.495104) | 0.525399 / 0.579283 (-0.053884) | 0.587541 / 0.434364 (0.153177) | 0.566605 / 0.540337 (0.026268) | 0.677384 / 1.386936 (-0.709552) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3b67d42733dabb15ce4997c8324f8e047ce12bd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014050 / 0.011353 (0.002697) | 0.005981 / 0.011008 (-0.005028) | 0.126307 / 0.038508 (0.087799) | 0.035400 / 0.023109 (0.012290) | 0.387821 / 0.275898 (0.111923) | 0.462785 / 0.323480 (0.139305) | 0.009427 / 0.007986 (0.001441) | 0.005081 / 0.004328 (0.000753) | 0.097273 / 0.004250 (0.093023) | 0.044699 / 0.037052 (0.007647) | 0.396025 / 0.258489 (0.137536) | 0.450137 / 0.293841 (0.156296) | 0.055660 / 0.128546 (-0.072886) | 0.022710 / 0.075646 (-0.052936) | 0.443784 / 0.419271 (0.024513) | 0.065756 / 0.043533 (0.022223) | 0.379350 / 0.255139 (0.124211) | 0.396783 / 0.283200 (0.113583) | 0.114088 / 0.141683 (-0.027594) | 1.856834 / 1.452155 (0.404679) | 1.839292 / 1.492716 (0.346576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206748 / 0.018006 (0.188742) | 0.517711 / 0.000490 (0.517222) | 0.008302 / 
0.000200 (0.008102) | 0.000494 / 0.000054 (0.000440) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033987 / 0.037411 (-0.003424) | 0.131067 / 0.014526 (0.116542) | 0.155539 / 0.176557 (-0.021018) | 0.188598 / 0.737135 (-0.548537) | 0.156000 / 0.296338 (-0.140338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.641413 / 0.215209 (0.426204) | 6.156680 / 2.077655 (4.079025) | 2.428858 / 1.504120 (0.924738) | 2.086195 / 1.541195 (0.545000) | 2.109604 / 1.468490 (0.641114) | 1.209426 / 4.584777 (-3.375351) | 5.139398 / 3.745712 (1.393686) | 3.041337 / 5.269862 (-2.228524) | 2.294809 / 4.565676 (-2.270868) | 0.142206 / 0.424275 (-0.282069) | 0.015167 / 0.007607 (0.007560) | 0.816269 / 0.226044 (0.590224) | 7.953931 / 2.268929 (5.685002) | 3.201793 / 55.444624 (-52.242832) | 2.448620 / 6.876477 (-4.427857) | 2.521670 / 2.142072 (0.379597) | 1.484094 / 4.805227 (-3.321133) | 0.255069 / 6.500664 (-6.245595) | 0.076031 / 0.075469 (0.000561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590951 / 1.841788 (-0.250836) | 17.661353 / 8.074308 (9.587045) | 21.097837 / 10.191392 (10.906445) | 0.229265 / 0.680424 (-0.451159) | 0.042618 / 0.534201 (-0.491583) | 0.535942 / 0.579283 (-0.043342) | 0.590195 / 0.434364 (0.155831) | 0.623985 / 0.540337 (0.083648) | 0.742637 / 1.386936 (-0.644299) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009264 / 0.011353 (-0.002088) | 0.008798 / 0.011008 (-0.002210) | 0.122208 / 0.038508 (0.083700) | 0.034835 / 0.023109 (0.011726) | 0.462618 / 0.275898 (0.186720) | 0.505632 / 0.323480 (0.182152) | 0.006320 / 0.007986 (-0.001665) | 0.005383 / 0.004328 (0.001054) | 0.091229 / 0.004250 (0.086979) | 0.045828 / 0.037052 (0.008775) | 0.477507 / 0.258489 (0.219018) | 0.539616 / 0.293841 (0.245775) | 0.061913 / 0.128546 (-0.066633) | 0.019390 / 0.075646 (-0.056257) | 0.420016 / 0.419271 (0.000745) | 0.065958 / 0.043533 (0.022425) | 0.468603 / 0.255139 (0.213464) | 0.486246 / 0.283200 (0.203046) | 0.107924 / 0.141683 (-0.033759) | 1.843614 / 1.452155 (0.391459) | 1.988159 / 1.492716 (0.495442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247043 / 0.018006 (0.229037) | 0.515580 / 0.000490 (0.515090) | 0.005630 / 0.000200 (0.005430) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.130783 / 0.014526 (0.116258) | 0.147669 / 0.176557 (-0.028888) | 0.175656 / 0.737135 (-0.561479) | 0.138317 / 0.296338 (-0.158022) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.727119 / 0.215209 (0.511909) | 6.848208 / 2.077655 (4.770553) | 3.121418 / 1.504120 (1.617298) | 2.701799 / 1.541195 (1.160604) | 2.749179 / 1.468490 (1.280689) | 1.312058 / 4.584777 (-3.272719) | 5.400562 / 3.745712 (1.654850) | 3.058142 / 5.269862 (-2.211719) | 2.076361 / 4.565676 (-2.489316) | 0.142169 / 0.424275 (-0.282106) | 0.014340 / 0.007607 (0.006733) | 0.853534 / 0.226044 (0.627490) | 8.734484 / 2.268929 (6.465556) | 3.968130 / 55.444624 (-51.476495) | 3.118032 / 6.876477 (-3.758444) | 3.078757 / 2.142072 (0.936684) | 1.460694 / 4.805227 (-3.344533) | 0.261858 / 6.500664 (-6.238806) | 0.081089 / 0.075469 (0.005620) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.611473 / 1.841788 (-0.230315) | 17.660545 / 8.074308 (9.586237) | 20.526023 / 10.191392 (10.334631) | 0.223320 / 0.680424 (-0.457103) | 0.027939 / 0.534201 (-0.506261) | 0.542704 / 0.579283 (-0.036579) | 0.563826 / 0.434364 (0.129462) | 0.639936 / 0.540337 (0.099599) | 0.755974 / 1.386936 (-0.630962) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#942141e13ba2be853e2231d9edbfa38044e2632d \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008776 / 0.011353 (-0.002577) | 0.004532 / 0.011008 (-0.006476) | 0.100373 / 0.038508 (0.061865) | 0.029706 / 0.023109 (0.006597) | 0.304374 / 0.275898 (0.028476) | 0.337223 / 0.323480 (0.013743) | 0.007021 / 0.007986 (-0.000965) | 0.003420 / 0.004328 (-0.000908) | 0.077754 / 0.004250 (0.073504) | 0.034411 / 0.037052 (-0.002642) | 0.302926 / 0.258489 (0.044437) | 0.342654 / 0.293841 (0.048813) | 0.034528 / 0.128546 (-0.094018) | 0.011926 / 0.075646 (-0.063721) | 0.322971 / 0.419271 (-0.096301) | 0.041384 / 0.043533 (-0.002149) | 0.306433 / 0.255139 (0.051294) | 0.332293 / 0.283200 (0.049093) | 0.084972 / 0.141683 (-0.056711) | 1.493426 / 1.452155 (0.041271) | 1.570446 / 1.492716 (0.077729) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189090 / 0.018006 (0.171084) | 0.433904 / 0.000490 (0.433414) | 0.001323 / 0.000200 (0.001124) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023531 / 0.037411 (-0.013880) | 0.097774 / 0.014526 (0.083248) | 0.106383 / 0.176557 (-0.070174) | 0.139158 / 0.737135 (-0.597977) | 0.109443 / 0.296338 (-0.186896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419078 / 0.215209 
(0.203869) | 4.182657 / 2.077655 (2.105002) | 1.887276 / 1.504120 (0.383156) | 1.679542 / 1.541195 (0.138347) | 1.718035 / 1.468490 (0.249545) | 0.692628 / 4.584777 (-3.892149) | 3.361354 / 3.745712 (-0.384358) | 1.928583 / 5.269862 (-3.341278) | 1.317291 / 4.565676 (-3.248386) | 0.081799 / 0.424275 (-0.342476) | 0.012318 / 0.007607 (0.004711) | 0.525927 / 0.226044 (0.299883) | 5.285905 / 2.268929 (3.016977) | 2.317524 / 55.444624 (-53.127100) | 1.966478 / 6.876477 (-4.909998) | 2.054869 / 2.142072 (-0.087204) | 0.807579 / 4.805227 (-3.997649) | 0.149854 / 6.500664 (-6.350810) | 0.065285 / 0.075469 (-0.010184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180516 / 1.841788 (-0.661271) | 13.889734 / 8.074308 (5.815426) | 14.076163 / 10.191392 (3.884771) | 0.156276 / 0.680424 (-0.524148) | 0.029187 / 0.534201 (-0.505013) | 0.403859 / 0.579283 (-0.175424) | 0.404998 / 0.434364 (-0.029366) | 0.471467 / 0.540337 (-0.068871) | 0.564526 / 1.386936 (-0.822410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006739 / 0.011353 (-0.004614) | 0.004644 / 0.011008 (-0.006364) | 0.097326 / 0.038508 (0.058818) | 0.027728 / 0.023109 (0.004619) | 0.413537 / 0.275898 (0.137639) | 0.452012 / 0.323480 (0.128532) | 0.005346 / 0.007986 (-0.002639) | 0.003338 / 0.004328 (-0.000991) | 0.075670 / 0.004250 (0.071420) | 0.038825 / 0.037052 (0.001772) | 0.415612 / 0.258489 (0.157123) | 0.454680 / 0.293841 (0.160839) | 0.031866 / 0.128546 (-0.096680) | 0.011616 / 0.075646 (-0.064031) | 0.319527 / 0.419271 (-0.099745) | 0.041283 / 0.043533 (-0.002250) | 0.412046 / 0.255139 (0.156907) | 0.435244 / 0.283200 (0.152044) | 0.088400 / 0.141683 (-0.053283) | 1.478125 / 1.452155 (0.025970) | 1.553677 / 1.492716 (0.060960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229919 / 0.018006 (0.211913) | 0.415446 / 0.000490 (0.414956) | 0.000386 / 0.000200 (0.000186) | 0.000058 / 0.000054 
(0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.098225 / 0.014526 (0.083699) | 0.106674 / 0.176557 (-0.069883) | 0.144755 / 0.737135 (-0.592380) | 0.109221 / 0.296338 (-0.187117) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457665 / 0.215209 (0.242456) | 4.597849 / 2.077655 (2.520195) | 2.171275 / 1.504120 (0.667155) | 1.945547 / 1.541195 (0.404352) | 2.014043 / 1.468490 (0.545553) | 0.699732 / 4.584777 (-3.885045) | 3.420711 / 3.745712 (-0.325001) | 3.298702 / 5.269862 (-1.971159) | 1.390324 / 4.565676 (-3.175353) | 0.082668 / 0.424275 (-0.341607) | 0.012556 / 0.007607 (0.004949) | 0.550406 / 0.226044 (0.324361) | 5.501060 / 2.268929 (3.232132) | 2.659841 / 55.444624 (-52.784783) | 2.243443 / 6.876477 (-4.633034) | 2.266006 / 2.142072 (0.123934) | 0.806295 / 4.805227 (-3.998933) | 0.151399 / 6.500664 (-6.349265) | 0.067048 / 0.075469 (-0.008421) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291404 / 1.841788 (-0.550384) | 14.164728 / 8.074308 (6.090419) | 13.980219 / 10.191392 (3.788827) | 0.140599 / 0.680424 (-0.539824) | 0.016880 / 0.534201 (-0.517321) | 0.379073 / 0.579283 (-0.200210) | 0.385770 / 0.434364 (-0.048594) | 0.442516 / 0.540337 (-0.097822) | 0.533569 / 1.386936 (-0.853367) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#29fa15df972353f51fc434cf8eceb574b60a415f \"CML watermark\")\n", "Tests seem to be failing for unrelated reasons.", "Tests are failing because of a bug on the Hub side - this is being fixed :)\r\n\r\nlmk once the TF documentation page is updated and we can merge !", "@lhoestq Docs updated!" ]
2022-12-19T19:40:27
2023-01-25T16:28:44
2023-01-25T16:21:40
MEMBER
null
Hey all! Here's a first draft of the PR to add a multiprocessing implementation for `to_tf_dataset()`. It worked in some quick testing for me, but obviously I need to do some much more rigorous testing/benchmarking, and add some proper library tests. The core idea is that we do everything using `multiprocessing` and `numpy`, and just wrap a `tf.data.Dataset` around the output. We could also rewrite the existing single-threaded implementation based on this code, which might simplify it a bit. Checklist: - [X] Add initial draft - [x] Check that it works regardless of whether the `collate_fn` or dataset returns `tf` or `np` arrays - [x] Check that it works with `tf.string` return data - [x] Check indices are correctly reshuffled each epoch - [x] Make sure workers don't try to initialize a GPU device!! - [x] Check `fit()` with multiple epochs works fine and that the progress bar is correct - [x] Check there are no memory leaks or zombie processes - [x] Benchmark performance - [x] Tweak params for dataset inference - can we speed things up there a bit? - [x] Add tests to the library - [x] Add a PR to `transformers` to expose the `num_workers` argument via `prepare_tf_dataset` (will merge after this one is released) - [x] Stop TF console spam!! (almost) - [x] Add a method for creating SHM that doesn't crash if it was left and still linked - [x] Add a barrier for Py <= 3.7 because it doesn't support SharedMemory - [x] Support string dtypes by converting them into fixed-width character arrays
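To make the core idea above concrete, here is a minimal editorial sketch (not the PR's actual implementation) of producing numpy batches in worker processes and only wrapping the final generator in a `tf.data.Dataset`; all names below are illustrative placeholders:

```python
# Minimal sketch only, not the code added by this PR: numpy batches are produced by a
# multiprocessing pool and TensorFlow only sees the resulting generator.
import multiprocessing as mp

import numpy as np
import tensorflow as tf


def _load_batch(indices):
    # hypothetical worker function: fetch and collate one batch as plain numpy arrays
    return np.stack([np.arange(8) + i for i in indices]).astype(np.int64)


def batch_generator(num_batches=10, batch_size=4, num_workers=2):
    index_batches = [list(range(i * batch_size, (i + 1) * batch_size)) for i in range(num_batches)]
    with mp.Pool(num_workers) as pool:
        yield from pool.imap(_load_batch, index_batches)


tf_ds = tf.data.Dataset.from_generator(
    batch_generator,
    output_signature=tf.TensorSpec(shape=(None, 8), dtype=tf.int64),
)
```

In a layout like this the worker processes never import TensorFlow, which is one way to address the GPU-initialization concern from the checklist above.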
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5377/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5377", "html_url": "https://github.com/huggingface/datasets/pull/5377", "diff_url": "https://github.com/huggingface/datasets/pull/5377.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5377.patch", "merged_at": "2023-01-25T16:21:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/5376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5376/comments
https://api.github.com/repos/huggingface/datasets/issues/5376/events
https://github.com/huggingface/datasets/pull/5376
1,502,730,559
PR_kwDODunzps5FxWkM
5,376
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5376). All of your documentation changes will be reflected on that endpoint." ]
2022-12-19T10:56:56
2022-12-19T11:01:55
2022-12-19T10:57:16
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5376/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5376", "html_url": "https://github.com/huggingface/datasets/pull/5376", "diff_url": "https://github.com/huggingface/datasets/pull/5376.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5376.patch", "merged_at": "2022-12-19T10:57:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5375/comments
https://api.github.com/repos/huggingface/datasets/issues/5375/events
https://github.com/huggingface/datasets/pull/5375
1,502,720,404
PR_kwDODunzps5FxUbG
5,375
Release: 2.8.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-19T10:48:26
2022-12-19T10:55:43
2022-12-19T10:53:15
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5375/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5375", "html_url": "https://github.com/huggingface/datasets/pull/5375", "diff_url": "https://github.com/huggingface/datasets/pull/5375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5375.patch", "merged_at": "2022-12-19T10:53:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/5374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5374/comments
https://api.github.com/repos/huggingface/datasets/issues/5374/events
https://github.com/huggingface/datasets/issues/5374
1,501,872,945
I_kwDODunzps5ZhMMx
5,374
Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The data files are hosted on HF at https://huggingface.co/datasets/allenai/c4/tree/main\r\n\r\nYou have 200 runs streaming the same files in parallel. So this is probably a Hub limitation. Maybe rate limiting ? cc @julien-c \r\n\r\nMaybe you can also try to reduce the number of HTTP requests by increasing the block size of each request. This can be done by increasing `DEFAULT_BLOCK_SIZE` in `fsspec.implementations.http`. Default is `5 * 2**20` (5MiB)\r\n\r\nAnyway maybe it's just better to save the dataset locally in that case ?", "you don't get an HTTP error code or something in your stack trace? Kinda hard to debug with this info", "You could try to re-run using this `datasets` branch: [raise-err-when-disconnect](https://github.com/huggingface/datasets/compare/raise-err-when-disconnect?expand=1)\r\nIt should raise the fsspec error", "The weird thing is that I already have it saved locally & it seems to indeed be using the cached one 🧐 ; I'm also using offline mode, so I don't think it has something to do with the Hub.\r\n```\r\nWARNING:datasets.load:Using the latest cached version of the module from /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 (last modified on Mon Dec 12 10:45:02 2022) since it couldn't be found locally at c4.\r\n```\r\n\r\n", "No, you passed `streaming=True` so it streams the data from the Hub.\r\nThis message just shows that you use the cached version of the `c4` **module**, aka the python script that is run to generate the examples from the raw data files.\r\n\r\nMaybe the offline mode should also disable `fsspec`/`aiohttp` HTTP calls in `datasets` and not just the `requests` ones.", "> This message just shows that you use the cached version of the c4 module\r\n\r\nAh my bad you're right about the module, but it's also using the downloaded & cached c4 dataset. There's no internet during the runs so it wouldn't work otherwise", "You don't have internet, therefore you get an error while trying to stream ;)" ]
2022-12-18T11:38:58
2022-12-19T16:33:31
null
CONTRIBUTOR
null
### Describe the bug `streaming_download_manager` seems to disconnect if too many runs access the same underlying dataset 🧐 The code works fine for me if I have ~100 runs in parallel, but disconnects once scaling to 200. Possibly related: - https://github.com/huggingface/datasets/pull/3100 - https://github.com/huggingface/datasets/pull/3050 ### Steps to reproduce the bug Running ```python c4 = datasets.load_dataset("c4", "en", split="train", streaming=True).skip(args.start).take(args.end-args.start) df = pd.DataFrame(c4, index=None) ``` with different start & end arguments on 200 CPUs in parallel yields: ``` WARNING:datasets.load:Using the latest cached version of the module from /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 (last modified on Mon Dec 12 10:45:02 2022) since it couldn't be found locally at c4. WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [1/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [2/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [3/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [4/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [5/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [6/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [7/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [8/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [9/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [10/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [11/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [12/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [13/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [14/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [15/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [16/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [17/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [18/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [19/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. 
Retrying in 5sec [20/20] ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/dec-2022-tasky/inference │ │ _c4.py:68 in <module> │ │ │ │ 65 │ model.eval() │ │ 66 │ │ │ 67 │ c4 = datasets.load_dataset("c4", "en", split="train", streaming=Tru │ │ ❱ 68 │ df = pd.DataFrame(c4, index=None) │ │ 69 │ texts = df["text"].to_list() │ │ 70 │ preds = batch_inference(texts, batch_size=args.batch_size) │ │ 71 │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/site-packages/pandas/core/frame.p │ │ y:684 in __init__ │ │ │ │ 681 │ │ # For data is list-like, or Iterable (will consume into list │ │ 682 │ │ elif is_list_like(data): │ │ 683 │ │ │ if not isinstance(data, (abc.Sequence, ExtensionArray)): │ │ ❱ 684 │ │ │ │ data = list(data) │ │ 685 │ │ │ if len(data) > 0: │ │ 686 │ │ │ │ if is_dataclass(data[0]): │ │ 687 │ │ │ │ │ data = dataclasses_to_dicts(data) │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:751 in __iter__ │ │ │ │ 748 │ │ yield from ex_iterable.shard_data_sources(shard_idx) │ │ 749 │ │ │ 750 │ def __iter__(self): │ │ ❱ 751 │ │ for key, example in self._iter(): │ │ 752 │ │ │ if self.features: │ │ 753 │ │ │ │ # `IterableDataset` automatically fills missing colum │ │ 754 │ │ │ │ # This is done with `_apply_feature_types`. │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:741 in _iter │ │ │ │ 738 │ │ │ ex_iterable = self._ex_iterable.shuffle_data_sources(self │ │ 739 │ │ else: │ │ 740 │ │ │ ex_iterable = self._ex_iterable │ │ ❱ 741 │ │ yield from ex_iterable │ │ 742 │ │ │ 743 │ def _iter_shard(self, shard_idx: int): │ │ 744 │ │ if self._shuffling: │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:617 in __iter__ │ │ │ │ 614 │ │ self.n = n │ │ 615 │ │ │ 616 │ def __iter__(self): │ │ ❱ 617 │ │ yield from islice(self.ex_iterable, self.n) │ │ 618 │ │ │ 619 │ def shuffle_data_sources(self, generator: np.random.Generator) -> │ │ 620 │ │ """Doesn't shuffle the wrapped examples iterable since it wou │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:594 in __iter__ │ │ │ │ 591 │ │ │ 592 │ def __iter__(self): │ │ 593 │ │ #ex_iterator = iter(self.ex_iterable) │ │ ❱ 594 │ │ yield from islice(self.ex_iterable, self.n, None) │ │ 595 │ │ #for _ in range(self.n): │ │ 596 │ │ # next(ex_iterator) │ │ 597 │ │ #yield from islice(ex_iterator, self.n, None) │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:106 in __iter__ │ │ │ │ 103 │ │ self.kwargs = kwargs │ │ 104 │ │ │ 105 │ def __iter__(self): │ │ ❱ 106 │ │ yield from self.generate_examples_fn(**self.kwargs) │ │ 107 │ │ │ 108 │ def shuffle_data_sources(self, generator: np.random.Generator) -> │ │ 109 │ │ return ShardShuffledExamplesIterable(self.generate_examples_f │ │ │ │ /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/d │ │ f532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01/c4.py:89 in │ │ _generate_examples │ │ │ │ 86 │ │ for filepath in filepaths: │ │ 87 │ │ │ logger.info("generating examples from = %s", filepath) │ │ 88 │ │ │ with gzip.open(open(filepath, "rb"), 
"rt", encoding="utf-8" │ │ ❱ 89 │ │ │ │ for line in f: │ │ 90 │ │ │ │ │ if line: │ │ 91 │ │ │ │ │ │ example = json.loads(line) │ │ 92 │ │ │ │ │ │ yield id_, example │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:313 in read1 │ │ │ │ 310 │ │ │ │ 311 │ │ if size < 0: │ │ 312 │ │ │ size = io.DEFAULT_BUFFER_SIZE │ │ ❱ 313 │ │ return self._buffer.read1(size) │ │ 314 │ │ │ 315 │ def peek(self, n): │ │ 316 │ │ self._check_not_closed() │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/_compression.py:68 in readinto │ │ │ │ 65 │ │ │ 66 │ def readinto(self, b): │ │ 67 │ │ with memoryview(b) as view, view.cast("B") as byte_view: │ │ ❱ 68 │ │ │ data = self.read(len(byte_view)) │ │ 69 │ │ │ byte_view[:len(data)] = data │ │ 70 │ │ return len(data) │ │ 71 │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:493 in read │ │ │ │ 490 │ │ │ │ self._new_member = False │ │ 491 │ │ │ │ │ 492 │ │ │ # Read a chunk of data from the file │ │ ❱ 493 │ │ │ buf = self._fp.read(io.DEFAULT_BUFFER_SIZE) │ │ 494 │ │ │ │ │ 495 │ │ │ uncompress = self._decompressor.decompress(buf, size) │ │ 496 │ │ │ if self._decompressor.unconsumed_tail != b"": │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:96 in read │ │ │ │ 93 │ │ │ read = self._read │ │ 94 │ │ │ self._read = None │ │ 95 │ │ │ return self._buffer[read:] + \ │ │ ❱ 96 │ │ │ │ self.file.read(size-self._length+read) │ │ 97 │ │ │ 98 │ def prepend(self, prepend=b''): │ │ 99 │ │ if self._read is None: │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/download/streaming_download_manager.py: │ │ 365 in read_with_retries │ │ │ │ 362 │ │ │ │ ) │ │ 363 │ │ │ │ time.sleep(config.STREAMING_READ_RETRY_INTERVAL) │ │ 364 │ │ else: │ │ ❱ 365 │ │ │ raise ConnectionError("Server Disconnected") │ │ 366 │ │ return out │ │ 367 │ │ │ 368 │ file_obj.read = read_with_retries │ ╰──────────────────────────────────────────────────────────────────────────────╯ ConnectionError: Server Disconnected ``` ### Expected behavior There should be no disconnect I think. ### Environment info ``` datasets=2.7.0 Python 3.9.12 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5374/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5373/comments
https://api.github.com/repos/huggingface/datasets/issues/5373/events
https://github.com/huggingface/datasets/pull/5373
1,501,484,197
PR_kwDODunzps5FtRU4
5,373
Simplify skipping
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-17T17:23:52
2022-12-18T21:43:31
2022-12-18T21:40:21
CONTRIBUTOR
null
I was hoping to find a way to speed up the skipping, as I'm running into bottlenecks skipping 100M examples on C4 (it takes 12 hours to skip), but I didn't find anything better than this small change :( Maybe there's a way to directly skip whole shards to speed it up? 🧐
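For context, a rough sketch of what skipping a streaming dataset boils down to (not the library's exact code): the first `n` examples still have to be downloaded and decoded one by one before being discarded, which is why skipping 100M examples is so slow.

```python
from itertools import islice


def skip_examples(example_iterable, n):
    # consume and discard the first n examples, then yield the rest
    yield from islice(example_iterable, n, None)


remaining = list(skip_examples(iter(range(10)), n=7))  # [7, 8, 9]
```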
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5373/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5373", "html_url": "https://github.com/huggingface/datasets/pull/5373", "diff_url": "https://github.com/huggingface/datasets/pull/5373.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5373.patch", "merged_at": "2022-12-18T21:40:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/5372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5372/comments
https://api.github.com/repos/huggingface/datasets/issues/5372/events
https://github.com/huggingface/datasets/pull/5372
1,501,377,802
PR_kwDODunzps5Fs9w5
5,372
Fix streaming pandas.read_excel
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009517 / 0.011353 (-0.001835) | 0.005210 / 0.011008 (-0.005798) | 0.098916 / 0.038508 (0.060408) | 0.036123 / 0.023109 (0.013014) | 0.301564 / 0.275898 (0.025666) | 0.358086 / 0.323480 (0.034606) | 0.008159 / 0.007986 (0.000174) | 0.004122 / 0.004328 (-0.000206) | 0.075899 / 0.004250 (0.071648) | 0.046082 / 0.037052 (0.009030) | 0.302871 / 0.258489 (0.044382) | 0.351162 / 0.293841 (0.057321) | 0.038215 / 0.128546 (-0.090331) | 0.012026 / 0.075646 (-0.063620) | 0.330988 / 0.419271 (-0.088284) | 0.048351 / 0.043533 (0.004818) | 0.291840 / 0.255139 (0.036701) | 0.320387 / 0.283200 (0.037187) | 0.105018 / 0.141683 (-0.036665) | 1.447158 / 1.452155 (-0.004997) | 1.491205 / 1.492716 (-0.001511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250870 / 0.018006 (0.232863) | 0.562974 / 0.000490 (0.562484) | 0.001789 / 0.000200 (0.001589) | 0.000252 / 0.000054 (0.000197) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028208 / 0.037411 (-0.009203) | 0.110897 / 0.014526 (0.096371) | 0.120394 / 0.176557 (-0.056163) | 0.164980 / 0.737135 (-0.572156) | 0.126283 / 0.296338 (-0.170056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397922 / 0.215209 (0.182713) | 3.969233 / 2.077655 (1.891578) | 
1.766422 / 1.504120 (0.262302) | 1.577503 / 1.541195 (0.036308) | 1.672344 / 1.468490 (0.203854) | 0.695708 / 4.584777 (-3.889069) | 3.770763 / 3.745712 (0.025051) | 3.369592 / 5.269862 (-1.900269) | 1.851122 / 4.565676 (-2.714554) | 0.084063 / 0.424275 (-0.340212) | 0.012156 / 0.007607 (0.004549) | 0.534639 / 0.226044 (0.308594) | 5.021955 / 2.268929 (2.753027) | 2.215438 / 55.444624 (-53.229186) | 1.890459 / 6.876477 (-4.986018) | 2.071361 / 2.142072 (-0.070712) | 0.834623 / 4.805227 (-3.970604) | 0.165588 / 6.500664 (-6.335076) | 0.064336 / 0.075469 (-0.011133) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205651 / 1.841788 (-0.636136) | 14.916871 / 8.074308 (6.842563) | 14.559495 / 10.191392 (4.368103) | 0.166889 / 0.680424 (-0.513535) | 0.028645 / 0.534201 (-0.505556) | 0.433634 / 0.579283 (-0.145649) | 0.429849 / 0.434364 (-0.004515) | 0.508617 / 0.540337 (-0.031720) | 0.595261 / 1.386936 (-0.791675) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007696 / 0.011353 (-0.003657) | 0.005434 / 0.011008 (-0.005574) | 0.099234 / 0.038508 (0.060725) | 0.033904 / 0.023109 (0.010795) | 0.379181 / 0.275898 (0.103283) | 0.401858 / 0.323480 (0.078379) | 0.006257 / 0.007986 (-0.001729) | 0.004406 / 0.004328 (0.000077) | 0.073174 / 0.004250 (0.068923) | 0.056033 / 0.037052 (0.018981) | 0.379375 / 0.258489 (0.120886) | 0.425928 / 0.293841 (0.132087) | 0.037476 / 0.128546 (-0.091071) | 0.012520 / 0.075646 (-0.063127) | 0.364975 / 0.419271 (-0.054297) | 0.049341 / 0.043533 (0.005808) | 0.370519 / 0.255139 (0.115380) | 0.390585 / 0.283200 (0.107385) | 0.113339 / 0.141683 (-0.028344) | 1.460575 / 1.452155 (0.008421) | 1.564951 / 1.492716 (0.072235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246217 / 0.018006 (0.228210) | 0.554358 / 0.000490 (0.553869) | 0.000451 / 0.000200 (0.000251) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029557 / 0.037411 (-0.007855) | 0.110472 / 0.014526 (0.095946) | 0.122652 / 0.176557 (-0.053904) | 0.159396 / 0.737135 (-0.577739) | 0.128852 / 0.296338 (-0.167486) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447927 / 0.215209 (0.232718) | 4.448292 / 2.077655 (2.370637) | 2.228874 / 1.504120 (0.724754) | 2.030231 / 1.541195 (0.489036) | 2.116417 / 1.468490 (0.647927) | 0.702713 / 4.584777 (-3.882064) | 3.774063 / 3.745712 (0.028351) | 3.521662 / 5.269862 (-1.748200) | 1.476700 / 4.565676 (-3.088976) | 0.084921 / 0.424275 (-0.339354) | 0.012862 / 0.007607 (0.005255) | 0.559142 / 0.226044 (0.333098) | 5.512233 / 2.268929 (3.243305) | 2.750024 / 55.444624 (-52.694600) | 2.388845 / 6.876477 (-4.487632) | 2.541786 / 2.142072 (0.399714) | 0.842256 / 4.805227 (-3.962971) | 0.168088 / 6.500664 (-6.332576) | 0.064211 / 0.075469 (-0.011258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239001 / 1.841788 (-0.602787) | 15.286345 / 8.074308 (7.212036) | 13.883981 / 10.191392 (3.692589) | 0.186212 / 0.680424 (-0.494212) | 0.018305 / 0.534201 (-0.515896) | 0.420459 / 0.579283 (-0.158824) | 0.421039 / 0.434364 (-0.013325) | 0.487348 / 0.540337 (-0.052989) | 0.587730 / 1.386936 (-0.799206) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n" ]
2022-12-17T12:58:52
2023-01-06T11:50:58
2023-01-06T11:43:37
MEMBER
null
This PR fixes `xpandas_read_excel`: - Support passing a path string, besides a file-like object - Support passing `use_auth_token` - It first assumes the host server supports HTTP range requests; only if a ValueError is thrown (Cannot seek streaming HTTP file) does it fall back to the previous behavior (see [#3355](https://github.com/huggingface/datasets/pull/3355)). Fix https://huggingface.co/datasets/bigbio/meqsum/discussions/1 Fix: - https://github.com/bigscience-workshop/biomedical/issues/801 Related to: - #3355
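A rough editorial sketch of the fallback logic described above (not the library's actual implementation, and with stream-position handling omitted for brevity):

```python
import io

import pandas as pd


def read_excel_with_fallback(file_obj, **kwargs):
    try:
        # assume the host supports HTTP range requests, so pandas can seek in the stream
        return pd.read_excel(file_obj, **kwargs)
    except ValueError:
        # e.g. "Cannot seek streaming HTTP file": buffer the whole file in memory instead;
        # a real implementation would also need to reset or reopen the stream before retrying
        return pd.read_excel(io.BytesIO(file_obj.read()), **kwargs)
```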
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5372/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5372", "html_url": "https://github.com/huggingface/datasets/pull/5372", "diff_url": "https://github.com/huggingface/datasets/pull/5372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5372.patch", "merged_at": "2023-01-06T11:43:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/5371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5371/comments
https://api.github.com/repos/huggingface/datasets/issues/5371/events
https://github.com/huggingface/datasets/issues/5371
1,501,369,036
I_kwDODunzps5ZfRLM
5,371
Add a robustness benchmark dataset for vision
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
[ "Ccing @nazneenrajani @lvwerra @osanseviero " ]
2022-12-17T12:35:13
2022-12-20T06:21:41
null
MEMBER
null
### Name ImageNet-C ### Paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations ### Data https://github.com/hendrycks/robustness ### Motivation It's a known fact that vision models are brittle when they encounter slightly corrupted and perturbed data; this brittleness is directly tied to their robustness. Researchers use different benchmark datasets to evaluate this robustness, and ImageNet-C is one of them. Having this dataset in 🤗 Datasets would allow researchers to evaluate and study the robustness of vision models. Since the metric associated with these evaluations is top-1 accuracy, researchers should be able to easily take advantage of the evaluation benchmarks on the Hub and perform comprehensive reporting. ImageNet-C is a large dataset. Once it's in, it can act as a reference and we can also reach out to the authors of the other robustness benchmark datasets in vision, such as ObjectNet, WILDS, Metashift, etc. These datasets cater to different aspects. For example, ObjectNet is related to assessing how well a model performs under sub-population shifts. Related thread: https://huggingface.slack.com/archives/C036H4A5U8Z/p1669994598060499
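As a hypothetical illustration of the top-1 accuracy reporting mentioned above, using the `evaluate` library (the predictions and labels below are dummy placeholders):

```python
import evaluate

accuracy = evaluate.load("accuracy")
predictions = [0, 2, 1, 3]  # placeholder model outputs (class indices)
references = [0, 2, 2, 3]   # placeholder ground-truth labels
print(accuracy.compute(predictions=predictions, references=references))  # {'accuracy': 0.75}
```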
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5371/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5371/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5369/comments
https://api.github.com/repos/huggingface/datasets/issues/5369/events
https://github.com/huggingface/datasets/pull/5369
1,500,622,276
PR_kwDODunzps5Fqaj-
5,369
Distributed support
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Alright all the tests are passing - this is ready for review", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.015146 / 0.011353 (0.003793) | 0.006683 / 0.011008 (-0.004326) | 0.125994 / 0.038508 (0.087486) | 0.041345 / 0.023109 (0.018235) | 0.378609 / 0.275898 (0.102711) | 0.483139 / 0.323480 (0.159659) | 0.009669 / 0.007986 (0.001684) | 0.005143 / 0.004328 (0.000814) | 0.092015 / 0.004250 (0.087765) | 0.052728 / 0.037052 (0.015676) | 0.397166 / 0.258489 (0.138677) | 0.465820 / 0.293841 (0.171979) | 0.051025 / 0.128546 (-0.077521) | 0.018451 / 0.075646 (-0.057196) | 0.397311 / 0.419271 (-0.021960) | 0.054842 / 0.043533 (0.011309) | 0.391203 / 0.255139 (0.136064) | 0.412743 / 0.283200 (0.129543) | 0.111356 / 0.141683 (-0.030327) | 1.697526 / 1.452155 (0.245372) | 1.795017 / 1.492716 (0.302301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253737 / 0.018006 (0.235731) | 0.583071 / 0.000490 (0.582581) | 0.005958 / 0.000200 (0.005758) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030397 / 0.037411 (-0.007014) | 0.112242 / 0.014526 (0.097716) | 0.138807 / 0.176557 (-0.037749) | 0.209820 / 0.737135 (-0.527316) | 0.139530 / 0.296338 (-0.156808) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574111 / 
0.215209 (0.358902) | 5.623713 / 2.077655 (3.546058) | 2.416880 / 1.504120 (0.912760) | 1.951013 / 1.541195 (0.409819) | 2.124565 / 1.468490 (0.656075) | 1.268854 / 4.584777 (-3.315923) | 5.942368 / 3.745712 (2.196656) | 5.413814 / 5.269862 (0.143952) | 2.931638 / 4.565676 (-1.634038) | 0.135070 / 0.424275 (-0.289205) | 0.014290 / 0.007607 (0.006683) | 0.708384 / 0.226044 (0.482340) | 7.487994 / 2.268929 (5.219065) | 3.074210 / 55.444624 (-52.370414) | 2.380583 / 6.876477 (-4.495893) | 2.522298 / 2.142072 (0.380226) | 1.336741 / 4.805227 (-3.468486) | 0.236761 / 6.500664 (-6.263903) | 0.076592 / 0.075469 (0.001123) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.629415 / 1.841788 (-0.212373) | 19.000640 / 8.074308 (10.926332) | 21.474058 / 10.191392 (11.282666) | 0.231227 / 0.680424 (-0.449197) | 0.046213 / 0.534201 (-0.487988) | 0.565703 / 0.579283 (-0.013580) | 0.662956 / 0.434364 (0.228592) | 0.656475 / 0.540337 (0.116137) | 0.762534 / 1.386936 (-0.624402) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010952 / 0.011353 (-0.000400) | 0.006259 / 0.011008 (-0.004749) | 0.132430 / 0.038508 (0.093922) | 0.037920 / 0.023109 (0.014811) | 0.483565 / 0.275898 (0.207667) | 0.528190 / 0.323480 (0.204710) | 0.008116 / 0.007986 (0.000130) | 0.006768 / 0.004328 (0.002440) | 0.100520 / 0.004250 (0.096270) | 0.055208 / 0.037052 (0.018155) | 0.484672 / 0.258489 (0.226183) | 0.556937 / 0.293841 (0.263096) | 0.057938 / 0.128546 (-0.070609) | 0.020821 / 0.075646 (-0.054826) | 0.430735 / 0.419271 (0.011464) | 0.066317 / 0.043533 (0.022785) | 0.496652 / 0.255139 (0.241513) | 0.502004 / 0.283200 (0.218804) | 0.125403 / 0.141683 (-0.016280) | 1.833396 / 1.452155 (0.381241) | 1.974517 / 1.492716 (0.481800) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269198 / 0.018006 (0.251191) | 0.620314 / 0.000490 (0.619824) | 0.000535 / 0.000200 (0.000335) | 0.000083 / 0.000054 
(0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032373 / 0.037411 (-0.005039) | 0.130043 / 0.014526 (0.115517) | 0.146217 / 0.176557 (-0.030339) | 0.200187 / 0.737135 (-0.536948) | 0.152839 / 0.296338 (-0.143499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.677478 / 0.215209 (0.462268) | 6.678856 / 2.077655 (4.601201) | 3.025870 / 1.504120 (1.521750) | 2.678196 / 1.541195 (1.137001) | 2.740640 / 1.468490 (1.272150) | 1.237163 / 4.584777 (-3.347614) | 5.752621 / 3.745712 (2.006908) | 3.170435 / 5.269862 (-2.099427) | 2.049174 / 4.565676 (-2.516502) | 0.147663 / 0.424275 (-0.276612) | 0.016107 / 0.007607 (0.008500) | 0.849666 / 0.226044 (0.623621) | 8.395212 / 2.268929 (6.126283) | 3.741120 / 55.444624 (-51.703505) | 3.102926 / 6.876477 (-3.773550) | 3.233655 / 2.142072 (1.091583) | 1.520349 / 4.805227 (-3.284878) | 0.267159 / 6.500664 (-6.233505) | 0.083646 / 0.075469 (0.008177) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640458 / 1.841788 (-0.201330) | 19.043169 / 8.074308 (10.968861) | 22.786126 / 10.191392 (12.594734) | 0.218040 / 0.680424 (-0.462384) | 0.032948 / 0.534201 (-0.501253) | 0.569574 / 0.579283 (-0.009710) | 0.658746 / 0.434364 (0.224382) | 0.650501 / 0.540337 (0.110164) | 0.730588 / 1.386936 (-0.656348) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n", "just added a note :)" ]
2022-12-16T17:43:47
2023-01-16T13:36:12
2023-01-16T13:33:32
MEMBER
null
To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]: ```python import os from datasets.distributed import split_dataset_by_node ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"])) ``` This works for both map-style datasets and iterable datasets. The dataset is split for the node at rank `rank` in a pool of nodes of size `world_size`. For map-style datasets: Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset. For iterable datasets: If the dataset has a number of shards that is a multiple of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most efficient setup. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples. This can also be combined with a `torch.utils.data.DataLoader` if you want each node to use multiple workers to load the data. This also supports shuffling. At each epoch, the iterable dataset shards are reshuffled across all the nodes - you just have to call `iterable_ds.set_epoch(epoch_number)`. TODO: - [x] docs for usage in PyTorch - [x] unit tests - [x] integration tests with torch.distributed.launch Related to https://github.com/huggingface/transformers/issues/20770 Close https://github.com/huggingface/datasets/issues/5360
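A minimal usage sketch combining the node-level split with shuffling and a per-node `DataLoader` (the dataset choice and hyper-parameters are placeholders):

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

ds = load_dataset("c4", "en", split="train", streaming=True).shuffle(seed=42)
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
dataloader = DataLoader(ds, batch_size=32, num_workers=4)

for epoch in range(3):
    ds.set_epoch(epoch)  # reshuffle the shards across nodes at each epoch
    for batch in dataloader:
        ...
```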
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5369/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5369", "html_url": "https://github.com/huggingface/datasets/pull/5369", "diff_url": "https://github.com/huggingface/datasets/pull/5369.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5369.patch", "merged_at": "2023-01-16T13:33:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/5368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5368/comments
https://api.github.com/repos/huggingface/datasets/issues/5368/events
https://github.com/huggingface/datasets/pull/5368
1,500,322,973
PR_kwDODunzps5FpZyx
5,368
Align remove columns behavior and input dict mutation in `map` with previous behavior
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-16T14:28:47
2022-12-16T16:28:08
2022-12-16T16:25:12
CONTRIBUTOR
null
Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5368/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5368", "html_url": "https://github.com/huggingface/datasets/pull/5368", "diff_url": "https://github.com/huggingface/datasets/pull/5368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5368.patch", "merged_at": "2022-12-16T16:25:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/5367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5367/comments
https://api.github.com/repos/huggingface/datasets/issues/5367/events
https://github.com/huggingface/datasets/pull/5367
1,499,174,749
PR_kwDODunzps5FlevK
5,367
Fix remove columns from lazy dict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-15T22:04:12
2022-12-15T22:27:53
2022-12-15T22:24:50
MEMBER
null
This was introduced in https://github.com/huggingface/datasets/pull/5252 and is causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597 Basically, this code should return a dataset with only one column: ```python from datasets import * ds = Dataset.from_dict({"a": range(5)}) def f(x): x["b"] = x["a"] return x ds = ds.map(f, remove_columns=["a"]) assert ds.column_names == ["b"] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5367/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5367", "html_url": "https://github.com/huggingface/datasets/pull/5367", "diff_url": "https://github.com/huggingface/datasets/pull/5367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5367.patch", "merged_at": "2022-12-15T22:24:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/5366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5366/comments
https://api.github.com/repos/huggingface/datasets/issues/5366/events
https://github.com/huggingface/datasets/pull/5366
1,498,530,851
PR_kwDODunzps5FjSFl
5,366
ExamplesIterable fixes
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-15T14:23:05
2022-12-15T14:44:47
2022-12-15T14:41:45
MEMBER
null
Fix typing and `ExamplesIterable.shard_data_sources`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5366/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5366", "html_url": "https://github.com/huggingface/datasets/pull/5366", "diff_url": "https://github.com/huggingface/datasets/pull/5366.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5366.patch", "merged_at": "2022-12-15T14:41:45" }
true
https://api.github.com/repos/huggingface/datasets/issues/5365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5365/comments
https://api.github.com/repos/huggingface/datasets/issues/5365/events
https://github.com/huggingface/datasets/pull/5365
1,498,422,466
PR_kwDODunzps5Fi6ZD
5,365
fix: image array should support other formats than uint8
{ "login": "vigsterkr", "id": 30353, "node_id": "MDQ6VXNlcjMwMzUz", "avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vigsterkr", "html_url": "https://github.com/vigsterkr", "followers_url": "https://api.github.com/users/vigsterkr/followers", "following_url": "https://api.github.com/users/vigsterkr/following{/other_user}", "gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}", "starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions", "organizations_url": "https://api.github.com/users/vigsterkr/orgs", "repos_url": "https://api.github.com/users/vigsterkr/repos", "events_url": "https://api.github.com/users/vigsterkr/events{/privacy}", "received_events_url": "https://api.github.com/users/vigsterkr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi, thanks for working on this! \r\n\r\nI agree that the current type-casting (always cast to `np.uint8` as Tensorflow Datasets does) is a bit too harsh. However, not all dtypes are supported in `Image.fromarray` (e.g. np.int64), so we need to treat these with special care (e.g. downcast to the closest supported dtype, maybe with warnings to let the user know what's happening).\r\n\r\nPS: To avoid the CI failures, we need to handle two more instances of the cast to `np.uint8` (both are in the `image.py` file).", "I've made some changes to the PR.\r\n\r\nNow the encoding procedure behaves as follows:\r\n* for multi-channel arrays: if their dtype is `int`/`uint`, cast to np.uint8 (the only supported dtype for multi-channel arrays), throw an error otherwise\r\n* if the array dtype is of valid kind (\"u\", \"i\", \"f\", ...):\r\n * don't do anything if Pillow natively supports it\r\n * otherwise, downcast until it becomes compatible with Pillow\r\n* raise an error if nothing from above is true", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009537 / 0.011353 (-0.001816) | 0.004946 / 0.011008 (-0.006062) | 0.100552 / 0.038508 (0.062043) | 0.035119 / 0.023109 (0.012009) | 0.295989 / 0.275898 (0.020091) | 0.361326 / 0.323480 (0.037846) | 0.007608 / 0.007986 (-0.000378) | 0.004151 / 0.004328 (-0.000177) | 0.077301 / 0.004250 (0.073050) | 0.042921 / 0.037052 (0.005869) | 0.304804 / 0.258489 (0.046315) | 0.345934 / 0.293841 (0.052093) | 0.038987 / 0.128546 (-0.089559) | 0.012055 / 0.075646 (-0.063591) | 0.334035 / 0.419271 (-0.085236) | 0.052679 / 0.043533 (0.009146) | 0.291700 / 0.255139 (0.036561) | 0.335423 / 0.283200 (0.052223) | 0.107002 / 0.141683 (-0.034680) | 1.516780 / 1.452155 (0.064625) | 1.514137 / 1.492716 (0.021420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014719 / 0.018006 (-0.003287) | 0.545251 / 0.000490 (0.544761) | 0.004719 / 0.000200 (0.004519) | 0.000275 / 0.000054 (0.000220) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026633 / 0.037411 (-0.010779) | 0.106911 / 0.014526 (0.092385) | 0.120258 / 0.176557 (-0.056299) | 0.156196 / 0.737135 (-0.580940) | 0.123132 / 0.296338 (-0.173207) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398018 / 0.215209 (0.182809) | 3.973992 / 2.077655 (1.896337) | 1.776436 / 1.504120 (0.272316) | 1.579036 / 1.541195 (0.037841) | 1.643345 / 1.468490 (0.174855) | 0.692408 / 4.584777 (-3.892369) | 3.757243 / 3.745712 (0.011531) | 3.226212 / 5.269862 (-2.043649) | 1.797845 / 4.565676 (-2.767831) | 0.085878 / 0.424275 (-0.338398) | 0.012451 / 0.007607 (0.004844) | 0.509755 / 0.226044 (0.283711) | 5.029035 / 2.268929 (2.760107) | 2.255507 / 55.444624 (-53.189117) | 1.892868 / 6.876477 (-4.983609) | 1.900017 / 2.142072 (-0.242055) | 0.853965 / 4.805227 (-3.951263) | 0.167268 / 6.500664 (-6.333396) | 0.062796 / 0.075469 (-0.012673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.183361 / 1.841788 (-0.658427) | 15.103797 / 8.074308 (7.029489) | 14.112931 / 10.191392 (3.921539) | 0.167234 / 0.680424 (-0.513190) | 0.029487 / 0.534201 (-0.504713) | 0.444121 / 0.579283 (-0.135162) | 0.437821 / 0.434364 (0.003457) | 0.544900 / 0.540337 (0.004562) | 0.642142 / 1.386936 (-0.744794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007078 / 0.011353 (-0.004275) | 0.004983 / 0.011008 
(-0.006026) | 0.097106 / 0.038508 (0.058598) | 0.033747 / 0.023109 (0.010637) | 0.382030 / 0.275898 (0.106132) | 0.410193 / 0.323480 (0.086713) | 0.006658 / 0.007986 (-0.001327) | 0.005358 / 0.004328 (0.001029) | 0.073878 / 0.004250 (0.069628) | 0.049292 / 0.037052 (0.012240) | 0.384053 / 0.258489 (0.125564) | 0.427826 / 0.293841 (0.133985) | 0.036780 / 0.128546 (-0.091766) | 0.012469 / 0.075646 (-0.063178) | 0.332989 / 0.419271 (-0.086283) | 0.059531 / 0.043533 (0.015998) | 0.378431 / 0.255139 (0.123292) | 0.402672 / 0.283200 (0.119473) | 0.110782 / 0.141683 (-0.030901) | 1.484570 / 1.452155 (0.032416) | 1.608081 / 1.492716 (0.115365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232356 / 0.018006 (0.214350) | 0.545648 / 0.000490 (0.545158) | 0.003113 / 0.000200 (0.002913) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028138 / 0.037411 (-0.009273) | 0.110786 / 0.014526 (0.096260) | 0.123615 / 0.176557 (-0.052941) | 0.165773 / 0.737135 (-0.571362) | 0.126401 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440518 / 0.215209 (0.225309) | 4.393821 / 2.077655 (2.316166) | 2.295479 / 1.504120 (0.791359) | 2.116679 / 1.541195 (0.575485) | 2.215561 / 1.468490 (0.747071) | 0.722343 / 4.584777 (-3.862434) | 3.783360 / 3.745712 (0.037647) | 3.302242 / 5.269862 (-1.967620) | 1.681535 / 4.565676 (-2.884142) | 0.085738 / 0.424275 (-0.338537) | 0.012373 / 0.007607 (0.004766) | 0.540499 / 0.226044 (0.314455) | 5.384915 / 2.268929 (3.115986) | 2.766346 / 55.444624 (-52.678279) | 2.451994 / 6.876477 (-4.424483) | 2.505720 / 2.142072 (0.363647) | 0.833006 / 4.805227 (-3.972221) | 0.168206 / 6.500664 (-6.332458) | 0.064971 / 0.075469 (-0.010498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253499 / 1.841788 (-0.588289) | 15.381840 / 8.074308 (7.307532) | 13.519493 / 10.191392 (3.328101) | 0.165559 / 0.680424 (-0.514865) | 0.017682 / 0.534201 (-0.516519) | 0.422248 / 0.579283 (-0.157035) | 0.422750 / 0.434364 (-0.011614) | 0.524546 / 0.540337 (-0.015792) | 0.626956 / 1.386936 (-0.759980) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d9a8d8af0961c473103516dd018e2d34d23cea02 \"CML watermark\")\n" ]
2022-12-15T13:17:50
2023-01-26T18:46:45
2023-01-26T18:39:36
CONTRIBUTOR
null
Currently, images that are provided as ndarrays but are not in `uint8` format are going to lose data. For example, in a depth image where the data is in float32 format, the type-casting to uint8 will basically make the whole image blank. `PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes), although maybe some further metadata could be supplied via the [Image](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/main_classes#datasets.Image) object.
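To make the data-loss concern concrete, here is a rough, self-contained sketch (not code from this PR) using a synthetic float32 depth map; Pillow's 32-bit float mode `F` keeps the values, while a blind cast to `uint8` destroys them:

```python
import numpy as np
from PIL import Image

# synthetic float32 "depth map" with sub-integer detail
depth = np.random.rand(64, 64).astype(np.float32) * 10.0

# blindly casting to uint8 truncates the values and loses the detail
lossy = Image.fromarray(depth.astype(np.uint8))
print(lossy.mode)      # "L", values squashed into 0..10

# keeping the float32 array uses Pillow's 32-bit floating point mode
lossless = Image.fromarray(depth)
print(lossless.mode)   # "F", original values preserved
```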
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5365/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5365", "html_url": "https://github.com/huggingface/datasets/pull/5365", "diff_url": "https://github.com/huggingface/datasets/pull/5365.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5365.patch", "merged_at": "2023-01-26T18:39:36" }
true
https://api.github.com/repos/huggingface/datasets/issues/5364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5364/comments
https://api.github.com/repos/huggingface/datasets/issues/5364/events
https://github.com/huggingface/datasets/pull/5364
1,498,360,628
PR_kwDODunzps5Fiss1
5,364
Support for writing arrow files directly with BeamWriter
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5364). All of your documentation changes will be reflected on that endpoint.", "Deleting `BeamPipeline` and `upload_local_to_remote` would break the existing Beam scripts, so I reverted this change.\r\n\r\nFrom what I understand, we need these components in our scripts for the pattern:\r\n```python\r\nif not pipeline.is_local():\r\n dl_manager.ship_files_with_pipeline()\r\n```\r\n\r\nI plan to address this in a subsequent PR by (implicitly) downloading the files directly to the remote storage of the non-local runners.", "I got `AttributeError: 'Pipeline' object has no attribute 'is_local'` when running\r\n```python\r\nload_dataset(\"wikipedia\", language=\"af\", date=\"20230101\", beam_runner=\"DirectRunner\")\r\n```\r\n```python\r\n~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)\r\n 965 # Use dictionary since testing mock always returns the same result.\r\n 966 downloaded_files = dl_manager.download({\"xml\": xml_urls})\r\n--> 967 if not pipeline.is_local():\r\n 968 downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)\r\n 969 \r\n\r\nAttributeError: 'Pipeline' object has no attribute 'is_local'\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010649 / 0.011353 (-0.000704) | 0.006116 / 0.011008 (-0.004892) | 0.115568 / 0.038508 (0.077060) | 0.041704 / 0.023109 (0.018595) | 0.360459 / 0.275898 (0.084561) | 0.425679 / 0.323480 (0.102200) | 0.008992 / 0.007986 (0.001006) | 0.006321 / 0.004328 (0.001993) | 0.090223 / 0.004250 (0.085973) | 0.049877 / 0.037052 (0.012824) | 0.382447 / 0.258489 (0.123958) | 0.406567 / 0.293841 (0.112726) | 0.045138 / 0.128546 (-0.083409) | 0.014203 / 0.075646 (-0.061444) | 0.388897 / 0.419271 (-0.030375) | 0.057176 / 0.043533 (0.013644) | 0.358729 / 0.255139 (0.103590) | 0.386086 / 0.283200 (0.102887) | 0.119221 / 0.141683 (-0.022462) | 1.731574 / 1.452155 (0.279419) | 1.744103 / 1.492716 (0.251386) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.230380 / 0.018006 (0.212373) | 0.493690 / 0.000490 (0.493201) | 0.005150 / 0.000200 (0.004950) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030771 / 0.037411 (-0.006641) | 0.123196 / 0.014526 (0.108671) | 0.134097 / 0.176557 (-0.042459) | 0.190442 / 0.737135 (-0.546693) | 0.138416 / 0.296338 (-0.157923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469763 / 0.215209 (0.254554) | 4.682847 / 2.077655 (2.605192) | 2.076717 / 1.504120 (0.572597) | 1.843721 / 1.541195 (0.302527) | 1.923486 / 1.468490 (0.454996) | 0.817680 / 4.584777 (-3.767097) | 4.482409 / 3.745712 (0.736697) | 3.898695 / 5.269862 (-1.371167) | 2.078291 / 4.565676 (-2.487386) | 0.100285 / 0.424275 (-0.323990) | 0.014761 / 0.007607 (0.007154) | 0.611261 / 0.226044 (0.385217) | 5.926919 / 2.268929 (3.657990) | 2.685080 / 55.444624 (-52.759544) | 2.232179 / 6.876477 (-4.644298) | 2.305576 / 2.142072 (0.163504) | 0.993729 / 4.805227 (-3.811498) | 0.194491 / 6.500664 (-6.306173) | 0.074176 / 0.075469 (-0.001293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.388592 / 1.841788 (-0.453196) | 17.146945 / 8.074308 (9.072636) | 15.989570 / 10.191392 (5.798178) | 0.200147 / 0.680424 (-0.480277) | 0.034009 / 0.534201 (-0.500192) | 0.517531 / 0.579283 (-0.061753) | 0.533966 / 0.434364 (0.099602) | 0.637024 / 0.540337 (0.096687) | 0.749166 / 1.386936 (-0.637770) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence 
| read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008240 / 0.011353 (-0.003113) | 0.006139 / 0.011008 (-0.004869) | 0.112258 / 0.038508 (0.073750) | 0.039001 / 0.023109 (0.015891) | 0.449467 / 0.275898 (0.173569) | 0.483422 / 0.323480 (0.159942) | 0.006176 / 0.007986 (-0.001810) | 0.006340 / 0.004328 (0.002012) | 0.083105 / 0.004250 (0.078855) | 0.047002 / 0.037052 (0.009950) | 0.458564 / 0.258489 (0.200075) | 0.513704 / 0.293841 (0.219863) | 0.041359 / 0.128546 (-0.087188) | 0.014515 / 0.075646 (-0.061131) | 0.392599 / 0.419271 (-0.026673) | 0.055222 / 0.043533 (0.011690) | 0.446956 / 0.255139 (0.191817) | 0.469194 / 0.283200 (0.185994) | 0.118212 / 0.141683 (-0.023471) | 1.682647 / 1.452155 (0.230492) | 1.780076 / 1.492716 (0.287360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259124 / 0.018006 (0.241117) | 0.507559 / 0.000490 (0.507069) | 0.001080 / 0.000200 (0.000880) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031969 / 0.037411 (-0.005442) | 0.126997 / 0.014526 (0.112471) | 0.139593 / 0.176557 (-0.036963) | 0.182735 / 0.737135 (-0.554400) | 0.145871 / 0.296338 (-0.150468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.530894 / 0.215209 (0.315685) | 5.284979 / 2.077655 (3.207324) | 2.592886 / 1.504120 (1.088766) | 2.407202 / 1.541195 (0.866007) | 2.434079 / 1.468490 (0.965589) | 0.829382 / 4.584777 (-3.755395) | 4.481710 / 3.745712 (0.735998) | 3.912280 / 5.269862 (-1.357581) | 1.962291 / 4.565676 (-2.603386) | 0.101840 / 0.424275 (-0.322435) | 0.014528 / 0.007607 (0.006921) | 0.639956 / 0.226044 (0.413911) | 6.414685 / 2.268929 (4.145756) | 3.240290 / 55.444624 (-52.204334) | 2.795208 / 6.876477 (-4.081269) | 2.912122 / 2.142072 (0.770050) | 0.992188 / 4.805227 (-3.813039) | 0.200701 / 6.500664 (-6.299964) | 0.074235 / 0.075469 (-0.001234) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455075 / 1.841788 (-0.386712) | 17.186669 / 8.074308 (9.112361) | 15.404357 / 10.191392 (5.212965) | 0.168267 / 0.680424 (-0.512157) | 0.020774 / 0.534201 (-0.513427) | 0.502603 / 0.579283 (-0.076680) | 0.506500 / 0.434364 (0.072136) | 
0.624245 / 0.540337 (0.083907) | 0.735529 / 1.386936 (-0.651407) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n" ]
2022-12-15T12:38:05
2023-01-25T15:49:25
null
CONTRIBUTOR
null
Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5364/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5364", "html_url": "https://github.com/huggingface/datasets/pull/5364", "diff_url": "https://github.com/huggingface/datasets/pull/5364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5364.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5363/comments
https://api.github.com/repos/huggingface/datasets/issues/5363/events
https://github.com/huggingface/datasets/issues/5363
1,498,171,317
I_kwDODunzps5ZTEe1
5,363
Dataset.from_generator() crashes on simple example
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-12-15T10:21:28
2022-12-15T11:51:33
2022-12-15T11:51:33
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5363/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5362/comments
https://api.github.com/repos/huggingface/datasets/issues/5362/events
https://github.com/huggingface/datasets/issues/5362
1,497,643,744
I_kwDODunzps5ZRDrg
5,362
Running 'GPT-J' fails due to a dataset download failure ('ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst')
{ "login": "shaoyuta", "id": 52023469, "node_id": "MDQ6VXNlcjUyMDIzNDY5", "avatar_url": "https://avatars.githubusercontent.com/u/52023469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaoyuta", "html_url": "https://github.com/shaoyuta", "followers_url": "https://api.github.com/users/shaoyuta/followers", "following_url": "https://api.github.com/users/shaoyuta/following{/other_user}", "gists_url": "https://api.github.com/users/shaoyuta/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaoyuta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaoyuta/subscriptions", "organizations_url": "https://api.github.com/users/shaoyuta/orgs", "repos_url": "https://api.github.com/users/shaoyuta/repos", "events_url": "https://api.github.com/users/shaoyuta/events{/privacy}", "received_events_url": "https://api.github.com/users/shaoyuta/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @shaoyuta.\r\n\r\nWe have checked and yes, apparently there is an issue with the server hosting the data of the \"enron_emails\" subset of \"the_pile\" dataset: http://eaidata.bmk.sh/data/enron_emails.jsonl.zst\r\nIt seems to be down: The connection has timed out.\r\n\r\nPlease note that at the Hugging Face Hub, we are not hosting their data for this dataset, but only a script that downloads the data from their servers. We are updating the data URL to one in another server.\r\n\r\nIn the meantime, please note that you can train your model in the entire \"the_pile\" dataset, by passing the \"all\" config (instead of the \"enron_emails\" one).", "We have transferred this issue to the corresponding dataset Community tab: https://huggingface.co/datasets/the_pile/discussions/2\r\n\r\nPlease, follow the updates there." ]
2022-12-15T01:23:03
2022-12-15T07:45:54
2022-12-15T07:45:53
NONE
null
### Describe the bug Running the "GPT-J" model with the "the_pile" dataset fails. The failure output is shown below: ![image](https://user-images.githubusercontent.com/52023469/207750127-118d9896-35f4-4ee9-90d4-d0ab9aae9c74.png) The failure appears to be caused by "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" being unreachable. ### Steps to reproduce the bug Steps to reproduce this issue: git clone https://github.com/huggingface/transformers cd transformers python examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --dataset_name the_pile --dataset_config_name enron_emails --do_eval --output_dir /tmp/output --overwrite_output_dir ### Expected behavior This issue appears to be due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" being unreachable. Is there another way to download the dataset "the_pile"? Is there a way to cache the dataset "the_pile" locally so that it does not have to be downloaded at runtime? ### Environment info huggingface_hub version: 0.11.1 Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 Python version: 3.9.12 Running in iPython ?: No Running in notebook ?: No Running in Google Colab ?: No Token path ?: /home/taosy/.huggingface/token Has saved token ?: False Configured git credential helpers: FastAI: N/A Tensorflow: N/A Torch: N/A Jinja2: N/A Graphviz: N/A Pydot: N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5362/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5361/comments
https://api.github.com/repos/huggingface/datasets/issues/5361/events
https://github.com/huggingface/datasets/issues/5361
1,497,153,889
I_kwDODunzps5ZPMFh
5,361
How to concatenate `Audio` elements using batch mapping
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "You can try something like this ?\r\n```python\r\ndef mapper_function(batch):\r\n return {\"concatenated_audio\": [np.concatenate([audio[\"array\"] for audio in batch[\"audio\"]])]}\r\n\r\ndataset = dataset.map(\r\n mapper_function,\r\n batched=True,\r\n batch_size=3,\r\n remove_columns=list(dataset.features),\r\n)\r\n```", "Thanks for the snippet!\r\n\r\nOne more question. I wonder why those two mappers are working so different that one taking 4 sec while other taking over 1 min :\r\n\r\n```python\r\n%%time\r\ndef mapper_function1(batch):\r\n # list_audio\r\n return {\r\n \"audio\": [\r\n {\r\n \"array\": np.concatenate([audio[\"array\"] for audio in batch[\"audio\"]]),\r\n \"sampling_rate\": 16_000,\r\n }\r\n ]\r\n }\r\n\r\ndataset.map(\r\n mapper_function1,\r\n batched=True,\r\n batch_size=3,\r\n remove_columns=list(dataset.features),\r\n)\r\n\r\n# 100%\r\n# 135/135 [01:13<00:00, 1.93ba/s]\r\n# CPU times: user 1min 10s, sys: 3.21 s, total: 1min 13s\r\n# Wall time: 1min 13s\r\n# Dataset({\r\n# features: ['audio'],\r\n# num_rows: 135\r\n# })\r\n\r\n# --------------------------------\r\n%%time\r\ndef mapper_function2(batch):\r\n # list_audio\r\n return {\"audio\": [np.concatenate([audio[\"array\"] for audio in batch[\"audio\"]])]}\r\n\r\ndataset.map(\r\n mapper_function2,\r\n batched=True,\r\n batch_size=3,\r\n remove_columns=list(dataset.features),\r\n)\r\n\r\n# 100%\r\n# 135/135 [00:03<00:00, 40.69ba/s]\r\n# CPU times: user 1.88 s, sys: 1.48 s, total: 3.36 s\r\n# Wall time: 4.8 s\r\n# Dataset({\r\n# features: ['audio'],\r\n# num_rows: 135\r\n# })\r\n```\r\n", "In the first one you get a dataset with an Audio type, and in the second one you get a dataset with a sequence of floats type.\r\n\r\nThe Audio type encodes the data as WAV to save disk space, so it takes more time to create.\r\nThe Audio type is automatically inferred because you modify the column \"audio\" which was already an Audio type. If you name it to something else, type inference will use a type struct with array and sampling rate fields." ]
2022-12-14T18:13:55
2022-12-15T10:53:28
null
NONE
null
### Describe the bug I am trying to concatenate audio examples in a dataset, e.g. `google/fleurs`. ```python print(dataset) # Dataset({ # features: ['path', 'audio'], # num_rows: 24 # }) def mapper_function(batch): # to merge every 3 audio examples: # np.concatenate(audios[i: i+3]) for i in range(0, len(batch), 3) dataset = dataset.map(mapper_function, batched=True, batch_size=24) print(dataset) # Expected output: # Dataset({ # features: ['path', 'audio'], # num_rows: 8 # }) ``` I tried to construct a `result = {}` dictionary inside the mapper function, but found it does not work because the `bytes` field is also needed :(( I'd appreciate it if you could share any use cases similar to my problem, or any solutions really. Thanks! cc: @lhoestq ### Steps to reproduce the bug 1. Load an audio dataset. 2. Try to merge every k audio examples and return them as one. ### Expected behavior A merged dataset with fewer rows. If we merge every 3 rows, then `n // 3` examples. ### Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5361/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5360/comments
https://api.github.com/repos/huggingface/datasets/issues/5360/events
https://github.com/huggingface/datasets/issues/5360
1,496,947,177
I_kwDODunzps5ZOZnp
5,360
IterableDataset returns duplicated data using PyTorch DDP
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "If you use huggingface trainer, you will find the trainer has wrapped a `IterableDatasetShard` to avoid duplication.\r\nSee:\r\nhttps://github.com/huggingface/transformers/blob/dfd818420dcbad68e05a502495cf666d338b2bfb/src/transformers/trainer.py#L835\r\n", "If you want to support it by datasets natively, maybe we also need to change the code in `transformers` ?", "Opened https://github.com/huggingface/transformers/issues/20770 to discuss this :)", "Maybe something like this then ?\r\n```python\r\nfrom datasets.distributed import split_dataset_by_node\r\nds = split_dataset_by_node(ds, rank=rank, world_size=world_size)\r\n```\r\n\r\nFor map-style datasets the implementation is trivial (it can simply use `.shard()`).\r\n\r\nFor iterable datasets we would need to implement a new ExamplesIterable that would only iterate on a subset of the (possibly shuffled and re-shuffled after each epoch) list of shards, based on the rank and world size.", "My plan is to skip examples by default to not end up with duplicates.\r\n\r\nAnd if a dataset has a number of shards that is a factor of the world size, then I'd make it more optimized by distributing the shards evenly across nodes instead.", "Opened a PR here: https://github.com/huggingface/datasets/pull/5369\r\n\r\nfeel free to play with it and share your feedbacks :)" ]
2022-12-14T16:06:19
2023-01-16T13:33:33
2023-01-16T13:33:33
MEMBER
null
As mentioned in https://github.com/huggingface/datasets/issues/3423, when using PyTorch DDP the dataset ends up with duplicated data. We already check the PyTorch `worker_info` for the single-node case, but we should also check `torch.distributed.get_world_size()` and `torch.distributed.get_rank()`.
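A rough sketch of the kind of check described above (purely illustrative, not the actual patch): combine the DDP rank with the `DataLoader` worker info so that each process keeps a disjoint subset of examples.

```python
import torch.distributed as dist
from torch.utils.data import get_worker_info

def iter_without_duplicates(example_iterator):
    # position of this process among all DDP ranks
    rank, world_size = 0, 1
    if dist.is_available() and dist.is_initialized():
        rank, world_size = dist.get_rank(), dist.get_world_size()

    # position of this worker within the current DataLoader
    worker = get_worker_info()
    worker_id = worker.id if worker is not None else 0
    num_workers = worker.num_workers if worker is not None else 1

    global_worker_id = rank * num_workers + worker_id
    total_workers = world_size * num_workers

    # keep 1 example out of `total_workers`, skipping the others
    for i, example in enumerate(example_iterator):
        if i % total_workers == global_worker_id:
            yield example
```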
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5360/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5359/comments
https://api.github.com/repos/huggingface/datasets/issues/5359/events
https://github.com/huggingface/datasets/pull/5359
1,495,297,857
PR_kwDODunzps5FYHWm
5,359
Raise error if ClassLabel names is not python list
{ "login": "freddyheppell", "id": 1475568, "node_id": "MDQ6VXNlcjE0NzU1Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1475568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/freddyheppell", "html_url": "https://github.com/freddyheppell", "followers_url": "https://api.github.com/users/freddyheppell/followers", "following_url": "https://api.github.com/users/freddyheppell/following{/other_user}", "gists_url": "https://api.github.com/users/freddyheppell/gists{/gist_id}", "starred_url": "https://api.github.com/users/freddyheppell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freddyheppell/subscriptions", "organizations_url": "https://api.github.com/users/freddyheppell/orgs", "repos_url": "https://api.github.com/users/freddyheppell/repos", "events_url": "https://api.github.com/users/freddyheppell/events{/privacy}", "received_events_url": "https://api.github.com/users/freddyheppell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your proposed fix, @freddyheppell.\r\n\r\nCurrently the CI fails because in a test we pass a `tuple` instead of a `list`. I would say we should accept `tuple` as a valid input type as well...\r\n\r\nWhat about checking for `Sequence` instead?", "Fixed that @albertvillanova, can you approve CI again please? Had some issues related to Pytorch .so files when running tests on my M1 mac, so wasn't able to test locally first. Have got them working on my desktop now though." ]
2022-12-13T23:04:06
2022-12-22T16:35:49
2022-12-22T16:32:49
CONTRIBUTOR
null
Checks the type of the `names` provided to `ClassLabel`, to avoid easy-to-make but hard-to-debug errors (closes #5332 - see that issue for the discussion).
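For illustration only (the exact check lives in the PR diff), the validation could be as simple as rejecting anything that is not a non-string sequence:

```python
from collections.abc import Sequence

def _check_names(names):
    # a str is itself a Sequence, so reject it explicitly
    if isinstance(names, str) or not isinstance(names, Sequence):
        raise TypeError(
            f"ClassLabel names should be a sequence of strings, got {type(names).__name__}"
        )
    return list(names)
```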
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5359/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5359", "html_url": "https://github.com/huggingface/datasets/pull/5359", "diff_url": "https://github.com/huggingface/datasets/pull/5359.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5359.patch", "merged_at": "2022-12-22T16:32:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/5358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5358/comments
https://api.github.com/repos/huggingface/datasets/issues/5358/events
https://github.com/huggingface/datasets/pull/5358
1,495,270,822
PR_kwDODunzps5FYBcq
5,358
Fix `fs.open` resource leaks
{ "login": "tkukurin", "id": 297847, "node_id": "MDQ6VXNlcjI5Nzg0Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/297847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tkukurin", "html_url": "https://github.com/tkukurin", "followers_url": "https://api.github.com/users/tkukurin/followers", "following_url": "https://api.github.com/users/tkukurin/following{/other_user}", "gists_url": "https://api.github.com/users/tkukurin/gists{/gist_id}", "starred_url": "https://api.github.com/users/tkukurin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tkukurin/subscriptions", "organizations_url": "https://api.github.com/users/tkukurin/orgs", "repos_url": "https://api.github.com/users/tkukurin/repos", "events_url": "https://api.github.com/users/tkukurin/events{/privacy}", "received_events_url": "https://api.github.com/users/tkukurin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko Sorry, I didn't check tests/style after doing a merge from the Git UI last week. Thx for fixing. \r\n\r\nFYI I'm getting \"Only those with [write access](https://docs.github.com/articles/what-are-the-different-access-permissions) to this repository can merge pull requests.\" so it seems somebody else needs to merge this.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008816 / 0.011353 (-0.002536) | 0.004691 / 0.011008 (-0.006317) | 0.100039 / 0.038508 (0.061531) | 0.035422 / 0.023109 (0.012313) | 0.312600 / 0.275898 (0.036702) | 0.378684 / 0.323480 (0.055204) | 0.007593 / 0.007986 (-0.000392) | 0.005183 / 0.004328 (0.000855) | 0.078040 / 0.004250 (0.073790) | 0.041845 / 0.037052 (0.004793) | 0.325251 / 0.258489 (0.066762) | 0.363459 / 0.293841 (0.069618) | 0.038006 / 0.128546 (-0.090540) | 0.011911 / 0.075646 (-0.063735) | 0.335020 / 0.419271 (-0.084251) | 0.048765 / 0.043533 (0.005233) | 0.305913 / 0.255139 (0.050774) | 0.337620 / 0.283200 (0.054420) | 0.101867 / 0.141683 (-0.039816) | 1.450091 / 1.452155 (-0.002064) | 1.437303 / 1.492716 (-0.055413) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225650 / 0.018006 (0.207644) | 0.492480 / 0.000490 (0.491990) | 0.002857 / 0.000200 (0.002658) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026231 / 0.037411 (-0.011180) | 0.105479 / 0.014526 (0.090953) | 0.118438 / 0.176557 (-0.058119) | 0.167313 / 0.737135 (-0.569822) | 0.119416 / 0.296338 (-0.176923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled 
read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396233 / 0.215209 (0.181024) | 3.943325 / 2.077655 (1.865671) | 1.778864 / 1.504120 (0.274744) | 1.587957 / 1.541195 (0.046763) | 1.615404 / 1.468490 (0.146914) | 0.709427 / 4.584777 (-3.875350) | 3.823310 / 3.745712 (0.077598) | 3.461376 / 5.269862 (-1.808486) | 1.888330 / 4.565676 (-2.677346) | 0.086910 / 0.424275 (-0.337365) | 0.012215 / 0.007607 (0.004608) | 0.504877 / 0.226044 (0.278833) | 5.051513 / 2.268929 (2.782584) | 2.249389 / 55.444624 (-53.195235) | 1.890949 / 6.876477 (-4.985528) | 2.015584 / 2.142072 (-0.126489) | 0.862313 / 4.805227 (-3.942914) | 0.166295 / 6.500664 (-6.334369) | 0.061131 / 0.075469 (-0.014338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201804 / 1.841788 (-0.639984) | 14.589425 / 8.074308 (6.515117) | 13.855522 / 10.191392 (3.664130) | 0.193406 / 0.680424 (-0.487018) | 0.028614 / 0.534201 (-0.505587) | 0.439857 / 0.579283 (-0.139426) | 0.443330 / 0.434364 (0.008966) | 0.514078 / 0.540337 (-0.026259) | 0.608245 / 1.386936 (-0.778691) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007087 / 0.011353 (-0.004265) | 0.005024 / 0.011008 (-0.005985) | 0.096852 / 0.038508 (0.058344) | 0.032870 / 0.023109 (0.009761) | 0.397790 / 0.275898 (0.121892) | 0.420717 / 0.323480 (0.097237) | 0.005552 / 0.007986 (-0.002434) | 0.003742 / 0.004328 (-0.000586) | 0.074788 / 0.004250 (0.070537) | 0.048030 / 0.037052 (0.010977) | 0.398520 / 0.258489 (0.140031) | 0.460919 / 0.293841 (0.167078) | 0.037652 / 0.128546 (-0.090894) | 0.012249 / 0.075646 (-0.063397) | 0.333077 / 0.419271 (-0.086194) | 0.052364 / 0.043533 (0.008831) | 0.394358 / 0.255139 (0.139219) | 0.414193 / 0.283200 (0.130994) | 0.103569 / 0.141683 (-0.038114) | 1.499208 / 1.452155 (0.047053) | 1.619481 / 1.492716 (0.126764) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229476 / 0.018006 (0.211470) | 0.448670 / 0.000490 (0.448180) | 0.000399 / 0.000200 (0.000199) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027550 / 0.037411 (-0.009862) | 0.109180 / 0.014526 (0.094654) | 0.118372 / 0.176557 (-0.058185) | 0.153136 / 0.737135 (-0.583999) | 0.122689 / 0.296338 (-0.173650) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445163 / 0.215209 (0.229954) | 4.426350 / 2.077655 (2.348695) | 2.194902 / 1.504120 (0.690782) | 2.019049 / 1.541195 (0.477854) | 2.032795 / 1.468490 (0.564305) | 0.700752 / 4.584777 (-3.884025) | 3.797616 / 3.745712 (0.051903) | 2.046414 / 5.269862 (-3.223447) | 1.345037 / 4.565676 (-3.220639) | 0.085389 / 0.424275 (-0.338886) | 0.012824 / 0.007607 (0.005217) | 0.553875 / 0.226044 (0.327831) | 5.550252 / 2.268929 (3.281323) | 2.702822 / 55.444624 (-52.741803) | 2.346257 / 6.876477 (-4.530220) | 2.410772 / 2.142072 (0.268699) | 0.848271 / 4.805227 (-3.956957) | 0.170787 / 6.500664 (-6.329877) | 0.064344 / 0.075469 (-0.011125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266222 / 1.841788 (-0.575566) | 14.501194 / 8.074308 (6.426886) | 13.413678 / 10.191392 (3.222286) | 0.589048 / 0.680424 (-0.091375) | 0.018246 / 0.534201 (-0.515955) | 0.425221 / 0.579283 (-0.154062) | 0.425900 / 0.434364 (-0.008464) | 0.494023 / 0.540337 (-0.046314) | 0.604324 / 1.386936 (-0.782612) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n" ]
2022-12-13T22:35:51
2023-01-05T16:46:31
2023-01-05T15:59:51
CONTRIBUTOR
null
Invoking `{load,save}_from_dict` results in resource leak warnings; this PR should fix that. It introduces no significant logic changes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5358/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5358", "html_url": "https://github.com/huggingface/datasets/pull/5358", "diff_url": "https://github.com/huggingface/datasets/pull/5358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5358.patch", "merged_at": "2023-01-05T15:59:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/5357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5357/comments
https://api.github.com/repos/huggingface/datasets/issues/5357/events
https://github.com/huggingface/datasets/pull/5357
1,495,029,602
PR_kwDODunzps5FXNyR
5,357
Support torch dataloader without torch formatting
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Need some more time to fix the tests, especially with pickle", "> And I actually don't quite understand the idea - what's the motivation behind making only IterableDataset compatible with torch DataLoader without setting the format explicitly?\r\n\r\nSetting the format to pytorch = set the output types of the dataset to be pytorch tensors. However sometimes your dataset is not made of tensors but you still want to be able to use a pytorch DataLoader", "A bit more context. \r\n\r\nThe arrow-backed `Dataset` supports `DataLoader(ds)` (even if the format is not \"torch\"), and we want to be able to do the same with `IterableDataset` for consistency. However, this is when the PyTorch internals come into play - an iterable dataset needs to be an instance of `torch.utils.data.IterableDataset` due to [this](https://github.com/pytorch/pytorch/blob/abc54f93145830b502400faa92bec86e05422fbd/torch/utils/data/dataloader.py#L276) check (notice there is no check for the map-style version). Hence the explicit subclassing in this PR.", "Exactly :) Btw I just took your comments into account @polinaeterna , so feel free to review again", "@lhoestq just checking, does this change still preserve the fix to the \"data duplicate when setting num_works > 1 with streaming data\" issue from before?\r\n\r\nhttps://github.com/huggingface/datasets/issues/3423", "Yes :)" ]
2022-12-13T19:39:24
2023-01-04T12:45:40
2022-12-15T19:15:54
MEMBER
null
In https://github.com/huggingface/datasets/pull/5084 we make the torch formatting consistent with the map-style datasets formatting: a torch formatted iterable dataset will yield torch tensors. The previous behavior of the torch formatting for iterable datasets was simply to make the iterable dataset inherit from `torch.utils.data.Dataset` to make it work in a torch DataLoader. However, ideally an unformatted dataset should also work with a DataLoader. To fix that, `datasets.IterableDataset` should inherit from `torch.utils.data.IterableDataset`. Since we don't want to import torch on startup, I created this PR to dynamically make the `datasets.IterableDataset` class inherit from the torch one when a `datasets.IterableDataset` is instantiated and if PyTorch is available. ```python >>> from datasets import load_dataset >>> ds = load_dataset("c4", "en", streaming=True, split="train") >>> import torch.utils.data >>> isinstance(ds, torch.utils.data.IterableDataset) True >>> dataloader = torch.utils.data.DataLoader(ds, batch_size=32, num_workers=4) >>> for example in dataloader: ...: ... ```
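A minimal sketch of the dynamic-inheritance idea described above (an illustration, not the actual `datasets` implementation; the class name and fields below are hypothetical): only when an instance is created, and only if PyTorch is importable, the class is swapped for a subclass that also derives from `torch.utils.data.IterableDataset`, so the DataLoader's `isinstance` check passes without importing torch at startup.

```python
import importlib.util


class MyIterableDataset:
    """Hypothetical stand-in for datasets.IterableDataset."""

    def __new__(cls, *args, **kwargs):
        # Only pay the torch import cost when an instance is actually created.
        if importlib.util.find_spec("torch") is not None:
            import torch.utils.data

            if not issubclass(cls, torch.utils.data.IterableDataset):
                # Build a subclass that also inherits from torch's IterableDataset.
                cls = type(cls.__name__, (cls, torch.utils.data.IterableDataset), {})
        return object.__new__(cls)

    def __init__(self, data):
        self.data = data

    def __iter__(self):
        yield from self.data
```

With something like this, `isinstance(MyIterableDataset([1, 2, 3]), torch.utils.data.IterableDataset)` is `True` when torch is installed, while environments without torch never import it.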
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5357/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5357", "html_url": "https://github.com/huggingface/datasets/pull/5357", "diff_url": "https://github.com/huggingface/datasets/pull/5357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5357.patch", "merged_at": "2022-12-15T19:15:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/5356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5356/comments
https://api.github.com/repos/huggingface/datasets/issues/5356/events
https://github.com/huggingface/datasets/pull/5356
1,494,961,609
PR_kwDODunzps5FW-c9
5,356
Clean filesystem and logging docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-13T18:54:09
2022-12-14T17:25:58
2022-12-14T17:22:16
MEMBER
null
This PR cleans the `Filesystems` and `Logging` docstrings.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5356/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5356", "html_url": "https://github.com/huggingface/datasets/pull/5356", "diff_url": "https://github.com/huggingface/datasets/pull/5356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5356.patch", "merged_at": "2022-12-14T17:22:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5355/comments
https://api.github.com/repos/huggingface/datasets/issues/5355/events
https://github.com/huggingface/datasets/pull/5355
1,493,076,860
PR_kwDODunzps5FQcYG
5,355
Clean up Table class docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-13T00:29:47
2022-12-13T18:17:56
2022-12-13T18:14:42
MEMBER
null
This PR cleans up the `Table` class docstrings :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5355/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5355", "html_url": "https://github.com/huggingface/datasets/pull/5355", "diff_url": "https://github.com/huggingface/datasets/pull/5355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5355.patch", "merged_at": "2022-12-13T18:14:42" }
true
https://api.github.com/repos/huggingface/datasets/issues/5354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5354/comments
https://api.github.com/repos/huggingface/datasets/issues/5354/events
https://github.com/huggingface/datasets/issues/5354
1,492,174,125
I_kwDODunzps5Y8MUt
5,354
Consider using "Sequence" instead of "List"
{ "login": "tranhd95", "id": 15568078, "node_id": "MDQ6VXNlcjE1NTY4MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/15568078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tranhd95", "html_url": "https://github.com/tranhd95", "followers_url": "https://api.github.com/users/tranhd95/followers", "following_url": "https://api.github.com/users/tranhd95/following{/other_user}", "gists_url": "https://api.github.com/users/tranhd95/gists{/gist_id}", "starred_url": "https://api.github.com/users/tranhd95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tranhd95/subscriptions", "organizations_url": "https://api.github.com/users/tranhd95/orgs", "repos_url": "https://api.github.com/users/tranhd95/repos", "events_url": "https://api.github.com/users/tranhd95/events{/privacy}", "received_events_url": "https://api.github.com/users/tranhd95/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
[ "Hi! Linking a comment to provide more info on the issue: https://stackoverflow.com/a/39458225. This means we should replace all (most of) the occurrences of `List` with `Sequence` in function signatures.\r\n\r\n@tranhd95 Would you be interested in submitting a PR?", "Hi all! I tried to reproduce this issue and didn't work for me. Also in your example i noticed that the variables have different names: `list_of_filenames` and `list_of_files`, could this be related to that?\r\n```python\r\n#I found random data in parquet format:\r\n!wget \"https://github.com/Teradata/kylo/raw/master/samples/sample-data/parquet/userdata1.parquet\"\r\n!wget \"https://github.com/Teradata/kylo/raw/master/samples/sample-data/parquet/userdata2.parquet\"\r\n\r\n#Then i try reproduce\r\nlist_of_files = [\"userdata1.parquet\", \"userdata2.parquet\"]\r\nds = Dataset.from_parquet(list_of_files)\r\n```\r\n**My output:**\r\n```python\r\nWARNING:datasets.builder:Using custom data configuration default-e287d097dc54e046\r\nDownloading and preparing dataset parquet/default to /root/.cache/huggingface/datasets/parquet/default-e287d097dc54e046/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 100%\r\n1/1 [00:00<00:00, 40.38it/s]\r\nExtracting data files: 100%\r\n1/1 [00:00<00:00, 23.43it/s]\r\nDataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/default-e287d097dc54e046/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec. Subsequent calls will reuse this data.\r\n```\r\nP.S. This is my first experience with open source. So do not judge strictly if I do not understand something)", "@dantema There is indeed a typo in variable names. Nevertheless, I'm sorry if I was not clear but the output is from `mypy` type checker. You can run the code snippet without issues. The problem is with the type checking.", "However, I found out that the type annotation is actually misleading. The [`from_parquet`](https://github.com/huggingface/datasets/blob/5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2/src/datasets/arrow_dataset.py#L1039) method should also accept list of [`PathLike`](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/typing.py#L8) objects which includes [`os.PathLike`](https://docs.python.org/3/library/os.html#os.PathLike). 
But if I would ran the code snippet below, an exception is thrown.\r\n\r\n**Code**\r\n```py\r\nfrom pathlib import Path\r\n\r\nlist_of_filenames = [Path(\"foo.parquet\"), Path(\"bar.parquet\")]\r\nds = Dataset.from_parquet(list_of_filenames)\r\n```\r\n**Output**\r\n```py\r\n[/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs)\r\n 1071 from .io.parquet import ParquetDatasetReader\r\n 1072 \r\n-> 1073 return ParquetDatasetReader(\r\n 1074 path_or_paths,\r\n 1075 split=split,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/io/parquet.py](https://localhost:8080/#) in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, streaming, **kwargs)\r\n 35 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\r\n 36 hash = _PACKAGED_DATASETS_MODULES[\"parquet\"][1]\r\n---> 37 self.builder = Parquet(\r\n 38 cache_dir=cache_dir,\r\n 39 data_files=path_or_paths,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in __init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)\r\n 298 \r\n 299 if data_files is not None and not isinstance(data_files, DataFilesDict):\r\n--> 300 data_files = DataFilesDict.from_local_or_remote(\r\n 301 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token\r\n 302 )\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)\r\n 794 for key, patterns_for_key in patterns.items():\r\n 795 out[key] = (\r\n--> 796 DataFilesList.from_local_or_remote(\r\n 797 patterns_for_key,\r\n 798 base_path=base_path,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)\r\n 762 ) -> \"DataFilesList\":\r\n 763 base_path = base_path if base_path is not None else str(Path().resolve())\r\n--> 764 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n 765 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)\r\n 766 return cls(data_files, origin_metadata)\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n 357 data_files = []\r\n 358 for pattern in patterns:\r\n--> 359 if is_remote_url(pattern):\r\n 360 data_files.append(Url(pattern))\r\n 361 else:\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in is_remote_url(url_or_filename)\r\n 62 \r\n 63 def is_remote_url(url_or_filename: str) -> bool:\r\n---> 64 parsed = urlparse(url_or_filename)\r\n 65 return parsed.scheme in (\"http\", \"https\", \"s3\", \"gs\", \"hdfs\", \"ftp\")\r\n 66 \r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in urlparse(url, scheme, allow_fragments)\r\n 373 Note that we don't break the components up in smaller bits\r\n 374 (e.g. 
netloc is a single string) and we don't expand % escapes.\"\"\"\r\n--> 375 url, scheme, _coerce_result = _coerce_args(url, scheme)\r\n 376 splitresult = urlsplit(url, scheme, allow_fragments)\r\n 377 scheme, netloc, url, query, fragment = splitresult\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in _coerce_args(*args)\r\n 125 if str_input:\r\n 126 return args + (_noop,)\r\n--> 127 return _decode_args(args) + (_encode_result,)\r\n 128 \r\n 129 # Result objects are more helpful than simple tuples\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in _decode_args(args, encoding, errors)\r\n 109 def _decode_args(args, encoding=_implicit_encoding,\r\n 110 errors=_implicit_errors):\r\n--> 111 return tuple(x.decode(encoding, errors) if x else '' for x in args)\r\n 112 \r\n 113 def _coerce_args(*args):\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in <genexpr>(.0)\r\n 109 def _decode_args(args, encoding=_implicit_encoding,\r\n 110 errors=_implicit_errors):\r\n--> 111 return tuple(x.decode(encoding, errors) if x else '' for x in args)\r\n 112 \r\n 113 def _coerce_args(*args):\r\n\r\nAttributeError: 'PosixPath' object has no attribute 'decode'\r\n```\r\n\r\n@mariosasko Should I create a new issue? " ]
2022-12-12T15:39:45
2022-12-16T21:02:44
null
NONE
null
### Feature request Hi, please consider using `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type checking errors, see below. **How to reproduce** ```py list_of_filenames = ["foo.parquet", "bar.parquet"] ds = Dataset.from_parquet(list_of_filenames) ``` **Expected mypy output:** ``` Success: no issues found ``` **Actual mypy output:** ```py test.py:19: error: Argument 1 to "from_parquet" of "Dataset" has incompatible type "List[str]"; expected "Union[Union[str, bytes, PathLike[Any]], List[Union[str, bytes, PathLike[Any]]]]" [arg-type] test.py:19: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance test.py:19: note: Consider using "Sequence" instead, which is covariant ``` **Env:** mypy 0.991, Python 3.10.0, datasets 2.7.1
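For illustration, a small self-contained sketch of the variance issue (the function names and the `PathLikeT` alias below are hypothetical, not the actual `datasets` signatures): a parameter annotated with the covariant `Sequence` accepts a `List[str]` argument under mypy, while the invariant `List[...]` union does not.

```python
from os import PathLike
from typing import List, Sequence, Union

PathLikeT = Union[str, bytes, PathLike]


def from_parquet_with_list(paths: Union[PathLikeT, List[PathLikeT]]) -> None:
    """Mirrors the current style of annotation."""


def from_parquet_with_sequence(paths: Union[PathLikeT, Sequence[PathLikeT]]) -> None:
    """Proposed style: Sequence is covariant, so List[str] is accepted."""


filenames: List[str] = ["foo.parquet", "bar.parquet"]
from_parquet_with_list(filenames)      # mypy error: "List" is invariant
from_parquet_with_sequence(filenames)  # mypy: OK
```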
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5354/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5353/comments
https://api.github.com/repos/huggingface/datasets/issues/5353/events
https://github.com/huggingface/datasets/issues/5353
1,491,880,500
I_kwDODunzps5Y7Eo0
5,353
Support remote file systems for `Audio`
{ "login": "OllieBroadhurst", "id": 46894149, "node_id": "MDQ6VXNlcjQ2ODk0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/46894149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OllieBroadhurst", "html_url": "https://github.com/OllieBroadhurst", "followers_url": "https://api.github.com/users/OllieBroadhurst/followers", "following_url": "https://api.github.com/users/OllieBroadhurst/following{/other_user}", "gists_url": "https://api.github.com/users/OllieBroadhurst/gists{/gist_id}", "starred_url": "https://api.github.com/users/OllieBroadhurst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OllieBroadhurst/subscriptions", "organizations_url": "https://api.github.com/users/OllieBroadhurst/orgs", "repos_url": "https://api.github.com/users/OllieBroadhurst/repos", "events_url": "https://api.github.com/users/OllieBroadhurst/events{/privacy}", "received_events_url": "https://api.github.com/users/OllieBroadhurst/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Just seen https://github.com/huggingface/datasets/issues/5281" ]
2022-12-12T13:22:13
2022-12-12T13:37:14
2022-12-12T13:37:14
NONE
null
### Feature request Hi there! It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system. ### Motivation Large amounts of data are often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage but to my knowledge actually copies the datasets across first, so if you're working off a system with smaller disk specs (like a VM), you can run out of space very quickly. ### Your contribution Something like this (for Google Cloud Platform in this instance): ```python from datasets import Dataset, Audio import gcsfs fs = gcsfs.GCSFileSystem() list_of_audio_fp = {'audio': ['1', '2', '3']} ds = Dataset.from_dict(list_of_audio_fp) ds = ds.cast_column("audio", Audio(sampling_rate=16000, fs=fs)) ``` Under the hood: ```python import librosa from io import BytesIO def load_audio(fp, sampling_rate=None, fs=None): if fs is not None: with fs.open(fp, 'rb') as f: arr, sr = librosa.load(BytesIO(f.read()), sr=sampling_rate) else: # Perform existing io operations ``` Written from memory so some things could be wrong.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5353/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5352/comments
https://api.github.com/repos/huggingface/datasets/issues/5352/events
https://github.com/huggingface/datasets/issues/5352
1,490,796,414
I_kwDODunzps5Y279-
5,352
__init__() got an unexpected keyword argument 'input_size'
{ "login": "J-shel", "id": 82662111, "node_id": "MDQ6VXNlcjgyNjYyMTEx", "avatar_url": "https://avatars.githubusercontent.com/u/82662111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/J-shel", "html_url": "https://github.com/J-shel", "followers_url": "https://api.github.com/users/J-shel/followers", "following_url": "https://api.github.com/users/J-shel/following{/other_user}", "gists_url": "https://api.github.com/users/J-shel/gists{/gist_id}", "starred_url": "https://api.github.com/users/J-shel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/J-shel/subscriptions", "organizations_url": "https://api.github.com/users/J-shel/orgs", "repos_url": "https://api.github.com/users/J-shel/repos", "events_url": "https://api.github.com/users/J-shel/events{/privacy}", "received_events_url": "https://api.github.com/users/J-shel/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @J-shel, thanks for reporting.\r\n\r\nI think the issue comes from your call to `load_dataset`. As first argument, you should pass:\r\n- either the name of your dataset (\"mrf\") if this is already published on the Hub\r\n- or the path to the loading script of your dataset (\"path/to/your/local/mrf.py\").", "Hi, following your suggestion, I changed my call to load_dataset. Below is the latest:\r\nreader = load_dataset('data/mrf.py',\"default\", input_size=1024, split=split, streaming=True, keep_in_memory=None)\r\nHowever, I still got the same error.\r\nI have one question that is if I only define input_size=2048 in BUILDER_CONFIGS, may I specify input_size=1024 when loading the dataset? Cause I found that I could only specify name=\"default\" since I only define name=\"default\" in BUILDER_CONFIGS." ]
2022-12-12T02:52:03
2022-12-19T01:38:48
null
NONE
null
### Describe the bug I tried to define a custom configuration with an `input_size` attribute following the instructions in "Specifying several dataset configurations" in https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html But when I load the dataset, I get the error "__init__() got an unexpected keyword argument 'input_size'" ### Steps to reproduce the bug Following is the code to define the dataset: class CsvConfig(datasets.BuilderConfig): """BuilderConfig for CSV.""" input_size: int = 2048 class MRF(datasets.ArrowBasedBuilder): """Archival MRF data""" BUILDER_CONFIG_CLASS = CsvConfig VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ CsvConfig(name="default", version=VERSION, description="MRF data", input_size=2048), ] ... def _generate_examples(self): input_size = self.config.input_size if input_size > 1000: numin = 10000 else: numin = 15000 Below is the code to load the dataset: reader = load_dataset("default", input_size=1024) ### Expected behavior I want to be able to pass the "input_size" parameter to the MRF dataset, and change "input_size" to any value when loading the dataset. ### Environment info - `datasets` version: 2.5.1 - Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
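One possible explanation for the error above, shown as a hedged sketch rather than a confirmed fix: in recent `datasets` versions `BuilderConfig` is a dataclass, so a subclass that only adds an annotated class attribute without the `@dataclass` decorator does not get `input_size` into the generated `__init__`. Decorating the subclass (or writing an explicit `__init__`) makes the keyword argument accepted.

```python
from dataclasses import dataclass

import datasets


@dataclass
class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig with a custom input_size attribute (illustrative)."""

    input_size: int = 2048
```

With the config defined this way, a call such as `load_dataset("path/to/mrf.py", "default", input_size=1024)` should be able to override the default value, although the exact override behavior for predefined config names can vary across `datasets` versions.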
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5352/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5351/comments
https://api.github.com/repos/huggingface/datasets/issues/5351/events
https://github.com/huggingface/datasets/issues/5351
1,490,659,504
I_kwDODunzps5Y2aiw
5,351
Do we need to implement `_prepare_split`?
{ "login": "jmwoloso", "id": 7530947, "node_id": "MDQ6VXNlcjc1MzA5NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmwoloso", "html_url": "https://github.com/jmwoloso", "followers_url": "https://api.github.com/users/jmwoloso/followers", "following_url": "https://api.github.com/users/jmwoloso/following{/other_user}", "gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions", "organizations_url": "https://api.github.com/users/jmwoloso/orgs", "repos_url": "https://api.github.com/users/jmwoloso/repos", "events_url": "https://api.github.com/users/jmwoloso/events{/privacy}", "received_events_url": "https://api.github.com/users/jmwoloso/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! `DatasetBuilder` is a parent class for concrete builders: `GeneratorBasedBuilder`, `ArrowBasedBuilder` and `BeamBasedBuilder`. When writing a builder script, these classes are the ones you should inherit from. And since all of them implement `_prepare_split`, you only have to implement the three methods mentioned above.", "Thanks so much @mariosasko for the fast response! I've been referencing [this page in the docs](https://huggingface.co/docs/datasets/v2.4.0/en/about_dataset_load) because it it pretty comprehensive in terms of what we have to do and I figured since we subclass the `BuilderConfig` the same pattern would hold, but I've also seen the page with those sub-classed builders as well, so that fills in a knowledge gap for me.", "cc @stevhliu who may have some ideas on how to improve this part of the docs.", "one more question for my understanding @mariosasko. the requirement of a loading script has always seemed counterintuitive to me. if i have to provide a script with every dataset, what is the point of using `datasets` if we're doing all the work of loading it, I can just do that in my code and skip the datasets integration (this of course discounts other potential benefits around metadata management, etc., my example is just simplest use case though for the sake of discussion).\r\n\r\nso i figured I would implement my own `BuilderConfig` and `DatasetBuilder` to handle that portion of it and not have to make a script. i _thought_ this would result in `datasets` (via `download_and_prepare`) then making me something that I could load using `load_dataset` moving forward.\r\n\r\nConcretely, i envisioned this pattern being possible:\r\n\r\n ```\r\nclass MyBuilderConfig(BuilderConfig):\r\n def __init__(self, name=\"my_named_dataset\", ...):\r\n super().__init__(name, ...)\r\n\r\nclass MyDatasetBuilder(GeneratorBasedBuilder):\r\n BUILDER_CONFIG_CLASS = MyBuilderConfig\r\n ....\r\n\r\nmy_builder = MyDatasetBuilder(...)\r\n\r\n# this doesn't exactly work like I thought; I don't get a dataset back, but NoneType instead\r\n# though I can see it loading the files and it generates the cache, etc.\r\nmy_dataset = my_builder.download_and_prepare()\r\n\r\n# load the dataset in the future by referencing it by name and loading from the cached arrow version\r\nnew_instance_of_my_dataset = load_dataset(\"my_named_dataset\")\r\n```\r\n\r\nI've seen references to the `save_to_disk` method which might be the next step I need in order to load it by name, in which case, that makes sense, then i just need to debug why `download_and_prepare` isn't returning me a dataset, but I feel like I still have a larger conceptual knowledge gap on how to use the library correctly.\r\n\r\nThanks again in advance!", "> the requirement of a loading script has always seemed counterintuitive to me\r\n\r\nThis is a requirement only for datasets not stored in standard formats such as CSV, JSON, SQL, Parquet, ImageFolder, etc. \r\n\r\n> if i have to provide a script with every dataset, what is the point of using datasets if we're doing all the work of loading it, I can just do that in my code and skip the datasets integration (this of course discounts other potential benefits around metadata management, etc., my example is just simplest use case though for the sake of discussion)\r\n\r\nOur README/documentation lists the main features... 
\r\n\r\nOne of the main ones is that our library makes it easy to work with datasets larger than RAM (thanks to Arrow and the caching mechanism), and this is not trivial to implement.\r\n\r\nRegarding the step-by-step builder, this is the pattern:\r\n```python\r\nfrom datasets import load_dataset_builder\r\nbuilder = load_dataset_builder(\"path/to/script\") # or direct instantiation with MyDatasetBuilder(...)\r\nbuilder.download_and_prepare()\r\ndset = builder.as_dataset()\r\n```", "ok, that makes sense. thank you @mariosasko. I realized i'd never looked on the hub at any of the files associated with any datasets. just did that now and it appears that i'll need to have a script regardless _but_ that will just contain my custom config and builder classes, so without realizing it I was already making my script, I just need to wrap that in a file that sits alongside my data (I looked at Glue and realized I was already doing what I thought didn't make sense to have to do, lol).\r\n\r\n`download_and_prepare` isn't returning me a dataset though, but I'll look into that and open another issue if I can't figure it out.", "`download_and_prepare` downloads and prepares the arrow files. You need to call `as_dataset` on the builder to get the dataset.", "ok, I think I was assigning the output of `builder.download_and_prepare` but it's an inplace op, so that explains the `NoneType` i was getting back. Now I'm getting:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-7-3ed50fb87c70> in <module>\r\n----> 1 ds = dataset_builder.as_dataset()\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)\r\n 1020 \r\n 1021 # Create a dataset for each of the given splits\r\n-> 1022 datasets = map_nested(\r\n 1023 partial(\r\n 1024 self._build_single_dataset,\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 442 num_proc = 1\r\n 443 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 444 mapped = [\r\n 445 _single_map_nested((function, obj, types, None, True, None))\r\n 446 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)\r\n 443 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 444 mapped = [\r\n--> 445 _single_map_nested((function, obj, types, None, True, None))\r\n 446 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 447 ]\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 347 \r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)\r\n 1051 \r\n 1052 # Build base dataset\r\n-> 1053 ds = self._as_dataset(\r\n 1054 split=split,\r\n 1055 in_memory=in_memory,\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory)\r\n 1120 \"\"\"\r\n 1121 cache_dir = 
self._fs._strip_protocol(self._output_dir)\r\n-> 1122 dataset_kwargs = ArrowReader(cache_dir, self.info).read(\r\n 1123 name=self.name,\r\n 1124 instructions=split,\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in read(self, name, instructions, split_infos, in_memory)\r\n 236 msg = f'Instruction \"{instructions}\" corresponds to no data!'\r\n 237 raise ValueError(msg)\r\n--> 238 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n 239 \r\n 240 def read_files(\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in read_files(self, files, original_instructions, in_memory)\r\n 257 \"\"\"\r\n 258 # Prepend path to filename\r\n--> 259 pa_table = self._read_files(files, in_memory=in_memory)\r\n 260 # If original_instructions is not None, convert it to a human-readable NamedSplit\r\n 261 if original_instructions is not None:\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in _read_files(self, files, in_memory)\r\n 192 f[\"filename\"] = os.path.join(self._path, f[\"filename\"])\r\n 193 for f_dict in files:\r\n--> 194 pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n 195 pa_tables.append(pa_table)\r\n 196 pa_tables = [t for t in pa_tables if len(t) > 0]\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in _get_table_from_filename(self, filename_skip_take, in_memory)\r\n 327 filename_skip_take[\"take\"] if \"take\" in filename_skip_take else None,\r\n 328 )\r\n--> 329 table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n 330 if take == -1:\r\n 331 take = len(table) - skip\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in read_table(filename, in_memory)\r\n 348 \"\"\"\r\n 349 table_cls = InMemoryTable if in_memory else MemoryMappedTable\r\n--> 350 return table_cls.from_file(filename)\r\n 351 \r\n 352 \r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/table.py in from_file(cls, filename, replays)\r\n 1034 @classmethod\r\n 1035 def from_file(cls, filename: str, replays=None):\r\n-> 1036 table = _memory_mapped_arrow_table_from_file(filename)\r\n 1037 table = cls._apply_replays(table, replays)\r\n 1038 return cls(table, filename, replays)\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/table.py in _memory_mapped_arrow_table_from_file(filename)\r\n 48 def _memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n 49 memory_mapped_stream = pa.memory_map(filename)\r\n---> 50 opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n 51 pa_table = opened_stream.read_all()\r\n 52 return pa_table\r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/ipc.py in open_stream(source)\r\n 152 reader : RecordBatchStreamReader\r\n 153 \"\"\"\r\n--> 154 return RecordBatchStreamReader(source)\r\n 155 \r\n 156 \r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/ipc.py in __init__(self, source)\r\n 43 \r\n 44 def __init__(self, source):\r\n---> 45 self._open(source)\r\n 46 \r\n 47 \r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\n", "looks like my arrow files 
are all empty @mariosasko \r\n\r\n![image](https://user-images.githubusercontent.com/7530947/208179977-9ae62c9a-866c-472b-9a09-25d1191188fb.png)\r\n\r\n\r\ni also see the `incomplete_info.lock` file a level up too. seems like the data isn't being persisted to disk when I call `download_and_prepare`. is there something else i need to do before then, perhaps?", "quick update @mariosasko. i got it working! i had to downgrade to `datasets==2.4.0`. testing other versions now and will let you know the results.", "I've tested with every version of `datasets>2.4.0` and i get the same error with all of them." ]
2022-12-12T01:38:54
2022-12-20T18:20:57
2022-12-12T16:48:56
NONE
null
### Describe the bug I'm not sure if this is a bug, if it's just missing from the documentation, or if I'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because on the `DatasetBuilder` class the `_prepare_split` method is abstract (as are the others we are required to implement, hence the genesis of my question): ``` Traceback (most recent call last): File "/home/jason/source/python/prism_machine_learning/examples/create_hf_datasets.py", line 28, in <module> dataset_builder.download_and_prepare() File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split raise NotImplementedError() NotImplementedError ``` ### Steps to reproduce the bug I will share the implementation if it turns out that everything should be working (i.e. we only need to implement those 3 methods the docs mention), but I don't want to distract from the original question. ### Expected behavior I just need to know if there are additional methods we need to implement when subclassing `DatasetBuilder` besides what the documentation specifies -> `_info`, `_split_generators` and `_generate_examples` ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
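For reference, a minimal skeleton (with illustrative feature names and file paths) of a builder that inherits from `GeneratorBasedBuilder`, which already provides `_prepare_split`, so only the three documented methods need to be written:

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # "train.txt" is a placeholder; real builders usually resolve paths via dl_manager.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": "train.txt"},
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```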
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5351/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5350/comments
https://api.github.com/repos/huggingface/datasets/issues/5350/events
https://github.com/huggingface/datasets/pull/5350
1,487,559,904
PR_kwDODunzps5E8y2E
5,350
Clean up Loading methods docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-09T22:25:30
2022-12-12T17:27:20
2022-12-12T17:24:01
MEMBER
null
Clean up for the docstrings in Loading methods!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5350/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5350", "html_url": "https://github.com/huggingface/datasets/pull/5350", "diff_url": "https://github.com/huggingface/datasets/pull/5350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5350.patch", "merged_at": "2022-12-12T17:24:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/5349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5349/comments
https://api.github.com/repos/huggingface/datasets/issues/5349/events
https://github.com/huggingface/datasets/pull/5349
1,487,396,780
PR_kwDODunzps5E8N6G
5,349
Clean up remaining Main Classes docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-09T20:17:15
2022-12-12T17:27:17
2022-12-12T17:24:13
MEMBER
null
This PR cleans up the remaining docstrings in Main Classes (`IterableDataset`, `IterableDatasetDict`, and `Features`).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5349/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5349", "html_url": "https://github.com/huggingface/datasets/pull/5349", "diff_url": "https://github.com/huggingface/datasets/pull/5349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5349.patch", "merged_at": "2022-12-12T17:24:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/5348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5348/comments
https://api.github.com/repos/huggingface/datasets/issues/5348/events
https://github.com/huggingface/datasets/issues/5348
1,486,975,626
I_kwDODunzps5YoXKK
5,348
The data downloaded in the download folder of the cache does not respect `umask`
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "note, that `datasets` already did some of that umask fixing in the past and also at the hub - the recent work on the hub about the same: https://github.com/huggingface/huggingface_hub/pull/1220\r\n\r\nAlso I noticed that each file has a .json counterpart and the latter always has the correct perms:\r\n\r\n```\r\n-rw------- 1 uue59kq cnw 173M Dec 9 01:37 537596e64721e2ae3d98785b91d30fda0360c196a8224e29658ad629e7303a4d\r\n-rw-rw---- 1 uue59kq cnw 101 Dec 9 01:37 537596e64721e2ae3d98785b91d30fda0360c196a8224e29658ad629e7303a4d.json\r\n```\r\n\r\nso perhaps cheating is possible and syncing the perms between the 2 will do the trick." ]
2022-12-09T15:46:27
2022-12-09T17:21:26
null
NONE
null
### Describe the bug For a project on a cluster we are several users to share the same cache for the datasets library. And we have a problem with the permissions on the data downloaded in the cache. Indeed, it seems that the data is downloaded by giving read and write permissions only to the user launching the command (and no permissions to the group). In our case, those permissions don't respect the `umask` of this user, which was `0007`. Traceback: ``` Using custom data configuration default Downloading and preparing dataset text_caps/default to /gpfswork/rech/cnw/commun/datasets/HuggingFaceM4___text_caps/default/1.0.0/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141... Downloading data files: 100%|████████████████████| 3/3 [00:00<00:00, 921.62it/s] --------------------------------------------------------------------------- PermissionError Traceback (most recent call last) Cell In [3], line 1 ----> 1 ds = load_dataset(dataset_name) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1745 # Download and prepare data -> 1746 builder_instance.download_and_prepare( 1747 download_config=download_config, 1748 download_mode=download_mode, 1749 ignore_verifications=ignore_verifications, 1750 try_from_hf_gcs=try_from_hf_gcs, 1751 use_auth_token=use_auth_token, 1752 ) 1754 # Build dataset for splits 1755 keep_in_memory = ( 1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1757 ) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. 
Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos) 1226 def _download_and_prepare(self, dl_manager, verify_infos): -> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 773 # Checksums verification 774 if verify_infos and dl_manager.record_checksums: File /gpfswork/rech/cnw/commun/modules/datasets_modules/datasets/HuggingFaceM4--TextCaps/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141/TextCaps.py:125, in TextCapsDataset._split_generators(self, dl_manager) 123 def _split_generators(self, dl_manager): 124 # urls = _URLS[self.config.name] # TODO later --> 125 data_dir = dl_manager.download_and_extract(_URLS) 126 gen_kwargs = { 127 split_name: { 128 f"{dir_name}_path": Path(data_dir[dir_name][split_name]) (...) 133 for split_name in ["train", "val", "test"] 134 } 136 for split_name in ["train", "val", "test"]: File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls) 415 def download_and_extract(self, url_or_urls): 416 """Download and extract given url_or_urls. 417 418 Is roughly equivalent to: (...) 429 extracted_path(s): `str`, extracted paths of given URL(s). 
430 """ --> 431 return self.extract(self.download(url_or_urls)) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:324, in DownloadManager.download(self, url_or_urls) 321 self.downloaded_paths.update(dict(zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten()))) 323 start_time = datetime.now() --> 324 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) 325 duration = datetime.now() - start_time 326 logger.info(f"Checksum Computation took {duration.total_seconds() // 60} min") File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:229, in DownloadManager._record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths) 226 """Record size/checksum of downloaded files.""" 227 for url, path in zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten()): 228 # call str to support PathLike objects --> 229 self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict( 230 path, record_checksum=self.record_checksums 231 ) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/utils/info_utils.py:82, in get_size_checksum_dict(path, record_checksum) 80 if record_checksum: 81 m = sha256() ---> 82 with open(path, "rb") as f: 83 for chunk in iter(lambda: f.read(1 << 20), b""): 84 m.update(chunk) PermissionError: [Errno 13] Permission denied: '/gpfswork/rech/cnw/commun/datasets/downloads/1e6aa6d23190c30885194fabb193dce3874d902d7636b66315ee8aaa584e80d6' ``` ### Steps to reproduce the bug I think the following will reproduce the bug. Given 2 users belonging to the same group with `umask` set to `0007` - first run with User 1: ```python from datasets import load_dataset ds_name = "HuggingFaceM4/VQAv2" ds = load_dataset(ds_name) ``` - then run with User 2: ```python from datasets import load_dataset ds_name = "HuggingFaceM4/TextCaps" ds = load_dataset(ds_name) ``` ### Expected behavior No `PermissionError` ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5348/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5348/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5347/comments
https://api.github.com/repos/huggingface/datasets/issues/5347/events
https://github.com/huggingface/datasets/pull/5347
1,486,920,261
PR_kwDODunzps5E6jb1
5,347
Force soundfile to return float32 instead of the default float64
{ "login": "qmeeus", "id": 25608944, "node_id": "MDQ6VXNlcjI1NjA4OTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qmeeus", "html_url": "https://github.com/qmeeus", "followers_url": "https://api.github.com/users/qmeeus/followers", "following_url": "https://api.github.com/users/qmeeus/following{/other_user}", "gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}", "starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions", "organizations_url": "https://api.github.com/users/qmeeus/orgs", "repos_url": "https://api.github.com/users/qmeeus/repos", "events_url": "https://api.github.com/users/qmeeus/events{/privacy}", "received_events_url": "https://api.github.com/users/qmeeus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "cc @polinaeterna", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5347). All of your documentation changes will be reflected on that endpoint.", "Cool ! Feel free to add a comment in the code to explain that and we can merge :)", "I'm not sure if this is a good change since we plan to get rid of `torchaudio` in the next couple of months...", "What do you think @polinaeterna @patrickvonplaten ? Models are usually using float32 (e.g. Wev2vec2 in `transformers`) IIRC", "IMO we can safely assume that float32 is always good enough when using audio models in inference or training. Nevertheless there might be use cases for audio datasets in the future where float64 is needed. \r\n\r\n=> I would by default always cast to float32, but then possible allow the user to cast to float64 ", "> I'm not sure if this is a good change since we plan to get rid of torchaudio in the next couple of months...\r\n\r\n@mariosasko I agree but who knows how long we will have to wait until we are really able to do so (https://github.com/bastibe/libsndfile-binaries/pull/17 is a draft. so as @patrickvonplaten is okay with float32, I'd merge.\r\n\r\n\r\n", "@polinaeterna Can you comment on the linked PR to see why it's still a draft? Maybe we can help somehow to get this merged finally.\r\n\r\nI think it's weird to align `soundfile` with `torchaudio` when the latter is only used for MP3 (and prob for not much longer). " ]
2022-12-09T15:10:24
2023-01-17T16:12:49
null
NONE
null
(Fixes issue #5345)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5347/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5347/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5347", "html_url": "https://github.com/huggingface/datasets/pull/5347", "diff_url": "https://github.com/huggingface/datasets/pull/5347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5347.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5346/comments
https://api.github.com/repos/huggingface/datasets/issues/5346/events
https://github.com/huggingface/datasets/issues/5346
1,486,884,983
I_kwDODunzps5YoBB3
5,346
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "As the survey is finished, can we close this issue, @LysandreJik ?", "Yes! I'll post a public summary on the forums shortly." ]
2022-12-09T14:48:02
2023-01-25T19:35:41
2023-01-25T19:35:40
MEMBER
null
Thanks to all of you, Datasets is just about to pass 15k stars! Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face ecosystem. There were 25 new releases of `transformers`, 21 new releases of `datasets`, 13 new releases of `accelerate`. If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://docs.google.com/forms/d/e/1FAIpQLSf4xFQKtpjr6I_l7OfNofqiR8s-WG6tcNbkchDJJf5gYD72zQ/viewform?usp=sf_link) (please reply in the above feedback form rather than to this thread) Thank you all on behalf of the HuggingFace team! 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5346/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5346/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5345/comments
https://api.github.com/repos/huggingface/datasets/issues/5345/events
https://github.com/huggingface/datasets/issues/5345
1,486,555,384
I_kwDODunzps5Ymwj4
5,345
Wrong dtype for array in audio features
{ "login": "qmeeus", "id": 25608944, "node_id": "MDQ6VXNlcjI1NjA4OTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qmeeus", "html_url": "https://github.com/qmeeus", "followers_url": "https://api.github.com/users/qmeeus/followers", "following_url": "https://api.github.com/users/qmeeus/following{/other_user}", "gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}", "starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions", "organizations_url": "https://api.github.com/users/qmeeus/orgs", "repos_url": "https://api.github.com/users/qmeeus/repos", "events_url": "https://api.github.com/users/qmeeus/events{/privacy}", "received_events_url": "https://api.github.com/users/qmeeus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "After some more investigation, this is due to [this line of code](https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L279). The function `sf.read(file)` should be updated to `sf.read(file, dtype=\"float32\")`\r\n\r\nIndeed, the default value in soundfile is `float64` ([see here](https://pysoundfile.readthedocs.io/en/latest/#soundfile.read)). \r\n", "@qmeeus I agree, decoding of different audio formats should return the same dtypes indeed!\r\n\r\nBut note that here you are concatenating datasets with different sampling rates: 48000 for CommonVoice and 16000 for Voxpopuli. So you should cast them to the same sampling rate value before interleaving, for example:\r\n```\r\ncv = cv.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n```\r\notherwise you would get the same error because features of the same column (\"audio\") are not the same.\r\n\r\nAlso, the error you get is unexpected. Could you please confirm that you use the latest main version of the `datasets`? We had an issue that could lead to an error like this after using `rename_column` method, but it was fixed in https://github.com/huggingface/datasets/pull/5287 " ]
2022-12-09T11:05:11
2022-12-16T13:44:46
null
NONE
null
### Describe the bug When concatenating/interleaving different datasets, I stumble into an error because the features can't be aligned. After some investigation, I understood that the audio arrays had different dtypes, namely `float32` and `float64`. Consequently, the datasets cannot be merged. ### Steps to reproduce the bug For example, for `facebook/voxpopuli` and `mozilla-foundation/common_voice_11_0`: ``` from datasets import load_dataset, interleave_datasets covost = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True) voxpopuli = datasets.load_dataset("facebook/voxpopuli", "nl", split="train", streaming=True) sample_cv, = covost.take(1) sample_vp, = voxpopuli.take(1) assert sample_cv["audio"]["array"].dtype == sample_vp["audio"]["array"].dtype # Fails dataset = interleave_datasets([covost, voxpopuli]) # ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'language': Value(dtype='int64', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'normalized_text': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'speaker_id': Value(dtype='string', id=None), 'is_gold_transcript': Value(dtype='bool', id=None), 'accent': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null"). ``` ### Expected behavior The audio should be loaded to arrays with a unique dtype (I guess `float32`) ### Environment info ``` - `datasets` version: 2.7.1.dev0 - Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5345/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5344/comments
https://api.github.com/repos/huggingface/datasets/issues/5344/events
https://github.com/huggingface/datasets/pull/5344
1,485,628,319
PR_kwDODunzps5E2BPN
5,344
Clean up Dataset and DatasetDict
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-09T00:02:08
2022-12-13T00:56:07
2022-12-13T00:53:02
MEMBER
null
This PR cleans up the docstrings for the other half of the methods in `Dataset` and finishes `DatasetDict`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5344/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5344/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5344", "html_url": "https://github.com/huggingface/datasets/pull/5344", "diff_url": "https://github.com/huggingface/datasets/pull/5344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5344.patch", "merged_at": "2022-12-13T00:53:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/5343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5343/comments
https://api.github.com/repos/huggingface/datasets/issues/5343/events
https://github.com/huggingface/datasets/issues/5343
1,485,297,823
I_kwDODunzps5Yh9if
5,343
T5 for Q&A produces truncated sentence
{ "login": "junyongyou", "id": 13484072, "node_id": "MDQ6VXNlcjEzNDg0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/13484072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junyongyou", "html_url": "https://github.com/junyongyou", "followers_url": "https://api.github.com/users/junyongyou/followers", "following_url": "https://api.github.com/users/junyongyou/following{/other_user}", "gists_url": "https://api.github.com/users/junyongyou/gists{/gist_id}", "starred_url": "https://api.github.com/users/junyongyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junyongyou/subscriptions", "organizations_url": "https://api.github.com/users/junyongyou/orgs", "repos_url": "https://api.github.com/users/junyongyou/repos", "events_url": "https://api.github.com/users/junyongyou/events{/privacy}", "received_events_url": "https://api.github.com/users/junyongyou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-12-08T19:48:46
2022-12-08T19:57:17
2022-12-08T19:57:17
NONE
null
Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions. For example, I set both the max_length, max_input_length, max_output_length to 128. How to deal with those long answers? I just left them as is and the T5Tokenizer can automatically handle. I would assume the tokenizer just truncates an answer at the position of 128th word (or 127th). Is it possible that I manually split an answer into different parts, each part has 128 words; and then all these sub-answers serve as a separate answer to the same question? Another question is that I get incomplete (truncated) answers when using the fine-tuned model in inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add at the end of texts when fine-tuning T5. I followed that but then got a warning message that duplicated were found. I am assuming that this is because the tokenizer truncates an answer text, thus is missing in the truncated answer, such that the end token is not produced in predicted answer. However, I am not sure. Can anybody point out how to address this issue? Any suggestions are highly appreciated. Below is some code snippet. ` import pytorch_lightning as pl from torch.utils.data import DataLoader import torch import numpy as np import time from pathlib import Path from transformers import ( Adafactor, T5ForConditionalGeneration, T5Tokenizer, get_linear_schedule_with_warmup ) from torch.utils.data import RandomSampler from question_answering.utils import * class T5FineTuner(pl.LightningModule): def __init__(self, hyparams): super(T5FineTuner, self).__init__() self.hyparams = hyparams self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path) self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path) if self.hyparams.freeze_embeds: self.freeze_embeds() if self.hyparams.freeze_encoder: self.freeze_params(self.model.get_encoder()) # assert_all_frozen() self.step_count = 0 self.output_dir = Path(self.hyparams.output_dir) n_observations_per_split = { 'train': self.hyparams.n_train, 'validation': self.hyparams.n_val, 'test': self.hyparams.n_test } self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()} self.em_score_list = [] self.subset_score_list = [] data_folder = r'C:\Datasets\MedQuAD-master' self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder) def freeze_params(self, model): for param in model.parameters(): param.requires_grad = False def freeze_embeds(self): try: self.freeze_params(self.model.model.shared) for d in [self.model.model.encoder, self.model.model.decoder]: self.freeze_params(d.embed_positions) self.freeze_params(d.embed_tokens) except AttributeError: self.freeze_params(self.model.shared) for d in [self.model.encoder, self.model.decoder]: self.freeze_params(d.embed_tokens) def lmap(self, f, x): return list(map(f, x)) def is_logger(self): return self.trainer.proc_rank <= 0 def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None): return self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, 
labels=labels ) def _step(self, batch): labels = batch['target_ids'] labels[labels[:, :] == self.tokenizer.pad_token_id] = -100 outputs = self( input_ids = batch['source_ids'], attention_mask=batch['source_mask'], labels=labels, decoder_attention_mask=batch['target_mask'] ) loss = outputs[0] return loss def ids_to_clean_text(self, generated_ids): gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) return self.lmap(str.strip, gen_text) def _generative_step(self, batch): t0 = time.time() generated_ids = self.model.generate( batch["source_ids"], attention_mask=batch["source_mask"], use_cache=True, decoder_attention_mask=batch['target_mask'], max_length=128, num_beams=2, early_stopping=True ) preds = self.ids_to_clean_text(generated_ids) targets = self.ids_to_clean_text(batch["target_ids"]) gen_time = (time.time() - t0) / batch["source_ids"].shape[0] loss = self._step(batch) base_metrics = {'val_loss': loss} summ_len = np.mean(self.lmap(len, generated_ids)) base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets) em_score, subset_match_score = calculate_scores(preds, targets) self.em_score_list.append(em_score) self.subset_score_list.append(subset_match_score) em_score = torch.tensor(em_score, dtype=torch.float32) subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32) base_metrics.update(em_score=em_score, subset_match_score=subset_match_score) # rouge_results = self.rouge_metric.compute() # rouge_dict = self.parse_score(rouge_results) return base_metrics def training_step(self, batch, batch_idx): loss = self._step(batch) tensorboard_logs = {'train_loss': loss} return {'loss': loss, 'log': tensorboard_logs} def training_epoch_end(self, outputs): avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean() tensorboard_logs = {'avg_train_loss': avg_train_loss} # return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs} def validation_step(self, batch, batch_idx): return self._generative_step(batch) def validation_epoch_end(self, outputs): avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() tensorboard_logs = {'val_loss': avg_loss} if len(self.em_score_list) <= 2: average_em_score = sum(self.em_score_list) / len(self.em_score_list) average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list) else: latest_em_score = self.em_score_list[:-2] latest_subset_score = self.subset_score_list[:-2] average_em_score = sum(latest_em_score) / len(latest_em_score) average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score) average_em_score = torch.tensor(average_em_score, dtype=torch.float32) average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32) tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score) self.target_gen = [] self.prediction_gen = [] return { 'avg_val_loss': avg_loss, 'em_score': average_em_score, 'subset_match_socre': average_subset_match_score, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs } def configure_optimizers(self): model = self.model no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": self.hyparams.weight_decay, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = 
Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate, scale_parameter=False, relative_step=False) self.opt = optimizer return [optimizer] def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False): optimizer.step(closure=optimizer_closure) optimizer.zero_grad() self.lr_scheduler.step() def get_tqdm_dict(self): tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]} return tqdm_dict def train_dataloader(self): n_samples = self.n_obs['train'] train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data, num_samples=n_samples, args=self.hyparams) sampler = RandomSampler(train_dataset) dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=self.hyparams.train_batch_size, drop_last=True, num_workers=4) # t_total = ( # (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu))) # // self.hyparams.gradient_accumulation_steps # * float(self.hyparams.num_train_epochs) # ) t_total = 100000 scheduler = get_linear_schedule_with_warmup( self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total ) self.lr_scheduler = scheduler return dataloader def val_dataloader(self): n_samples = self.n_obs['validation'] validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data, num_samples=n_samples, args=self.hyparams) sampler = RandomSampler(validation_dataset) return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size, sampler=sampler, num_workers=4) def test_dataloader(self): n_samples = self.n_obs['test'] test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data, num_samples=n_samples, args=self.hyparams) return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4) def on_save_checkpoint(self, checkpoint): save_path = self.output_dir.joinpath("best_tfmr") self.model.config.save_step = self.step_count self.model.save_pretrained(save_path) self.tokenizer.save_pretrained(save_path) import os import argparse import pytorch_lightning as pl from question_answering.t5_closed_book import T5FineTuner if __name__ == '__main__': args_dict = dict( output_dir="", # path to save the checkpoints model_name_or_path='t5-large', tokenizer_name_or_path='t5-large', max_input_length=128, max_output_length=128, freeze_encoder=False, freeze_embeds=False, learning_rate=1e-5, weight_decay=0.0, adam_epsilon=1e-8, warmup_steps=0, train_batch_size=4, eval_batch_size=4, num_train_epochs=2, gradient_accumulation_steps=10, n_gpu=1, resume_from_checkpoint=None, val_check_interval=0.5, n_val=4000, n_train=-1, n_test=-1, early_stop_callback=False, fp_16=False, opt_level='O1', max_grad_norm=1.0, seed=101, ) args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100, 'train_batch_size': 16, 'eval_batch_size': 16, 'learning_rate': 1e-3}) args = argparse.Namespace(**args_dict) checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score", mode="max", save_top_k=1) ## If resuming from checkpoint, add an arg resume_from_checkpoint train_params = dict( accumulate_grad_batches=args.gradient_accumulation_steps, gpus=args.n_gpu, max_epochs=args.num_train_epochs, # early_stop_callback=False, precision=16 if args.fp_16 else 32, # amp_level=args.opt_level, # resume_from_checkpoint=args.resume_from_checkpoint, gradient_clip_val=args.max_grad_norm, 
checkpoint_callback=checkpoint_callback, val_check_interval=args.val_check_interval, # accelerator='dp' # logger=wandb_logger, # callbacks=[LoggingCallback()], ) model = T5FineTuner(args) trainer = pl.Trainer(**train_params) trainer.fit(model) `
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5343/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5342/comments
https://api.github.com/repos/huggingface/datasets/issues/5342/events
https://github.com/huggingface/datasets/issues/5342
1,485,244,178
I_kwDODunzps5YhwcS
5,342
Emotion dataset cannot be downloaded
{ "login": "cbarond", "id": 78887193, "node_id": "MDQ6VXNlcjc4ODg3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/78887193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cbarond", "html_url": "https://github.com/cbarond", "followers_url": "https://api.github.com/users/cbarond/followers", "following_url": "https://api.github.com/users/cbarond/following{/other_user}", "gists_url": "https://api.github.com/users/cbarond/gists{/gist_id}", "starred_url": "https://api.github.com/users/cbarond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cbarond/subscriptions", "organizations_url": "https://api.github.com/users/cbarond/orgs", "repos_url": "https://api.github.com/users/cbarond/repos", "events_url": "https://api.github.com/users/cbarond/events{/privacy}", "received_events_url": "https://api.github.com/users/cbarond/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi @cbarond there's already an open issue at https://github.com/dair-ai/emotion_dataset/issues/5, as the data seems to be missing now, so check that issue instead 👍🏻 ", "Thanks @cbarond for reporting and @alvarobartt for pointing to the issue we opened in the author's repo.\r\n\r\nIndeed, this issue was first raised in the \"emotion\" dataset Community tab: https://huggingface.co/datasets/emotion/discussions/3\r\n\r\nI'm closing this issue and leave the issue above for the subsequent updates.\r\n\r\nDuplicate of: https://huggingface.co/datasets/emotion/discussions/3", "try using \"SetFit/emotion\" instead", "> try using \"SetFit/emotion\" instead\r\n\r\nI' replaced \"emotion\" with \"SetFit/Emotion\", but the code is getting stuck at\r\n\r\n`emotions = load_dataset(\"SetFit/emotion\")`\r\n\r\nI pause execution using the debugger, and it takes me to filelock.py:226\r\n\r\n`with self._thread_lock:`\r\n\r\nDo you know a way to get past this issue?", "thanks @honeyimholm - worked for me", "> try using \"SetFit/emotion\" instead\r\n\r\nIt really helps a lot, thank you!", "The dataset loading script has been fixed: https://huggingface.co/datasets/emotion/discussions/4" ]
2022-12-08T19:07:09
2023-01-02T12:05:37
2022-12-09T10:46:11
NONE
null
### Describe the bug The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`. It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022). ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("emotion") ``` ### Expected behavior The dataset should load properly. ### Environment info - `datasets` version: 2.7.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.13 - PyArrow version: 10.0.1 - Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5342/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5341/comments
https://api.github.com/repos/huggingface/datasets/issues/5341/events
https://github.com/huggingface/datasets/pull/5341
1,484,376,644
PR_kwDODunzps5Exohx
5,341
Remove tasks.json
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-08T11:04:35
2022-12-09T12:26:21
2022-12-09T12:23:20
MEMBER
null
After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5341/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5341", "html_url": "https://github.com/huggingface/datasets/pull/5341", "diff_url": "https://github.com/huggingface/datasets/pull/5341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5341.patch", "merged_at": "2022-12-09T12:23:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/5340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5340/comments
https://api.github.com/repos/huggingface/datasets/issues/5340/events
https://github.com/huggingface/datasets/pull/5340
1,483,182,158
PR_kwDODunzps5EtWo3
5,340
Clean up DatasetInfo and Dataset docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-08T00:17:53
2022-12-08T19:33:14
2022-12-08T19:30:10
MEMBER
null
This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5340/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5340/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5340", "html_url": "https://github.com/huggingface/datasets/pull/5340", "diff_url": "https://github.com/huggingface/datasets/pull/5340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5340.patch", "merged_at": "2022-12-08T19:30:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/5339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5339/comments
https://api.github.com/repos/huggingface/datasets/issues/5339/events
https://github.com/huggingface/datasets/pull/5339
1,482,817,424
PR_kwDODunzps5EsC8N
5,339
Add Video feature, videofolder, and video-classification task
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5339). All of your documentation changes will be reflected on that endpoint.", "@lhoestq I think I need some serious help with the tests 😅...I started this locally but it got too time consuming.\n\nOne issue I remember running into is with lossless audio encoding/decoding. I started thinking of using the underlying Audio feature instead of PyAV so I didn't have to rewrite similar logic here...but assumed that would turn into a mess w/ underlying logic" ]
2022-12-07T20:48:34
2023-01-05T23:54:12
null
CONTRIBUTOR
null
This PR does the following: - Adds `Video` feature (Resolves #5225 ) - Adds `video-classification` task - Adds `videofolder` packaged module for easy loading of local video classification datasets TODO: - [ ] add tests - [ ] add docs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5339/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5339/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5339", "html_url": "https://github.com/huggingface/datasets/pull/5339", "diff_url": "https://github.com/huggingface/datasets/pull/5339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5339.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5338/comments
https://api.github.com/repos/huggingface/datasets/issues/5338/events
https://github.com/huggingface/datasets/issues/5338
1,482,646,151
I_kwDODunzps5YX2KH
5,338
`map()` stops every 1000 steps
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\n> It starts using all the cores (I am not sure why because I did not pass num_proc)\r\n\r\nThe tokenizer uses Rust code that is multithreaded. And maybe the `feature_extractor` might run some things in parallel as well - but I'm not super familiar with its internals.\r\n\r\n> then progress bar stops at every 1k steps. (starts using a single core)\r\n\r\nEvery 1000 examples we flush the processed examples to disk. It is this way because Arrow is a columnar format: you must write data chunk by chunk. The processing in on hold while writing right now - maybe this can be improved in the future.", "Hi @lhoestq \r\nThanks for the explanation! it was so helpful! Let me check why `feature_extractor` is running on multiple cpus." ]
2022-12-07T19:09:40
2022-12-10T00:39:29
2022-12-10T00:39:28
NONE
null
### Describe the bug I am passing the following `prepare_dataset` function to `Dataset.map` (code is inspired from [here](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py#L454)) ```python3 def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch[text_column]).input_ids return batch ... train_ds = train_ds.map(prepare_dataset) ``` Here is the exact code I am running https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets/blob/main/train.py#L70-L71 It starts using all the cores (I am not sure why because I did not pass `num_proc`) then progress bar stops at every 1k steps. (starts using a single core) then come back to using all the cores again. link to [screen record](https://youtu.be/jPQpQQGp6Gc) Can someone explain this process and maybe provide a way to improve this pipeline? cc: @lhoestq ### Steps to reproduce the bug 1. load the dataset 2. create a Whisper processor 3. create a `prepare_dataset` function 4. pass the function to `dataset.map(prepare_dataset)` ### Expected behavior - Use a single core per a function - not to stop at some point? ### Environment info - `datasets` version: 2.7.1.dev0 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5338/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5338/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5337/comments
https://api.github.com/repos/huggingface/datasets/issues/5337/events
https://github.com/huggingface/datasets/issues/5337
1,481,692,156
I_kwDODunzps5YUNP8
5,337
Support webdataset format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I like the idea of having `webdataset` as an optional dependency to ensure our loader generates web datasets the same way as the main project.", "Webdataset is the one of the most popular dataset formats for large scale computer vision tasks. Upvote for this issue. " ]
2022-12-07T11:32:25
2023-01-04T20:35:31
null
MEMBER
null
Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234. In particular it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset on the Hugging Face Hub). Some datasets on the Hub are already in webdataset format. In terms of implementation, we can have something similar to the Parquet loader. I also think it's fine to have webdataset as an optional dependency.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5337/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5336/comments
https://api.github.com/repos/huggingface/datasets/issues/5336/events
https://github.com/huggingface/datasets/pull/5336
1,479,649,900
PR_kwDODunzps5Egzed
5,336
Set `IterableDataset.map` param `batch_size` typing as optional
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5336). All of your documentation changes will be reflected on that endpoint.", "Hi @mariosasko, @lhoestq I was wondering whether we should include `batched` as a `pytest.mark` param for the functions testing `IterableDataset.map` so as to ensure that the changes done in this PR work fine without breaking anything of the actual functionality.\r\n\r\nI've pushed updated tests just for one of the unit testing functions to be run as `pytest tests/test_iterable_dataset.py::test_mapped_examples_iterable -s --durations 0`, but some are still missing `batched` param, it was just to ask you whether we're supposed to do this for the rest of the functions or not, if it's a yes I'll push the commit as it's ready, but didn't want to push extra stuff that may be discarded later!\r\n\r\nThanks :hugs:", "Thanks for the feedback @lhoestq, I agree with keeping `Optional` instead of `Union[type, None]` for now 👍🏻" ]
2022-12-06T17:08:10
2022-12-07T14:14:56
2022-12-07T14:06:27
CONTRIBUTOR
null
This PR solves #5325 ~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~ ~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Optional`?~ -> Keeping `Optional` still for consistency with the rest of the code in `datasets` Also we now allow `batch_size` to be `None` for `IterableDataset.map` and `IterableDataset.filter`e.g. `MappedExamplesIterable` as `map` is internally instantiating those and propagating the `batch_size` param so if it can be `None` for `map` it should also do so for `MappedExamplesIterable`, as well as for `FilteredExamplesIterable` when calling `IterableDataset.filter`. ## TODOs - [x] Add integration tests - [x] Handle scenario where `batched=True` and `batch_size=None` or `batch_size<=0`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5336/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5336", "html_url": "https://github.com/huggingface/datasets/pull/5336", "diff_url": "https://github.com/huggingface/datasets/pull/5336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5336.patch", "merged_at": "2022-12-07T14:06:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/5335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5335/comments
https://api.github.com/repos/huggingface/datasets/issues/5335/events
https://github.com/huggingface/datasets/pull/5335
1,478,890,788
PR_kwDODunzps5EeHdA
5,335
Update tasks.json
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think the only place where we need to add it is here https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n\r\nAnd I think we can remove tasks.json completely from this repo", "Isn't tasks.json used anymore in this repo?", "> I think the only place where we need to add it is here https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n> \r\n> And I think we can remove tasks.json completely from this repo\r\n\r\nWhat about the warning I mentioned in https://github.com/huggingface/datasets/issues/5255#issuecomment-1339013527? Also, the depth estimation entry is already present in https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts. ", "The update is based on what I received in the output of the export job (c.f. https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195). \r\n\r\nEdit: Oh, are you referring to the dataset card of NYU Depth V2?", "Yes, my suggestion was for the dataset card: you got the error message because you tried to set `depth-estimation` in `class_ids` instead of `class_categories`.", "> What about the warning I mentioned in https://github.com/huggingface/datasets/issues/5255#issuecomment-1339013527? Also, the depth estimation entry is already present in https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts.\r\n\r\nif you place it in `task_categories` you should be good :)", "yes i would suggest rm'ing tasks.json here for clarity", "Closing it. ", "It's not clear if we can remove it btw, since old versions of `evaluate` rely on it (see https://github.com/huggingface/evaluate/pull/309)\r\n\r\ncc @lvwerra ", "Actually it can be removed without incidence in old versions of evaluate since we kept an hardcoded `known_task_ids` that is marked \"DEPRECATED\"" ]
2022-12-06T11:37:57
2022-12-08T11:05:33
2022-12-07T12:46:03
MEMBER
null
Context: * https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195 Cc: @osanseviero
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5335/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5335", "html_url": "https://github.com/huggingface/datasets/pull/5335", "diff_url": "https://github.com/huggingface/datasets/pull/5335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5335.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5334/comments
https://api.github.com/repos/huggingface/datasets/issues/5334/events
https://github.com/huggingface/datasets/pull/5334
1,477,421,927
PR_kwDODunzps5EY9zN
5,334
Clean up docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! Let us know if we can help :)\r\n\r\nSmall pref for having multiple PRs", "Awesome, thanks! Sorry this one is a little big, I'll open some smaller ones next :)" ]
2022-12-05T20:56:08
2022-12-09T01:44:25
2022-12-09T01:41:44
MEMBER
null
As raised by @polinaeterna in #5324, some of the docstrings are a bit of a mess because it has both Markdown and Sphinx syntax. This PR fixes the docstring for `DatasetBuilder`. I'll start working on cleaning up the rest of the docstrings and removing the old Sphinx syntax (let me know if you prefer one big PR with all the cleaned changes or multiple smaller ones)! 🧼
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5334/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5334/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5334", "html_url": "https://github.com/huggingface/datasets/pull/5334", "diff_url": "https://github.com/huggingface/datasets/pull/5334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5334.patch", "merged_at": "2022-12-09T01:41:44" }
true
https://api.github.com/repos/huggingface/datasets/issues/5333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5333/comments
https://api.github.com/repos/huggingface/datasets/issues/5333/events
https://github.com/huggingface/datasets/pull/5333
1,476,890,156
PR_kwDODunzps5EXGQ2
5,333
fix: 🐛 pass the token to get the list of config names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-05T16:06:09
2022-12-06T08:25:17
2022-12-06T08:22:49
CONTRIBUTOR
null
Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token.
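A hypothetical usage sketch of the behaviour this PR fixes: forwarding an authentication token so that `get_dataset_infos` can resolve the config names of a gated or private dataset. The repository name and token value below are placeholders, not real resources, and `use_auth_token` is assumed to be the keyword that carries the token.

```python
from datasets import get_dataset_infos

# Placeholder private repo and token -- substitute your own values.
infos = get_dataset_infos(
    "your-namespace/your-private-dataset",
    use_auth_token="hf_xxx",  # token with read access to the gated/private repo
)

# One DatasetInfo entry per config name once the token is forwarded correctly.
print(list(infos.keys()))
```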
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5333/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5333/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5333", "html_url": "https://github.com/huggingface/datasets/pull/5333", "diff_url": "https://github.com/huggingface/datasets/pull/5333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5333.patch", "merged_at": "2022-12-06T08:22:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/5332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5332/comments
https://api.github.com/repos/huggingface/datasets/issues/5332/events
https://github.com/huggingface/datasets/issues/5332
1,476,513,072
I_kwDODunzps5YAc0w
5,332
Passing numpy array to ClassLabel names causes ValueError
{ "login": "freddyheppell", "id": 1475568, "node_id": "MDQ6VXNlcjE0NzU1Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1475568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/freddyheppell", "html_url": "https://github.com/freddyheppell", "followers_url": "https://api.github.com/users/freddyheppell/followers", "following_url": "https://api.github.com/users/freddyheppell/following{/other_user}", "gists_url": "https://api.github.com/users/freddyheppell/gists{/gist_id}", "starred_url": "https://api.github.com/users/freddyheppell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freddyheppell/subscriptions", "organizations_url": "https://api.github.com/users/freddyheppell/orgs", "repos_url": "https://api.github.com/users/freddyheppell/repos", "events_url": "https://api.github.com/users/freddyheppell/events{/privacy}", "received_events_url": "https://api.github.com/users/freddyheppell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Should `datasets` allow `ClassLabel` input parameter to be an `np.array` even though internally we need to cast it to a Python list? @lhoestq @mariosasko ", "Hi! No, I don't think so. The `names` parameter is [annotated](https://github.com/huggingface/datasets/blob/582236640b9109988e5f7a16a8353696ffa09a16/src/datasets/features/features.py#L892) as `List[str]` (**NumPy arrays are not lists**), and considering that type checking is not a common practice in Python, I think we can leave the code as-is.", "I appreciate it is the wrong type, and that type checking is not common, but I think there's a few circumstances that make it a good idea from a usability perspective.\r\n\r\nIt's quite a difficult error to debug because it comes from a utility function (so it's not immediately obvious which parameter caused it). What makes it even more difficult is the exception happens when the features instance is used to instantiate the dataset, **not** when when the wrong type is actually passed when the features is instantiated. When I was debugging the error, I didn't really consider it could be an issue with the features instance because it had instantiated fine. It's also not one of the more common exceptions caused by trying to use a non-list as a list.\r\n\r\nIt's also relatively easy to accidentally get a numpy array of class types (e.g. calling `unique()` on a pandas dataframe column). Additionally, passing in a `set` instead of the list (again, relatively easy because people may run `set(classes)` to generate uniques) causes an error when the features instance is used, albeit a slightly more obvious one.\r\n\r\nThe names list is already being processed and validated in the `__post_init__` method anyway, so it would not really be adding any complexity to check it is actually a list here too. I'm happy to contribute this change if you change your mind about whether it's worthwhile.", "I agree that it's not easy to debug this issue, so perhaps we could add some basic type checking (e.g. `not isinstance(names, list)` -> error) to make debugging easier. Feel free to submit a PR.\r\n\r\n> Additionally, passing in a set instead of the list (again, relatively easy because people may run set(classes) to generate uniques) causes an error when the features instance is used, albeit a slightly more obvious one.\r\n\r\n`set` is an unordered structure (it's ordered in Python 3.6+, but this is CPython's implementation detail), and the order of ClassLabel `names` matters, so this doesn't require a fix.", "What about checking for `Sequence` instead? I think users can pass a list or a tuple as well." ]
2022-12-05T12:59:03
2022-12-22T16:32:50
2022-12-22T16:32:50
CONTRIBUTOR
null
### Describe the bug If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error. ### Steps to reproduce the bug https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX TLDR: If I define my classes as: ``` my_classes = np.array(['one', 'two', 'three']) ``` Then this errors: ```py features = Features({'value': Value('string'), 'label': ClassLabel(names=my_classes)}) dataset = Dataset.from_list(my_data, features=features) ``` ``` ValueError Traceback (most recent call last) [<ipython-input-8-a8a9d53ec82f>](https://localhost:8080/#) in <module> ----> 1 dataset = Dataset.from_list(my_data, features=features) 11 frames [/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _asdict_inner(obj) 183 for f in fields(obj): 184 value = _asdict_inner(getattr(obj, f.name)) --> 185 if not f.init or value != f.default or f.metadata.get("include_in_asdict_even_if_is_default", False): 186 result[f.name] = value 187 return result ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` But this works: ``` features2 = Features({'value': Value('string'), 'label': ClassLabel(names=list(my_classes))}) dataset2 = Dataset.from_list(my_data, features=features2) ``` ### Expected behavior If I provide a numpy array of class names, I would expect either an error that the names list is the wrong type, or for it to be cast internally. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 Additionally: - Numpy version: 1.23.5
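A minimal sketch of the workaround described in the issue above: cast the NumPy array to a plain Python list before handing it to `ClassLabel`. The `my_data` rows here are made-up examples, since the issue does not show its contents.

```python
import numpy as np
from datasets import ClassLabel, Dataset, Features, Value

my_classes = np.array(["one", "two", "three"])
my_data = [
    {"value": "first", "label": "one"},   # made-up example rows
    {"value": "second", "label": "two"},
]

features = Features(
    {
        "value": Value("string"),
        # tolist() gives ClassLabel the List[str] it expects instead of an ndarray
        "label": ClassLabel(names=my_classes.tolist()),
    }
)

dataset = Dataset.from_list(my_data, features=features)
print(dataset.features["label"].names)  # ['one', 'two', 'three']
```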
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5332/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5331/comments
https://api.github.com/repos/huggingface/datasets/issues/5331/events
https://github.com/huggingface/datasets/pull/5331
1,473,146,738
PR_kwDODunzps5EKDpr
5,331
Support for multiple configs in packaged modules via metadata yaml info
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5331). All of your documentation changes will be reflected on that endpoint." ]
2022-12-02T16:43:44
2023-01-27T19:51:43
null
CONTRIBUTOR
null
will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 TODO: - [ ] cache dirs structure - [x] push_to_hub - create dirs and meta - [ ] make --save_info not rewrite configs_kwargs in readme (update test cli util) - [ ] get_config_names - [ ] update docstrings - [ ] refactor copypaste in get_modules - [ ] tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5331/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5331", "html_url": "https://github.com/huggingface/datasets/pull/5331", "diff_url": "https://github.com/huggingface/datasets/pull/5331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5331.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5329/comments
https://api.github.com/repos/huggingface/datasets/issues/5329/events
https://github.com/huggingface/datasets/pull/5329
1,471,999,125
PR_kwDODunzps5EGK3y
5,329
Clarify imagefolder is for small datasets
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think it's also reasonable to add the same note to the AudioFolder decription", "Thank you ! I think \"regular\" is more appropriate than \"small\". It can easily scale to a few thousands of images - just not millions x)", "Replaced \"small\" with \"several thousand\" since what is considered \"regular\" and even \"small\" can be kind of vague!" ]
2022-12-01T21:47:29
2022-12-06T17:20:04
2022-12-06T17:16:53
MEMBER
null
Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small scale image datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5329/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5329/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5329", "html_url": "https://github.com/huggingface/datasets/pull/5329", "diff_url": "https://github.com/huggingface/datasets/pull/5329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5329.patch", "merged_at": "2022-12-06T17:16:53" }
true
https://api.github.com/repos/huggingface/datasets/issues/5328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5328/comments
https://api.github.com/repos/huggingface/datasets/issues/5328/events
https://github.com/huggingface/datasets/pull/5328
1,471,661,437
PR_kwDODunzps5EFAyT
5,328
Fix docs building for main
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813", "Build documentation for main branch was triggered after this PR being merged: https://github.com/huggingface/datasets/actions/runs/3603370082/jobs/6071482470" ]
2022-12-01T17:07:45
2022-12-02T16:29:00
2022-12-02T16:26:00
MEMBER
null
This PR reverts the triggering event for building documentation introduced by: - #5250 Fix #5326.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5328/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5328", "html_url": "https://github.com/huggingface/datasets/pull/5328", "diff_url": "https://github.com/huggingface/datasets/pull/5328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5328.patch", "merged_at": "2022-12-02T16:26:00" }
true
https://api.github.com/repos/huggingface/datasets/issues/5327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5327/comments
https://api.github.com/repos/huggingface/datasets/issues/5327/events
https://github.com/huggingface/datasets/pull/5327
1,471,657,247
PR_kwDODunzps5EE_3Q
5,327
Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint." ]
2022-12-01T17:05:23
2023-01-23T12:48:29
null
CONTRIBUTOR
null
will fix #5315
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5327/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5327", "html_url": "https://github.com/huggingface/datasets/pull/5327", "diff_url": "https://github.com/huggingface/datasets/pull/5327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5327.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5326/comments
https://api.github.com/repos/huggingface/datasets/issues/5326/events
https://github.com/huggingface/datasets/issues/5326
1,471,634,168
I_kwDODunzps5Xt1r4
5,326
No documentation for main branch is built
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-12-01T16:50:58
2022-12-02T16:26:01
2022-12-02T16:26:01
MEMBER
null
Since: - #5250 - Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6 the docs for main branch are no longer built. The change introduced only triggers the docs building for releases.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5326/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5325/comments
https://api.github.com/repos/huggingface/datasets/issues/5325/events
https://github.com/huggingface/datasets/issues/5325
1,471,536,822
I_kwDODunzps5Xtd62
5,325
map(...batch_size=None) for IterableDataset
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix.", "@mariosasko as this is something simple maybe I can include it as part of https://github.com/huggingface/datasets/pull/5311? Let me know :+1:", "#self-assign", "Feel free to close this @lhoestq as part of https://github.com/huggingface/datasets/pull/5336 :hugs:", "Thanks again :)\r\n\r\n> For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.\r\n\r\nThis is interesting as well, if anyone wants to explore" ]
2022-12-01T15:43:42
2022-12-07T15:54:43
2022-12-07T15:54:42
CONTRIBUTOR
null
### Feature request Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too. ### Motivation Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice. One is that load_dataset(...) can return either IterableDataset or Dataset. mypy will then complain if batch_size=None even if we know it is Dataset. Of course we can do: assert isinstance(d, datasets.DatasetDict) But it is a mild inconvenience. What's more annoying is that whenever we use something like e.g. `combine_datasets(...)`, we end up with the union again, and so have to do the assert again. Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset. For practical use, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this. ### Your contribution Not this time.
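A hedged sketch of the requested usage, assuming `IterableDataset.map` accepts `batch_size=None` the same way `Dataset.map` does (the support was added later in the fix linked in the comments). The dataset name is just an illustrative public dataset; the mapped function is written so it works for any batch size.

```python
from itertools import islice
from datasets import load_dataset

# Streaming load returns an IterableDataset rather than a map-style Dataset.
streamed = load_dataset("rotten_tomatoes", split="train", streaming=True)

def uppercase(batch):
    # Written to work for any batch size, including a single large batch.
    return {"text": [text.upper() for text in batch["text"]]}

# The feature request: allow batch_size=None here, as Dataset.map already does.
streamed = streamed.map(uppercase, batched=True, batch_size=None)

for example in islice(streamed, 3):
    print(example["text"][:40])
```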
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5325/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5324/comments
https://api.github.com/repos/huggingface/datasets/issues/5324/events
https://github.com/huggingface/datasets/issues/5324
1,471,524,512
I_kwDODunzps5Xta6g
5,324
Fix docstrings and types in documentation that appears on the website
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "I agree we have a mess with docstrings...", "Ok, I believe we've cleaned up most of the old syntax we were using for the user-facing docs! There are still a couple of `:obj:`'s and `:class:` floating around in the docstrings we don't expose that I'll track down :)" ]
2022-12-01T15:34:53
2022-12-13T19:03:55
null
CONTRIBUTOR
null
While I was working on https://github.com/huggingface/datasets/pull/5313, I noticed that we have a mess in how we annotate types and format args and return values in the code, and some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website. It would be nice someday, maybe before releasing datasets 3.0.0, to unify it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5324/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5323/comments
https://api.github.com/repos/huggingface/datasets/issues/5323/events
https://github.com/huggingface/datasets/issues/5323
1,471,518,803
I_kwDODunzps5XtZhT
5,323
Duplicated Keys in Taskmaster-2 Dataset
{ "login": "liaeh", "id": 52380283, "node_id": "MDQ6VXNlcjUyMzgwMjgz", "avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liaeh", "html_url": "https://github.com/liaeh", "followers_url": "https://api.github.com/users/liaeh/followers", "following_url": "https://api.github.com/users/liaeh/following{/other_user}", "gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liaeh/subscriptions", "organizations_url": "https://api.github.com/users/liaeh/orgs", "repos_url": "https://api.github.com/users/liaeh/repos", "events_url": "https://api.github.com/users/liaeh/events{/privacy}", "received_events_url": "https://api.github.com/users/liaeh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @liaeh.\r\n\r\nWe are having a look at it. ", "I have transferred the discussion to the Community tab of the dataset: https://huggingface.co/datasets/taskmaster2/discussions/1" ]
2022-12-01T15:31:06
2022-12-01T16:26:06
2022-12-01T16:26:06
NONE
null
### Describe the bug Loading certain splits () of the taskmaster-2 dataset fails because of a DuplicatedKeysError. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine. Output: ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("taskmaster2", "music") ``` Output: ``` --------------------------------------------------------------------------- DuplicatedKeysError Traceback (most recent call last) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1532, in GeneratorBasedBuilder._prepare_split_single(self, arg) [1531](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1530) example = self.info.features.encode_example(record) if self.info.features is not None else record -> [1532](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1531) writer.write(example, key) [1533](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1532) num_examples_progress_update += 1 File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:475, in ArrowWriter.write(self, example, key, writer_batch_size) [474](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=473) if self._check_duplicates: --> [475](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=474) self.check_duplicate_keys() [476](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=475) # Re-intializing to empty list for next batch File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self) [486](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=485) duplicate_key_indices = [ [487](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=486) str(self._num_examples + index) [488](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=487) for index, (duplicate_hash, _) in enumerate(self.hkey_record) [489](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=488) if duplicate_hash == hash [490](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=489) ] --> [492](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=491) raise DuplicatedKeysError(key, duplicate_key_indices) [493](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=492) else: DuplicatedKeysError: Found multiple examples generated with the same key The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735 During handling of the above exception, another exception occurred: DuplicatedKeysError Traceback (most recent call last) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1541, in GeneratorBasedBuilder._prepare_split_single(self, arg) 
[1540](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1539) num_shards = shard_id + 1 -> [1541](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1540) num_examples, num_bytes = writer.finalize() [1542](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1541) writer.close() File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:563, in ArrowWriter.finalize(self, close_stream) [562](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=561) if self._check_duplicates: --> [563](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=562) self.check_duplicate_keys() [564](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=563) # Re-intializing to empty list for next batch File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self) [486](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=485) duplicate_key_indices = [ [487](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=486) str(self._num_examples + index) [488](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=487) for index, (duplicate_hash, _) in enumerate(self.hkey_record) [489](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=488) if duplicate_hash == hash [490](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=489) ] --> [492](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=491) raise DuplicatedKeysError(key, duplicate_key_indices) [493](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=492) else: DuplicatedKeysError: Found multiple examples generated with the same key The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735 The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[23], line 1 ----> 1 dataset = load_dataset("taskmaster2", "music") File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py:1741, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) [1738](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1737) try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES [1740](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1739) # Download and prepare data -> [1741](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1740) builder_instance.download_and_prepare( [1742](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1741) 
download_config=download_config, [1743](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1742) download_mode=download_mode, [1744](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1743) ignore_verifications=ignore_verifications, [1745](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1744) try_from_hf_gcs=try_from_hf_gcs, [1746](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1745) use_auth_token=use_auth_token, [1747](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1746) num_proc=num_proc, [1748](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1747) ) [1750](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1749) # Build dataset for splits [1751](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1750) keep_in_memory = ( [1752](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1751) keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) [1753](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1752) ) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:822, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) [820](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=819) if num_proc is not None: [821](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=820) prepare_split_kwargs["num_proc"] = num_proc --> [822](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=821) self._download_and_prepare( [823](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=822) dl_manager=dl_manager, [824](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=823) verify_infos=verify_infos, [825](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=824) **prepare_split_kwargs, [826](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=825) **download_and_prepare_kwargs, [827](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=826) ) [828](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=827) # Sync info [829](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=828) self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1555, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 
[1554](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1553) def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): -> [1555](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1554) super()._download_and_prepare( [1556](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1555) dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs [1557](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1556) ) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:913, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) [909](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=908) split_dict.add(split_generator.split_info) [911](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=910) try: [912](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=911) # Prepare split will record examples associated to the split --> [913](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=912) self._prepare_split(split_generator, **prepare_split_kwargs) [914](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=913) except OSError as e: [915](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=914) raise OSError( [916](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=915) "Cannot find data file. 
" [917](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=916) + (self.manual_download_instructions or "") [918](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=917) + "\nOriginal error:\n" [919](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=918) + str(e) [920](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=919) ) from None File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1396, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) [1394](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1393) gen_kwargs = split_generator.gen_kwargs [1395](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1394) job_id = 0 -> [1396](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1395) for job_id, done, content in self._prepare_split_single( [1397](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1396) {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args} [1398](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1397) ): [1399](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1398) if done: [1400](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1399) result = content File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1550, in GeneratorBasedBuilder._prepare_split_single(self, arg) [1548](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1547) if isinstance(e, SchemaInferenceError) and e.__context__ is not None: [1549](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1548) e = e.__context__ -> [1550](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1549) raise DatasetGenerationError("An error occurred while generating the dataset") from e [1552](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1551) yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior Loads the dataset ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5323/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5322/comments
https://api.github.com/repos/huggingface/datasets/issues/5322/events
https://github.com/huggingface/datasets/pull/5322
1,471,502,162
PR_kwDODunzps5EEeQP
5,322
Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol`
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-01T15:19:28
2022-12-14T16:37:16
2022-12-14T16:33:30
CONTRIBUTOR
null
Currently `download_and_extract` doesn't throw an error when it is used with `.tar` files in streaming mode, because `_get_extraction_protocol` doesn't raise for them (as it does for `.tar.gz` and `.tgz`). Instead, `_get_extraction_protocol` returns a formatted URL as if we supported the tar protocol, which we don't. That means dataset scripts would attempt to load the `.tar` files and fail during example generation (after `download_and_extract` has run). So this PR raises the error for `.tar` files too.
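As a hedged illustration of why this matters in streaming setups (placeholder URL, not code from this PR): a plain `.tar` cannot be transparently extracted, so its members are usually read with `iter_archive` instead of expecting `download_and_extract` to yield an extracted directory.

```python
# Minimal sketch with a placeholder URL; `iter_archive` reads tar members sequentially,
# which works with both the regular and the streaming download manager, unlike
# extracting a plain ".tar" on the fly.
from datasets import DownloadManager

dl_manager = DownloadManager()
archive_path = dl_manager.download("https://example.com/data.tar")  # placeholder URL
for member_name, member_file in dl_manager.iter_archive(archive_path):
    print(member_name, len(member_file.read()))
    break
```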
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5322/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5322", "html_url": "https://github.com/huggingface/datasets/pull/5322", "diff_url": "https://github.com/huggingface/datasets/pull/5322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5322.patch", "merged_at": "2022-12-14T16:33:30" }
true
https://api.github.com/repos/huggingface/datasets/issues/5321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5321/comments
https://api.github.com/repos/huggingface/datasets/issues/5321/events
https://github.com/huggingface/datasets/pull/5321
1,471,430,667
PR_kwDODunzps5EEOhE
5,321
Fix loading from HF GCP cache
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126" ]
2022-12-01T14:39:06
2022-12-01T16:10:09
2022-12-01T16:07:02
MEMBER
null
As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4, it's not possible to download a cached version of Wikipedia from the HF GCP cache. I fixed it and added an integration test (runs in ~10 sec).
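For context, a hedged sketch of the affected workflow (config name as of late 2022): loading a preprocessed Wikipedia dump, which `datasets` can serve from the HF GCP cache instead of reprocessing the dump locally.

```python
# Hedged sketch of the use case being restored: this should reuse the preprocessed
# Arrow files from the HF GCP cache rather than rebuilding Wikipedia locally.
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.en", split="train")
print(wiki[0]["title"])
```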
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5321/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5321", "html_url": "https://github.com/huggingface/datasets/pull/5321", "diff_url": "https://github.com/huggingface/datasets/pull/5321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5321.patch", "merged_at": "2022-12-01T16:07:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/5320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5320/comments
https://api.github.com/repos/huggingface/datasets/issues/5320/events
https://github.com/huggingface/datasets/pull/5320
1,471,360,910
PR_kwDODunzps5ED_UQ
5,320
[Extract] Place the lock file next to the destination directory
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-01T13:55:49
2022-12-01T15:36:44
2022-12-01T15:33:58
MEMBER
null
Previously the lock file was placed next to the archive being extracted, but the archive can be in a read-only directory, as noticed in https://github.com/huggingface/datasets/issues/5295. Therefore I moved the lock file location next to the destination directory, which is required to have write permissions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5320/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5320", "html_url": "https://github.com/huggingface/datasets/pull/5320", "diff_url": "https://github.com/huggingface/datasets/pull/5320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5320.patch", "merged_at": "2022-12-01T15:33:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5319/comments
https://api.github.com/repos/huggingface/datasets/issues/5319/events
https://github.com/huggingface/datasets/pull/5319
1,470,945,515
PR_kwDODunzps5ECkfc
5,319
Fix Text sample_by paragraph
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-01T09:08:09
2022-12-01T15:21:44
2022-12-01T15:19:00
MEMBER
null
Fix #5316.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5319/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5319", "html_url": "https://github.com/huggingface/datasets/pull/5319", "diff_url": "https://github.com/huggingface/datasets/pull/5319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5319.patch", "merged_at": "2022-12-01T15:19:00" }
true
https://api.github.com/repos/huggingface/datasets/issues/5318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5318/comments
https://api.github.com/repos/huggingface/datasets/issues/5318/events
https://github.com/huggingface/datasets/pull/5318
1,470,749,750
PR_kwDODunzps5EB6RM
5,318
Origin/fix missing features error
{ "login": "eunseojo", "id": 12104720, "node_id": "MDQ6VXNlcjEyMTA0NzIw", "avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eunseojo", "html_url": "https://github.com/eunseojo", "followers_url": "https://api.github.com/users/eunseojo/followers", "following_url": "https://api.github.com/users/eunseojo/following{/other_user}", "gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}", "starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions", "organizations_url": "https://api.github.com/users/eunseojo/orgs", "repos_url": "https://api.github.com/users/eunseojo/repos", "events_url": "https://api.github.com/users/eunseojo/events{/privacy}", "received_events_url": "https://api.github.com/users/eunseojo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "please review :) @lhoestq @ola13 thankoo", "Thanks :) I just updated the test to make sure it works even when there's a column missing, and did a minor change to json.py to add the missing columns for the other kinds of JSON files as well (I moved the code to`self._cast_table`)", "Thanks Unso! If @lhoestq is happy then I'm also happy :D", "When I noticed the ping, this PR had already been merged...\r\n\r\nLuckily, PyArrow's `read_json` behaves the same when `explicit_schema` is given via `ParseOptions`, so I'm okay with this change (our JSON loader doesn't use `read_json` for decoding JSON in some scenarios, so this manual approach is the right one).\r\n" ]
2022-12-01T06:18:39
2022-12-12T19:06:42
2022-12-04T05:49:39
CONTRIBUTOR
null
This fixes the problem where the `load_dataset` function reads a file with "features" provided but some read batches are missing columns that show up later. For instance, the provided "features" require columns A, B, C but only columns B and C appear. This PR fixes that by adding column A filled with nulls.
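Below is a hedged sketch of the scenario this addresses (the file name and column names are hypothetical): `features` declares a column that some JSON batches lack, and with this fix the missing column is filled with nulls instead of raising.

```python
# Hedged sketch; "data.jsonl" and the column names are hypothetical.
from datasets import Features, Value, load_dataset

features = Features({"A": Value("string"), "B": Value("string"), "C": Value("string")})
ds = load_dataset("json", data_files={"train": "data.jsonl"}, features=features)
# JSON objects that only contain "B" and "C" now come back with "A" set to None.
```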
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5318/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5318", "html_url": "https://github.com/huggingface/datasets/pull/5318", "diff_url": "https://github.com/huggingface/datasets/pull/5318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5318.patch", "merged_at": "2022-12-04T05:49:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5317/comments
https://api.github.com/repos/huggingface/datasets/issues/5317/events
https://github.com/huggingface/datasets/issues/5317
1,470,390,164
I_kwDODunzps5XpF-U
5,317
`ImageFolder` performs poorly with large datasets
{ "login": "salieri", "id": 1086393, "node_id": "MDQ6VXNlcjEwODYzOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salieri", "html_url": "https://github.com/salieri", "followers_url": "https://api.github.com/users/salieri/followers", "following_url": "https://api.github.com/users/salieri/following{/other_user}", "gists_url": "https://api.github.com/users/salieri/gists{/gist_id}", "starred_url": "https://api.github.com/users/salieri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salieri/subscriptions", "organizations_url": "https://api.github.com/users/salieri/orgs", "repos_url": "https://api.github.com/users/salieri/repos", "events_url": "https://api.github.com/users/salieri/events{/privacy}", "received_events_url": "https://api.github.com/users/salieri/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! ImageFolder is made for small scale datasets indeed. For large scale image datasets you better group your images in TAR archives or Arrow/Parquet files. This is true not just for ImageFolder loading performance, but also because having millions of files is not ideal for your filesystem or when moving the data around.\r\n\r\nOption 1. use TAR archives\r\n\r\nI'd suggest you to take a look at how we load [Imagenet](https://huggingface.co/datasets/imagenet-1k/tree/main) for example. The dataset is sharded in multiple TAR archives and there is a [script](https://huggingface.co/datasets/imagenet-1k/blob/main/imagenet-1k.py) that iterates over the archives to load the images.\r\n\r\nOption 2. use Arrow/Parquet\r\n\r\nYou can load your images as an Arrow Dataset with\r\n```python\r\nfrom datasets import Dataset, Image, load_from_disk, load_dataset\r\n\r\nds = Dataset.from_dict({\"image\": list(glob.glob(\"path/to/dir/**/*.jpg\"))})\r\n\r\ndef add_metadata(example):\r\n ...\r\n\r\nds = ds.map(add_metadata, num_proc=16) # num_proc for multiprocessing\r\nds = ds.cast_column(\"image\", Image())\r\n\r\n# save as Arrow locally\r\nds.save_to_disk(\"output_dir\")\r\nreloaded = load_from_disk(\"output_dir\")\r\n\r\n# OR save as Parquet on the HF Hub\r\nds.push_to_hub(\"username/dataset_name\")\r\nreloaded = load_dataset(\"username/dataset_name\")\r\n# reloaded = load_dataset(\"username/dataset_name\", num_proc=16) # to use multiprocessing\r\n```\r\n\r\nPS: maybe we can actually have something similar to ImageFolder but for image archives at one point ?", "@lhoestq Thanks!\r\n\r\nPerhaps it'd be worth adding a note on the documentation that `ImageFolder` is not intended for large datasets? This limitation is not intuitively obvious to someone who has not used it before, I think.", "Thanks for the feedback @salieri! I opened #5329 to make it clear `ImageFolder` is not intended for large datasets. Please feel free to comment if you have any other feedback! 🙂 " ]
2022-12-01T00:04:21
2022-12-01T21:49:26
null
NONE
null
### Describe the bug While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolders when scanning a directory structure with large number of images. ## Setup * Nested directories (5 levels deep) * 3M+ images * 1 `metadata.jsonl` file ## Performance Degradation Point 1 Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85). One scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance. As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal. ## Performance Degradation Point 2 The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`. It runs for a long time (60min+), consuming significant amounts of RAM – even more than the point 1 above. Based on `iostat -d 2`, it performs **zero** disk operations, which to me suggests that there is a code based bottleneck there that could be sorted out. ### Steps to reproduce the bug ```python from datasets import load_dataset import os import huggingface_hub dataset = load_dataset( 'imagefolder', data_dir='/some/path', # just to spell it out: split=None, drop_labels=True, keep_in_memory=False ) dataset.push_to_hub('account/dataset', private=True) ``` ### Expected behavior While it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` would by default have a setup that can scale to large datasets. Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does? ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.7.10 - PyArrow version: 10.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5317/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5316/comments
https://api.github.com/repos/huggingface/datasets/issues/5316/events
https://github.com/huggingface/datasets/issues/5316
1,470,115,681
I_kwDODunzps5XoC9h
5,316
Bug in sample_by="paragraph"
{ "login": "adampauls", "id": 1243668, "node_id": "MDQ6VXNlcjEyNDM2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adampauls", "html_url": "https://github.com/adampauls", "followers_url": "https://api.github.com/users/adampauls/followers", "following_url": "https://api.github.com/users/adampauls/following{/other_user}", "gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}", "starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adampauls/subscriptions", "organizations_url": "https://api.github.com/users/adampauls/orgs", "repos_url": "https://api.github.com/users/adampauls/repos", "events_url": "https://api.github.com/users/adampauls/events{/privacy}", "received_events_url": "https://api.github.com/users/adampauls/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @adampauls.\r\n\r\nWe are having a look at it. " ]
2022-11-30T19:24:13
2022-12-01T15:19:02
2022-12-01T15:19:02
NONE
null
### Describe the bug I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the last iteration. ### Steps to reproduce the bug ``` > cat test.txt a b c d e f ``` ```python >>> import datasets >>> datasets.load_dataset("text", data_files={"train":"test.txt"}, sample_by="paragraph") ``` This will go on forever. ### Expected behavior Terminates very quickly. ### Environment info `version = "2.6.1"` but I think the bug is still there on main.
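For illustration, a standalone sketch of the termination issue being described (hypothetical helper, not the actual `text.py` code): the loop only exits if `batch` is reassigned from `f.read()` each iteration, so it becomes empty at end of file; that is exactly what the suggested `batch = f.read(self.config.chunksize)` achieves.

```python
# Standalone sketch of the reported bug, not the actual text.py source: reading by
# paragraph in fixed-size chunks only terminates if `batch` is reassigned from
# f.read() each iteration, so it becomes falsy ("") at end of file.
chunksize = 10 << 20  # hypothetical chunk size (~10 MiB)

def iter_paragraphs(path):
    with open(path, encoding="utf-8") as f:
        batch = f.read(chunksize)
        while batch:
            yield from (p for p in batch.split("\n\n") if p)
            batch = f.read(chunksize)  # reassign so read() returns "" at EOF and the loop stops

for paragraph in iter_paragraphs("test.txt"):
    print(repr(paragraph))
```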
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5316/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5315/comments
https://api.github.com/repos/huggingface/datasets/issues/5315/events
https://github.com/huggingface/datasets/issues/5315
1,470,026,797
I_kwDODunzps5XntQt
5,315
Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "EDIT:\r\nI think in this case, the metadata files (either README or JSON) should not be read (i.e. `self.info.splits` should be None).\r\n\r\nOne idea: \r\n- I think ideally we should set this behavior when we pass `--save_info` to the CLI `test`\r\n- However, currently, the builder is unaware of this: `save_info` arg is not passed to it", "> I think in this case\r\n\r\n@albertvillanova You mean in cases when the script was changed? \r\n\r\nI suggest that we:\r\n* add a check on the slice (like 'split_name[n%]) kind of format here: https://github.com/huggingface/datasets/blob/main/src/datasets/splits.py#L523 to catch things like this. \r\n* Error here happens before splits verification, but in `_prepare_split`, and `_prepare_split` doesn't perform any verification and don't know about it. so we can pass this parameter and take splits from `split_generator`, not from `split.info` in case when `verify_infos` is False\r\n* we can check if split **names** from split_generators and self.info.splits are the same **before** preparing splits (if `verify_info=True`) so that we don't spend time on generating unwanted data. \r\n* provide some user-friendly warnings about `ignore_verifications` parameter so that users know that if something is not matching they can ignore it\r\n\r\nI started it here: https://github.com/huggingface/datasets/pull/5327/files\r\n\r\nWhat do you think @albertvillanova ?", "I edited my previous comment:\r\n- First I proposed setting `self.info.splits` to None when `ignore_verifications=True`\r\n - I thought it was the easiest implementation because `ignore_verifications` is passed to `DatasetBuilder.download_and_prepare`\r\n - However, afterwards, I realized this might not be a good idea for this use case:\r\n - A user wants to optimize the loading of the dataset, and passes `ignore_verifications=False` to avoid all the verifications\r\n - In this case, we want `self.info.splits` to be read from metadata file\r\n- Then, I thought that it might be better to set `self.info.splits` to None when we pass `--save_info` to the CLI test: if we are going to save the info to the metadata file, it makes no sense to read the info from the metadata file\r\n - This implementation is not so easy because the Builder knows nothing about `--save_info`\r\n\r\nI agree with you there are 2 things to be addressed here:\r\n- One is what I have just commented: `self.info.splits` should be None in this case\r\n- The other, a validation should be implemented when calling `make_file_instructions` and/or `SplitDict.__getitem__`, so that when passing \"training\" to it, we get a more descriptive error other than `TypeError: expected str, bytes or os.PathLike object, not NoneType` " ]
2022-11-30T18:02:15
2022-12-02T07:02:53
null
CONTRIBUTOR
null
### Describe the bug If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails. That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48. ### Steps to reproduce the bug 1. create a dataset with a custom split that returns, for example, only `"train"` split in `_splits_generators'`. specifically, if really want to reproduce, copy `https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py 2. run `datasets-cli test dataset_script.py --save_info --all_configs` - this would generate metadata yaml in `README.md` that would contain info about splits, for example, like this: ``` splits: - name: train num_bytes: 2973286 num_examples: 19747 ``` 3. make changes to your script so that it returns another set of splits, for example, `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271)) 4. run `load_dataset` and get the following error: ```python Traceback (most recent call last): File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run builder.download_and_prepare( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split split_info = self.info.splits[split_generator.name] File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__ instructions = make_file_instructions( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions name2filenames = { File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp> info.name: filenames_for_dataset_split( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split prefix = filename_prefix_for_split(dataset_name, split) File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split if os.path.basename(name) != name: File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename p = os.fspath(p) TypeError: expected str, bytes or os.PathLike object, not NoneType ``` 5. bonus: try to regenerate metadata in `README.md` with `datasets-cli` as in step 2 and get the same error. This is because `dataset.info.splits` contains only `"train"` split so when we are doing `self.info.splits[split_generator.name]` it tries to infer smth like `info.splits['train[50%]']` and that's not the case and it fails. 
### Expected behavior to be discussed? This can be solved by removing splits information from metadata file first. But I wonder if there is a better way. ### Environment info - Datasets version: 2.7.1 - Python version: 3.8.13
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5315/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5314/comments
https://api.github.com/repos/huggingface/datasets/issues/5314/events
https://github.com/huggingface/datasets/issues/5314
1,469,685,118
I_kwDODunzps5XmZ1-
5,314
Datasets: classification_report() got an unexpected keyword argument 'suffix'
{ "login": "JonathanAlis", "id": 42126634, "node_id": "MDQ6VXNlcjQyMTI2NjM0", "avatar_url": "https://avatars.githubusercontent.com/u/42126634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JonathanAlis", "html_url": "https://github.com/JonathanAlis", "followers_url": "https://api.github.com/users/JonathanAlis/followers", "following_url": "https://api.github.com/users/JonathanAlis/following{/other_user}", "gists_url": "https://api.github.com/users/JonathanAlis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JonathanAlis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JonathanAlis/subscriptions", "organizations_url": "https://api.github.com/users/JonathanAlis/orgs", "repos_url": "https://api.github.com/users/JonathanAlis/repos", "events_url": "https://api.github.com/users/JonathanAlis/events{/privacy}", "received_events_url": "https://api.github.com/users/JonathanAlis/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This seems similar to https://github.com/huggingface/datasets/issues/2512 Can you try to update seqeval ? ", "@JonathanAlis also note that the metrics are deprecated in our `datasets` library.\r\n\r\nPlease, use the new library 🤗 Evaluate instead: https://huggingface.co/docs/evaluate" ]
2022-11-30T14:01:03
2022-12-01T15:00:46
null
NONE
null
https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py > import datasets predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] seqeval = datasets.load_metric("seqeval") results = seqeval.compute(predictions=predictions, references=references) print(list(results.keys())) print(results["overall_f1"]) print(results["PER"]["f1"]) It raises the error: > TypeError: classification_report() got an unexpected keyword argument 'suffix' For context, versions on my pip list -v > datasets 1.12.1 seqeval 1.2.2
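A hedged sketch of the fix suggested in the comments above: update `seqeval` and switch from the deprecated `datasets.load_metric` to the 🤗 Evaluate library.

```python
# Hedged sketch of the suggested migration: pip install evaluate seqeval, then load
# the metric through `evaluate` instead of the deprecated datasets.load_metric.
import evaluate

predictions = [["O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]
references = [["O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]

seqeval = evaluate.load("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"], results["PER"]["f1"])
```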
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5314/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5313/comments
https://api.github.com/repos/huggingface/datasets/issues/5313/events
https://github.com/huggingface/datasets/pull/5313
1,468,484,136
PR_kwDODunzps5D6Qfb
5,313
Fix description of streaming in the docs
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T18:00:28
2022-12-01T14:55:30
2022-12-01T14:00:34
CONTRIBUTOR
null
We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written? Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation cc @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5313/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5313", "html_url": "https://github.com/huggingface/datasets/pull/5313", "diff_url": "https://github.com/huggingface/datasets/pull/5313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5313.patch", "merged_at": "2022-12-01T14:00:34" }
true
https://api.github.com/repos/huggingface/datasets/issues/5312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5312/comments
https://api.github.com/repos/huggingface/datasets/issues/5312/events
https://github.com/huggingface/datasets/pull/5312
1,468,352,562
PR_kwDODunzps5D5zxI
5,312
Add DatasetDict.to_pandas
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The current implementation is what I had in mind, i.e. concatenate all splits by default.\r\n\r\nHowever, I think most tabular datasets would come as a single split. So for that usecase, it wouldn't change UX if we raise when there are more than one splits.\r\n\r\nAnd for multiple splits, the user either passes a list, or they can pass `splits=\"all\"` to have all splits concatenated.", "I think it's better to raise an error in cases when there are multiple splits but no split is specified so that users know for sure with which data they are working. I imagine a case when a user loads a dataset that they don't know much about (like what splits it has), and if they get a concatenation of everything, it might lead to incorrect processing or interpretations and it would be hard to notice it.\r\n(\"explicit is better than implicit\")", "I just changed to raise an error if there are multiple splits. The error shows an example of how to choose a split to convert.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5312). All of your documentation changes will be reflected on that endpoint.", "Thanks for the review, I've updated the type hint and added a line to raise an error on bad splits :)", "Merging https://github.com/huggingface/datasets/pull/5301 would eliminate the need for this PR, no?\r\n\r\nIn the meantime, I find the current API cleaner.", "This solution is simpler than https://github.com/huggingface/datasets/pull/5301 and covers most cases for tabular datasets, so I'm in favor of merging this one and put https://github.com/huggingface/datasets/pull/5301 on stand by", "Let me know if it sounds good to you @mariosasko @albertvillanova :)", "I'm still not convinced. If `DatasetDict` needs this method and there is no other way, then IMO it would make more sense to return a dictionary with the splits converted to `pd.DataFrame`. ", "@mariosasko the issue we're dealing with is that in tabular scenarios, we often don't have splits in the dataset, and imposing that concept to people dealing with the library hampers adoption.", "@adrinjalali This PR proposes a solution inconsistent with the existing API (in other words, a solution that clutters our API 🙂). Moreover, our library primarily focuses on larger-than-RAM datasets, and tabular datasets don't (directly) fall into this group.\r\n\r\nInstead of the temporary \"fix\" proposed here, it makes much more sense to align `load_dataset` with both tabular and DL workflows \"in a consistent way\", so I suggest we continue our discussion from https://github.com/huggingface/datasets/issues/5189 to have this resolved by version 3.0.", "closing this one for now" ]
2022-11-29T16:30:02
2023-01-25T17:33:43
2023-01-25T17:33:42
MEMBER
null
From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do ```python df = load_dataset(...)["train"].to_pandas() ``` because many datasets are not split. In this PR I added `to_pandas` to `DatasetDict` which returns the DataFrame: If there's only one split, you don't need to specify the split name: ```python df = load_dataset(...).to_pandas() ``` EDIT: and if a dataset has multiple splits: ```python df = load_dataset(...).to_pandas(splits=["train", "test"]) # or df = load_dataset(...).to_pandas(splits="all") # raises an error because you need to select the split(s) to convert load_dataset(...).to_pandas() ``` I do have one question though @merveenoyan @adrinjalali @mariosasko: Should we raise an error if there are multiple splits and ask the user to choose one explicitly ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5312/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5312/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5312", "html_url": "https://github.com/huggingface/datasets/pull/5312", "diff_url": "https://github.com/huggingface/datasets/pull/5312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5312.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5311/comments
https://api.github.com/repos/huggingface/datasets/issues/5311/events
https://github.com/huggingface/datasets/pull/5311
1,467,875,153
PR_kwDODunzps5D4Mm3
5,311
Add `features` param to `IterableDataset.map`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T11:08:34
2022-12-06T15:45:02
2022-12-06T15:42:04
CONTRIBUTOR
null
## Description As suggested by @lhoestq in #3888, we should be adding the param `features` to `IterableDataset.map` so that the features can be preserved (not turned into `None` as that's the default behavior) whenever the user passes those as param, so as to be consistent with `Dataset.map`, as it provides the `features` param so that those are not inferred by default, but specified by the user, and later validated by `ArrowWriter`. This is internally handled already by the functions relying on `IterableDataset.map` such as `rename_column`, `rename_columns`, and `remove_columns` as described in #5287. ## Usage Example ```python from datasets import load_dataset, Features ds = load_dataset("rotten_tomatoes", split="validation", streaming=True) print(ds.info.features) ds = ds.map( lambda x: {"target": x["label"]}, features=Features( {"target": ds.info.features["label"], "label": ds.info.features["label"], "text": ds.info.features["text"]} ), ) print(ds.info.features) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5311/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5311", "html_url": "https://github.com/huggingface/datasets/pull/5311", "diff_url": "https://github.com/huggingface/datasets/pull/5311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5311.patch", "merged_at": "2022-12-06T15:42:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/5310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5310/comments
https://api.github.com/repos/huggingface/datasets/issues/5310/events
https://github.com/huggingface/datasets/pull/5310
1,467,719,635
PR_kwDODunzps5D3rGw
5,310
Support xPath for Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T09:20:47
2022-11-30T12:00:09
2022-11-30T11:57:16
MEMBER
null
This PR implements a string representation of `xPath`, which is valid for local paths (also windows) and remote URLs. Additionally, some `os.path` methods are fixed for remote URLs on Windows machines. Now, on Windows machines: ```python In [2]: str(xPath("C:\\dir\\file.txt")) Out[2]: 'C:\\dir\\file.txt' In [3]: str(xPath("http://domain.com/file.txt")) Out[3]: 'http://domain.com/file.txt' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5310/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5310", "html_url": "https://github.com/huggingface/datasets/pull/5310", "diff_url": "https://github.com/huggingface/datasets/pull/5310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5310.patch", "merged_at": "2022-11-30T11:57:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5309/comments
https://api.github.com/repos/huggingface/datasets/issues/5309/events
https://github.com/huggingface/datasets/pull/5309
1,466,758,987
PR_kwDODunzps5D0g1y
5,309
Close stream in `ArrowWriter.finalize` before inference error
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-28T16:59:39
2022-12-07T12:55:20
2022-12-07T12:52:15
CONTRIBUTOR
null
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5309/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5309", "html_url": "https://github.com/huggingface/datasets/pull/5309", "diff_url": "https://github.com/huggingface/datasets/pull/5309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5309.patch", "merged_at": "2022-12-07T12:52:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/5308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5308/comments
https://api.github.com/repos/huggingface/datasets/issues/5308/events
https://github.com/huggingface/datasets/pull/5308
1,466,552,281
PR_kwDODunzps5Dz0Tv
5,308
Support `topdown` parameter in `xwalk`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I like the `kwargs` approach, thanks!" ]
2022-11-28T14:42:41
2022-12-09T12:58:55
2022-12-09T12:55:59
CONTRIBUTOR
null
Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5308/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5308", "html_url": "https://github.com/huggingface/datasets/pull/5308", "diff_url": "https://github.com/huggingface/datasets/pull/5308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5308.patch", "merged_at": "2022-12-09T12:55:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/5307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5307/comments
https://api.github.com/repos/huggingface/datasets/issues/5307/events
https://github.com/huggingface/datasets/pull/5307
1,466,477,427
PR_kwDODunzps5Dzj8r
5,307
Use correct dataset type in `from_generator` docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-28T13:59:10
2022-11-28T15:30:37
2022-11-28T15:27:26
CONTRIBUTOR
null
Use the correct dataset type in the `from_generator` docs (example with sharding).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5307/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5307", "html_url": "https://github.com/huggingface/datasets/pull/5307", "diff_url": "https://github.com/huggingface/datasets/pull/5307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5307.patch", "merged_at": "2022-11-28T15:27:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/5306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5306/comments
https://api.github.com/repos/huggingface/datasets/issues/5306/events
https://github.com/huggingface/datasets/issues/5306
1,465,968,639
I_kwDODunzps5XYOf_
5,306
Can't use custom feature description when loading a dataset
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Forgot to actually convert the feature dict to a Feature object. Closing." ]
2022-11-28T07:55:44
2022-11-28T08:11:45
2022-11-28T08:11:44
CONTRIBUTOR
null
### Describe the bug I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load. ### Steps to reproduce the bug ```python # Creating features task_list = [f"motif_G{i}" for i in range(19, 53)] features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list} for col_name in ["class_label"]: features[col_name] = Sequence(feature=Value(dtype="int64")) for col_name in ["num_nodes"]: features[col_name] = Value(dtype="int64") for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]: features[col_name] = Sequence(feature=Value(dtype="float64")) for col_name in ["edge_attr", "node_feat", "edge_index"]: features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64"))) print(features) dataset = load_dataset(path=f"graphs-datasets/unbalanced-motifs-500K", split="train", features=features) ``` Last line will crash and say 'TypeError: argument of type 'Sequence' is not iterable'. Full stack: ``` Traceback (most recent call last): File "pretrain_tokengt.py", line 131, in <module> main(output_folder = "../workspace/pretraining", File "pretrain_tokengt.py", line 52, in main dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features) File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset builder_instance = load_dataset_builder( File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__ info.update(self._info()) File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info return datasets.DatasetInfo(features=self.config.features) File "<string>", line 20, in __init__ File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__ self.features = Features.from_dict(self.features) File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict obj = generate_from_dict(dic) File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict if "_type" not in obj or isinstance(obj["_type"], dict): TypeError: argument of type 'Sequence' is not iterable ``` ### Expected behavior For it not to crash. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5306/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5305/comments
https://api.github.com/repos/huggingface/datasets/issues/5305/events
https://github.com/huggingface/datasets/issues/5305
1,465,627,826
I_kwDODunzps5XW7Sy
5,305
Dataset joelito/mc4_legal does not work with multiple files
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting @JoelNiklaus.\r\n\r\nPlease note that since we moved all dataset loading scripts to the Hub, the issues and pull requests relative to specific datasets are directly handled on the Hub, in their Community tab. I'm transferring this issue there: https://huggingface.co/datasets/joelito/mc4_legal/discussions\r\n\r\nI am also having a look at the bug in your script.", "Issue transferred to: https://huggingface.co/datasets/joelito/mc4_legal/discussions/1" ]
2022-11-28T00:16:16
2022-11-28T07:22:42
2022-11-28T07:22:42
CONTRIBUTOR
null
### Describe the bug The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset. joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.py (debug) Found cached dataset mc4_legal (/Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/de/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f) Dataset({ features: ['index', 'url', 'timestamp', 'matches', 'text'], num_rows: 0 }) joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main)> python test_mc4_legal.py (debug) Downloading and preparing dataset mc4_legal/bg to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f... Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1240.55it/s] Dataset mc4_legal downloaded and prepared to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f. Subsequent calls will reuse this data. Dataset({ features: ['index', 'url', 'timestamp', 'matches', 'text'], num_rows: 204 }) ### Steps to reproduce the bug import datasets from datasets import load_dataset, get_dataset_config_names language = "bg" test = load_dataset("joelito/mc4_legal", language, split='train') ### Expected behavior It should display the correct number of rows for the de dataset which should be a large number (thousands or more). ### Environment info Package Version ------------------------ -------------- absl-py 1.3.0 aiohttp 3.8.1 aiosignal 1.2.0 astunparse 1.6.3 async-timeout 4.0.2 attrs 22.1.0 beautifulsoup4 4.11.1 blinker 1.4 blis 0.7.8 Bottleneck 1.3.4 brotlipy 0.7.0 cachetools 5.2.0 catalogue 2.0.7 certifi 2022.5.18.1 cffi 1.15.1 chardet 4.0.0 charset-normalizer 2.1.0 click 8.0.4 conllu 4.5.2 cryptography 38.0.1 cymem 2.0.6 datasets 2.6.1 dill 0.3.5.1 docker-pycreds 0.4.0 fasttext 0.9.2 fasttext-langdetect 1.0.3 filelock 3.0.12 flatbuffers 20210226132247 frozenlist 1.3.0 fsspec 2022.5.0 gast 0.4.0 gcloud 0.18.3 gitdb 4.0.9 GitPython 3.1.27 google-auth 2.9.0 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 googleapis-common-protos 1.57.0 grpcio 1.47.0 h5py 3.7.0 httplib2 0.21.0 huggingface-hub 0.8.1 idna 3.4 importlib-metadata 4.12.0 Jinja2 3.1.2 joblib 1.0.1 keras 2.9.0 Keras-Preprocessing 1.1.2 langcodes 3.3.0 lxml 4.9.1 Markdown 3.3.7 MarkupSafe 2.1.1 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 multidict 6.0.2 multiprocess 0.70.13 murmurhash 1.0.7 numexpr 2.8.1 numpy 1.22.3 oauth2client 4.1.3 oauthlib 3.2.1 opt-einsum 3.3.0 packaging 21.3 pandas 1.4.2 pathtools 0.1.2 pathy 0.6.1 pip 21.1.2 preshed 3.0.6 promise 2.3 protobuf 4.21.9 psutil 5.9.1 pyarrow 8.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pybind11 2.9.2 pycountry 22.3.5 pycparser 2.21 pydantic 1.8.2 PyJWT 2.4.0 pylzma 0.5.0 pyOpenSSL 22.0.0 pyparsing 3.0.4 PySocks 1.7.1 python-dateutil 2.8.2 pytz 2021.3 PyYAML 6.0 regex 2021.4.4 requests 2.28.1 requests-oauthlib 1.3.1 responses 0.18.0 rsa 4.8 sacremoses 0.0.45 scikit-learn 1.1.1 scipy 1.8.1 sentencepiece 0.1.96 sentry-sdk 1.6.0 setproctitle 1.2.3 setuptools 65.5.0 shortuuid 1.0.9 six 1.16.0 smart-open 5.2.1 smmap 5.0.0 soupsieve 2.3.2.post1 spacy 3.3.1 spacy-legacy 3.0.9 spacy-loggers 1.0.2 srsly 2.4.3 tabulate 0.8.9 tensorboard 2.9.1 
tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.9.1 tensorflow-estimator 2.9.0 termcolor 2.1.0 thinc 8.0.17 threadpoolctl 3.1.0 tokenizers 0.12.1 torch 1.13.0 tqdm 4.64.0 transformers 4.20.1 typer 0.4.1 typing-extensions 4.3.0 Unidecode 1.3.6 urllib3 1.26.12 wandb 0.12.20 wasabi 0.9.1 web-anno-tsv 0.0.1 Werkzeug 2.1.2 wget 3.2 wheel 0.35.1 wrapt 1.14.1 xxhash 3.0.0 yarl 1.8.1 zipp 3.8.0 Python 3.8.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5305/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5304/comments
https://api.github.com/repos/huggingface/datasets/issues/5304/events
https://github.com/huggingface/datasets/issues/5304
1,465,110,367
I_kwDODunzps5XU89f
5,304
timit_asr doesn't load the test split.
{ "login": "seyong92", "id": 17842800, "node_id": "MDQ6VXNlcjE3ODQyODAw", "avatar_url": "https://avatars.githubusercontent.com/u/17842800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seyong92", "html_url": "https://github.com/seyong92", "followers_url": "https://api.github.com/users/seyong92/followers", "following_url": "https://api.github.com/users/seyong92/following{/other_user}", "gists_url": "https://api.github.com/users/seyong92/gists{/gist_id}", "starred_url": "https://api.github.com/users/seyong92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seyong92/subscriptions", "organizations_url": "https://api.github.com/users/seyong92/orgs", "repos_url": "https://api.github.com/users/seyong92/repos", "events_url": "https://api.github.com/users/seyong92/events{/privacy}", "received_events_url": "https://api.github.com/users/seyong92/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The [timit_asr.py](https://huggingface.co/datasets/timit_asr/blob/main/timit_asr.py) script iterates over the WAV files per split directory using this:\r\n```python\r\nwav_paths = sorted(Path(data_dir).glob(f\"**/{split}/**/*.wav\"))\r\nwav_paths = wav_paths if wav_paths else sorted(Path(data_dir).glob(f\"**/{split.upper()}/**/*.WAV\"))\r\n```\r\n\r\nCan you check that there is a directory named \"test\" somewhere in your timit data directory ?" ]
2022-11-26T10:18:22
2022-12-01T13:28:59
null
NONE
null
### Describe the bug When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split. I tried changing the directory and file names from lower case to upper case for the test split, but it does not work at all. ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 0 }) }) ``` The directory structure of both splits is the same. (DIALECT_REGION / SPEAKER_CODE / DATA_FILES) ### Steps to reproduce the bug 1. just use ```timit = load_dataset('timit_asr', data_dir=data_dir)``` ### Expected behavior ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 1680 }) }) ``` ### Environment info - ubuntu 20.04 - python 3.9.13 - datasets 2.7.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5304/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5303/comments
https://api.github.com/repos/huggingface/datasets/issues/5303/events
https://github.com/huggingface/datasets/pull/5303
1,464,837,251
PR_kwDODunzps5DuVTa
5,303
Skip dataset verifications by default
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5303). All of your documentation changes will be reflected on that endpoint.", "100% agree that the checksum verification is overkill and not super useful. But I think this PR would also disable the check on num_examples no ?\r\n \r\nAs a user I would like to know if the dataset I'm loading changed significantly.\r\nAnd I also think it can be useful to make sure the metadata are up to date.\r\n\r\nWhat do you think ?\r\n\r\nWe could have a default `ignore_verifications=\"ignore_checksums\"`", "> We could have a default `ignore_verifications=\"ignore_checksums\"`\r\n\r\nAccepting multiple types (booleans and strings) at the same time is not the best design. Maybe we could define an enum for this parameter?", "Yes an enum sounds good !", "so we can have three verification levels, - smth like \"ignore_all\" (to skip both checksums and all other info like num_examples verification), \"ignore_checksums\" (to skip only checksums verification), and \"verify_all\" (to perform all verification)?\r\nand deprecate `ignore_verifications` param.\r\n\r\n@mariosasko if you're not going to work on this PR in the coming days, I can take over it if you want (this PR will help me with [this issue](https://github.com/huggingface/datasets/issues/5315), not super urgent though).", "Okay, I propose deprecating `ignore_verifications` in favor of `verification_mode` (`load_dataset` already has `download_mode`; some other projects use this name for verification control). `verification_mode` would accept the following enum (or strings in the same manner as `download_mode` does):\r\n\r\n```python\r\nclass VerificationMode(enum.Enum):\r\n FULL = \"full\" # runs all verification checks \r\n BASIC = \"basic\" # default, runs only the cheap ones (skips the checksum check)\r\n NONE = \"none\" # skips all the checks\r\n```\r\n\r\nWDTY?", "(copy paste from my message on slack)\r\n\r\nWhat do you think of a config variable in config.py to switch from one verification mode to another ? This way we don’t deprecate anything\r\n\r\nMany users are familiar with ignore_verifications=True, it might be overkill to deprecate it", "@lhoestq So we have \"basic\" verification mode in `config.py` and continue to have `False` as a default \r\nvalue for `ignore_verifications`? That way running all verifications including checksums would not be possible without switching the config var, right? \r\n\r\nI like having a `VerificationMode` enum because it's aligned with `DownloadMode` and sounds more natural to me (`ignore_verifications` feels a bit semantically reverted but this is probably just my feeling) and it's flexible (no need to worry about `config.py`, I'm not sure that users even know it exists, wdyt?).\r\n\r\nThe usage point seems also valid to me, but cases when users are stuck with NonMatchingX errors also happen from time to time and to figure out what's wrong is non-trivial here. \r\n\r\nAs a note aside - I suggest to add instructions to the NonMatchingX error message (how to use `ignore_verifications` / `verification_mode`), this would save users who don't know about this param a lot of time.", "Ok I see. I'm fine with the new parameter then (even though I had a small pref for the config variable) :)", "I like the idea of an enum and the `verification_mode` parameter. \r\n\r\nIn relation with the config parameter, we could additionally add a `DEFAULT_VERIFICATION_MODE`, maybe only if users require it. 
Note that until now there wasn't any config parameter for a default `ignore_verifications` value: I guess people are explicitly passing `ignore_verifications=True`...\r\n\r\nAs a note aside, I like the suggestion by @polinaeterna: we could give actionable messages when verifying checksums. This could be done in other PR.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012891 / 0.011353 (0.001538) | 0.006474 / 0.011008 (-0.004535) | 0.144038 / 0.038508 (0.105530) | 0.036151 / 0.023109 (0.013042) | 0.404366 / 0.275898 (0.128468) | 0.479988 / 0.323480 (0.156508) | 0.010219 / 0.007986 (0.002233) | 0.005319 / 0.004328 (0.000990) | 0.099705 / 0.004250 (0.095455) | 0.046639 / 0.037052 (0.009586) | 0.398997 / 0.258489 (0.140508) | 0.478431 / 0.293841 (0.184590) | 0.069125 / 0.128546 (-0.059421) | 0.019603 / 0.075646 (-0.056043) | 0.400829 / 0.419271 (-0.018443) | 0.066549 / 0.043533 (0.023016) | 0.398343 / 0.255139 (0.143204) | 0.417928 / 0.283200 (0.134728) | 0.121124 / 0.141683 (-0.020559) | 1.751513 / 1.452155 (0.299358) | 1.821239 / 1.492716 (0.328523) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251603 / 0.018006 (0.233597) | 0.579916 / 0.000490 (0.579427) | 0.003257 / 0.000200 (0.003058) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031502 / 0.037411 (-0.005909) | 0.134688 / 0.014526 (0.120162) | 0.152306 / 0.176557 (-0.024251) | 0.198943 / 0.737135 (-0.538192) | 0.142551 / 0.296338 (-0.153788) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 
5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634672 / 0.215209 (0.419463) | 6.370215 / 2.077655 (4.292561) | 2.548123 / 1.504120 (1.044003) | 2.184263 / 1.541195 (0.643069) | 2.239026 / 1.468490 (0.770536) | 1.233340 / 4.584777 (-3.351437) | 5.791824 / 3.745712 (2.046112) | 5.093032 / 5.269862 (-0.176830) | 2.849833 / 4.565676 (-1.715844) | 0.143787 / 0.424275 (-0.280488) | 0.015279 / 0.007607 (0.007672) | 0.757984 / 0.226044 (0.531939) | 7.883604 / 2.268929 (5.614675) | 3.321591 / 55.444624 (-52.123033) | 2.671777 / 6.876477 (-4.204700) | 2.685215 / 2.142072 (0.543142) | 1.546709 / 4.805227 (-3.258519) | 0.247186 / 6.500664 (-6.253478) | 0.085117 / 0.075469 (0.009648) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.679809 / 1.841788 (-0.161979) | 18.528893 / 8.074308 (10.454585) | 23.168590 / 10.191392 (12.977198) | 0.277618 / 0.680424 (-0.402806) | 0.045109 / 0.534201 (-0.489092) | 0.568873 / 0.579283 (-0.010410) | 0.695017 / 0.434364 (0.260653) | 0.671024 / 0.540337 (0.130687) | 0.823817 / 1.386936 (-0.563119) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009809 / 0.011353 (-0.001544) | 0.006890 / 0.011008 (-0.004118) | 0.099211 / 0.038508 (0.060703) | 0.035387 / 0.023109 (0.012278) | 0.507603 / 0.275898 (0.231705) | 0.535553 / 0.323480 (0.212073) | 0.007346 / 0.007986 (-0.000640) | 0.007559 / 0.004328 (0.003231) | 0.099132 / 0.004250 (0.094882) | 0.048048 / 0.037052 (0.010996) | 0.518096 / 0.258489 (0.259607) | 0.561134 / 0.293841 (0.267294) | 0.057580 / 0.128546 (-0.070966) | 0.023665 / 0.075646 (-0.051982) | 0.138409 / 0.419271 (-0.280862) | 0.061989 / 0.043533 (0.018456) | 0.510568 / 0.255139 (0.255429) | 0.552722 / 0.283200 (0.269522) | 0.115990 / 0.141683 (-0.025693) | 1.884900 / 1.452155 (0.432745) | 1.990604 / 1.492716 (0.497888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | 
get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280638 / 0.018006 (0.262632) | 0.592837 / 0.000490 (0.592347) | 0.000465 / 0.000200 (0.000265) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030253 / 0.037411 (-0.007158) | 0.141580 / 0.014526 (0.127054) | 0.135114 / 0.176557 (-0.041443) | 0.190003 / 0.737135 (-0.547133) | 0.160230 / 0.296338 (-0.136109) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699762 / 0.215209 (0.484553) | 6.632344 / 2.077655 (4.554689) | 2.718803 / 1.504120 (1.214683) | 2.485294 / 1.541195 (0.944099) | 2.579889 / 1.468490 (1.111399) | 1.268795 / 4.584777 (-3.315982) | 5.777745 / 3.745712 (2.032033) | 3.232551 / 5.269862 (-2.037311) | 2.127699 / 4.565676 (-2.437977) | 0.146570 / 0.424275 (-0.277705) | 0.015971 / 0.007607 (0.008364) | 0.803181 / 0.226044 (0.577137) | 8.377192 / 2.268929 (6.108264) | 3.551242 / 55.444624 (-51.893382) | 2.865228 / 6.876477 (-4.011249) | 2.774869 / 2.142072 (0.632797) | 1.553856 / 4.805227 (-3.251371) | 0.264510 / 6.500664 (-6.236154) | 0.087918 / 0.075469 (0.012449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.653396 / 1.841788 (-0.188391) | 18.703863 / 8.074308 (10.629555) | 22.067331 / 10.191392 (11.875939) | 0.257424 / 0.680424 (-0.422999) | 0.026448 / 0.534201 (-0.507753) | 0.550100 / 0.579283 (-0.029183) | 0.647296 / 0.434364 (0.212932) | 0.657476 / 0.540337 (0.117138) | 0.781119 / 1.386936 (-0.605817) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c4a9cb95f8742a2850f11d59abbef71d6c1f60c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | 
read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008889 / 0.011353 (-0.002464) | 0.004563 / 0.011008 (-0.006445) | 0.101627 / 0.038508 (0.063118) | 0.030526 / 0.023109 (0.007417) | 0.297175 / 0.275898 (0.021277) | 0.368454 / 0.323480 (0.044974) | 0.007246 / 0.007986 (-0.000740) | 0.003565 / 0.004328 (-0.000763) | 0.078644 / 0.004250 (0.074394) | 0.038616 / 0.037052 (0.001564) | 0.310521 / 0.258489 (0.052032) | 0.348014 / 0.293841 (0.054173) | 0.033463 / 0.128546 (-0.095083) | 0.011544 / 0.075646 (-0.064102) | 0.323281 / 0.419271 (-0.095990) | 0.040187 / 0.043533 (-0.003346) | 0.298015 / 0.255139 (0.042876) | 0.326392 / 0.283200 (0.043193) | 0.088730 / 0.141683 (-0.052952) | 1.503387 / 1.452155 (0.051233) | 1.548704 / 1.492716 (0.055988) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185983 / 0.018006 (0.167977) | 0.451889 / 0.000490 (0.451400) | 0.001433 / 0.000200 (0.001233) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023396 / 0.037411 (-0.014015) | 0.118236 / 0.014526 (0.103710) | 0.124594 / 0.176557 (-0.051962) | 0.159089 / 0.737135 (-0.578047) | 0.129369 / 0.296338 (-0.166969) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423161 / 0.215209 (0.207952) | 4.228211 / 2.077655 (2.150556) | 1.853862 / 1.504120 (0.349742) | 1.649471 / 1.541195 (0.108276) | 1.708631 / 1.468490 (0.240141) | 0.697456 / 4.584777 (-3.887321) | 3.473244 / 3.745712 (-0.272468) | 1.942586 / 5.269862 (-3.327275) | 1.291592 / 4.565676 (-3.274084) | 0.082758 / 0.424275 (-0.341517) | 0.012256 / 0.007607 (0.004649) | 0.528355 / 0.226044 (0.302311) | 5.277620 / 2.268929 (3.008691) | 2.299604 / 55.444624 (-53.145020) | 1.954940 / 6.876477 (-4.921537) | 2.055543 / 2.142072 (-0.086529) | 0.814723 / 4.805227 (-3.990505) | 0.149937 / 6.500664 (-6.350727) | 0.064529 / 0.075469 (-0.010941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266240 / 1.841788 (-0.575547) | 14.144016 
/ 8.074308 (6.069708) | 14.331733 / 10.191392 (4.140340) | 0.138963 / 0.680424 (-0.541461) | 0.029034 / 0.534201 (-0.505167) | 0.397325 / 0.579283 (-0.181958) | 0.405293 / 0.434364 (-0.029071) | 0.480745 / 0.540337 (-0.059592) | 0.573386 / 1.386936 (-0.813550) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.004569 / 0.011008 (-0.006439) | 0.078718 / 0.038508 (0.040209) | 0.031104 / 0.023109 (0.007995) | 0.342562 / 0.275898 (0.066664) | 0.387802 / 0.323480 (0.064322) | 0.005378 / 0.007986 (-0.002608) | 0.003414 / 0.004328 (-0.000915) | 0.077249 / 0.004250 (0.072999) | 0.044337 / 0.037052 (0.007285) | 0.341397 / 0.258489 (0.082907) | 0.385536 / 0.293841 (0.091695) | 0.033257 / 0.128546 (-0.095289) | 0.011825 / 0.075646 (-0.063821) | 0.086723 / 0.419271 (-0.332549) | 0.045951 / 0.043533 (0.002418) | 0.340914 / 0.255139 (0.085775) | 0.367126 / 0.283200 (0.083926) | 0.096326 / 0.141683 (-0.045357) | 1.608612 / 1.452155 (0.156458) | 1.687251 / 1.492716 (0.194534) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227595 / 0.018006 (0.209589) | 0.418502 / 0.000490 (0.418013) | 0.000392 / 0.000200 (0.000192) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026232 / 0.037411 (-0.011179) | 0.101020 / 0.014526 (0.086494) | 0.110017 / 0.176557 (-0.066539) | 0.153497 / 0.737135 (-0.583639) | 0.110602 / 0.296338 (-0.185737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433789 / 0.215209 (0.218579) | 4.329350 / 2.077655 (2.251696) | 2.052136 / 1.504120 (0.548016) | 1.848457 / 1.541195 (0.307262) | 1.936791 / 1.468490 (0.468301) | 0.700609 / 4.584777 (-3.884168) | 3.391983 / 3.745712 (-0.353729) | 1.903220 / 5.269862 (-3.366642) | 1.179463 / 4.565676 (-3.386213) | 0.084025 / 0.424275 (-0.340250) | 0.012743 / 0.007607 (0.005136) | 0.536816 / 0.226044 (0.310772) | 5.420230 / 2.268929 (3.151302) | 2.507438 / 55.444624 (-52.937187) | 2.178907 / 6.876477 (-4.697570) | 2.228586 / 2.142072 (0.086514) | 0.812527 / 4.805227 (-3.992701) | 0.153382 / 6.500664 (-6.347282) | 0.069932 / 0.075469 (-0.005537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256861 / 1.841788 (-0.584927) | 14.309236 / 8.074308 (6.234928) | 13.740323 / 10.191392 (3.548931) | 0.142698 / 0.680424 (-0.537726) | 0.016998 / 0.534201 (-0.517203) | 0.385489 / 0.579283 (-0.193794) | 0.391515 / 0.434364 (-0.042849) | 0.472704 / 0.540337 (-0.067633) | 0.565042 / 1.386936 (-0.821894) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4b0713ddf2e2e7129d9ccda791d265684c96675c \"CML watermark\")\n" ]
2022-11-25T18:39:09
2023-01-27T16:05:38
null
CONTRIBUTOR
null
Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets. PS: Maybe we should deprecate `ignore_verifications`, which is `True` now by default, and give it a different name?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5303/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5303/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5303", "html_url": "https://github.com/huggingface/datasets/pull/5303", "diff_url": "https://github.com/huggingface/datasets/pull/5303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5303.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5302/comments
https://api.github.com/repos/huggingface/datasets/issues/5302/events
https://github.com/huggingface/datasets/pull/5302
1,464,778,901
PR_kwDODunzps5DuJJp
5,302
Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-25T17:09:21
2022-12-09T14:20:15
2022-12-09T14:17:20
CONTRIBUTOR
null
Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5302/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5302", "html_url": "https://github.com/huggingface/datasets/pull/5302", "diff_url": "https://github.com/huggingface/datasets/pull/5302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5302.patch", "merged_at": "2022-12-09T14:17:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/5301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5301/comments
https://api.github.com/repos/huggingface/datasets/issues/5301/events
https://github.com/huggingface/datasets/pull/5301
1,464,749,156
PR_kwDODunzps5DuCzR
5,301
Return a split Dataset in load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5301). All of your documentation changes will be reflected on that endpoint.", "Just noticed that now we have to deal with indexed & split datasets. The remaining tests are failing because one should be able to get an indexed dataset when accessing the split of a dataset made of indexed splits (right now the index is just trashed)" ]
2022-11-25T16:35:54
2022-11-30T16:53:34
null
MEMBER
null
...instead of a DatasetDict. ```python # now supported ds = load_dataset("squad") ds[0] for example in ds: pass # still works ds["train"] ds["validation"] # new ds.splits # Dict[str, Dataset] | None # soon to be supported (not in this PR) ds = load_dataset("dataset_with_no_splits") ds[0] for example in ds: pass ``` I implemented `Dataset.__getitem__` and `IterableDataset.__getitem__` to be able to get a split from a dataset. The splits are defined by the `ds.info.splits` dictionary. Therefore a dataset is a table that optionally has some splits defined in the dataset info. And a split dataset is the concatenation of all its splits. I made as few breaking changes as possible. Notable breaking changes: - `load_dataset("potato").keys() / .items() / .values() /` don't work anymore, since we don't return a dict - same for `for split_name in load_dataset("potato")`, since we now iterate over the examples - .. TODO: - [x] Update push_to_hub - [x] Update save_to_disk/load_from_disk - [ ] check for other breaking changes - [ ] fix existing tests - [ ] add new tests - [ ] docs This is related to https://github.com/huggingface/datasets/issues/5189, to extend `load_dataset` to return datasets without splits
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5301/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5301/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5301", "html_url": "https://github.com/huggingface/datasets/pull/5301", "diff_url": "https://github.com/huggingface/datasets/pull/5301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5301.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5300/comments
https://api.github.com/repos/huggingface/datasets/issues/5300/events
https://github.com/huggingface/datasets/pull/5300
1,464,697,136
PR_kwDODunzps5Dt3uK
5,300
Use same `num_proc` for dataset download and generation
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I noticed this bug the other day and was going to look into it! \"Where are these processes coming from?\" ;-)" ]
2022-11-25T15:37:42
2022-12-07T12:55:39
2022-12-07T12:52:51
CONTRIBUTOR
null
Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5300/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5300/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5300", "html_url": "https://github.com/huggingface/datasets/pull/5300", "diff_url": "https://github.com/huggingface/datasets/pull/5300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5300.patch", "merged_at": "2022-12-07T12:52:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/5299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5299/comments
https://api.github.com/repos/huggingface/datasets/issues/5299/events
https://github.com/huggingface/datasets/pull/5299
1,464,695,091
PR_kwDODunzps5Dt3Sk
5,299
Fix xopen for Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-25T15:35:28
2022-11-29T08:23:58
2022-11-29T08:21:24
MEMBER
null
This PR fixes a bug in the `xopen` function for Windows pathnames. Fix #5298.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5299/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5299", "html_url": "https://github.com/huggingface/datasets/pull/5299", "diff_url": "https://github.com/huggingface/datasets/pull/5299.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5299.patch", "merged_at": "2022-11-29T08:21:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/5298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5298/comments
https://api.github.com/repos/huggingface/datasets/issues/5298/events
https://github.com/huggingface/datasets/issues/5298
1,464,681,871
I_kwDODunzps5XTUWP
5,298
Bug in xopen with Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-25T15:21:32
2022-11-29T08:21:25
2022-11-29T08:21:25
MEMBER
null
Currently, `xopen` function has a bug with local Windows pathnames: From its implementation: ```python def xopen(file: str, mode="r", *args, **kwargs): file = _as_posix(PurePath(file)) main_hop, *rest_hops = file.split("::") if is_local_path(main_hop): return open(file, mode, *args, **kwargs) ``` On a Windows machine, if we pass the argument: ```python xopen("C:\\Users\\USERNAME\\filename.txt") ``` it returns ```python open("C:/Users/USERNAME/filename.txt") ```
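For readers who want to reproduce the conversion described above without a Windows machine, here is a minimal sketch using `PureWindowsPath` (so it runs on any OS); it only illustrates the reported behaviour and is not the merged fix:

```python
from pathlib import PureWindowsPath

# The reported behaviour: normalizing the pathname to POSIX form before
# opening it flips the Windows separators.
original = "C:\\Users\\USERNAME\\filename.txt"
as_posix = PureWindowsPath(original).as_posix()

print(original)  # C:\Users\USERNAME\filename.txt
print(as_posix)  # C:/Users/USERNAME/filename.txt

# A local-path branch that keeps the OS-native pathname would call
# open(original, ...) instead of open(as_posix, ...).
```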
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5298/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5297/comments
https://api.github.com/repos/huggingface/datasets/issues/5297/events
https://github.com/huggingface/datasets/pull/5297
1,464,554,491
PR_kwDODunzps5DtZjg
5,297
Fix xjoin for Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-25T13:30:17
2022-11-29T08:07:39
2022-11-29T08:05:12
MEMBER
null
This PR fixes a bug in `xjoin` function with Windows pathnames. Fix #5296.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5297/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5297", "html_url": "https://github.com/huggingface/datasets/pull/5297", "diff_url": "https://github.com/huggingface/datasets/pull/5297.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5297.patch", "merged_at": "2022-11-29T08:05:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/5296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5296/comments
https://api.github.com/repos/huggingface/datasets/issues/5296/events
https://github.com/huggingface/datasets/issues/5296
1,464,553,580
I_kwDODunzps5XS1Bs
5,296
Bug in xjoin with Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-25T13:29:33
2022-11-29T08:05:13
2022-11-29T08:05:13
MEMBER
null
Currently, `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent join pathname, it always returns it in POSIX format. ```python from datasets.download.streaming_download_manager import xjoin path = xjoin("C:\\Users\\USERNAME", "filename.txt") ``` Join path should be: ```python "C:\\Users\\USERNAME\\filename.txt" ``` However it is: ```python "C:/Users/USERNAME/filename.txt" ```
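A small illustration of the difference using only the standard library (an assumption-based sketch of what the reported behaviour amounts to, not the actual `xjoin` code):

```python
import ntpath
import posixpath
from pathlib import PureWindowsPath

base, name = "C:\\Users\\USERNAME", "filename.txt"

# What the report describes: the join is built in POSIX form.
posix_join = posixpath.join(PureWindowsPath(base).as_posix(), name)
print(posix_join)    # C:/Users/USERNAME/filename.txt

# What one would expect for a local Windows path: the OS-dependent join.
windows_join = ntpath.join(base, name)
print(windows_join)  # C:\Users\USERNAME\filename.txt
```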
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5296/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5295/comments
https://api.github.com/repos/huggingface/datasets/issues/5295/events
https://github.com/huggingface/datasets/issues/5295
1,464,006,743
I_kwDODunzps5XQvhX
5,295
Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode)
{ "login": "verdimrc", "id": 2340781, "node_id": "MDQ6VXNlcjIzNDA3ODE=", "avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/verdimrc", "html_url": "https://github.com/verdimrc", "followers_url": "https://api.github.com/users/verdimrc/followers", "following_url": "https://api.github.com/users/verdimrc/following{/other_user}", "gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions", "organizations_url": "https://api.github.com/users/verdimrc/orgs", "repos_url": "https://api.github.com/users/verdimrc/repos", "events_url": "https://api.github.com/users/verdimrc/events{/privacy}", "received_events_url": "https://api.github.com/users/verdimrc/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Thanks for reporting. Indeed the lock file should be placed in a directory with write permission (e.g. in the directory where the archive is extracted).", "I opened https://github.com/huggingface/datasets/pull/5320 to fix this - it places the lock file in the cache directory instead of trying to put in next to the ZIP where it's read-only" ]
2022-11-25T03:59:43
2022-12-01T13:56:40
null
NONE
null
### Describe the bug Hi, `load_dataset()` does not work .zip files located on a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. Encountered this when attempting `load_dataset()` on a datadir with SageMaker FastFile mode. ### Steps to reproduce the bug ```python # Showing relevant lines only. hyperparameters = { "dataset_name": "ydshieh/coco_dataset_script", "dataset_config_name": 2017, "data_dir": "/opt/ml/input/data/coco", "cache_dir": "/tmp/huggingface-cache", # Fix dataset complains out-of-space. ... } estimator = PyTorch( base_job_name="clip", source_dir="../src/sm-entrypoint", entry_point="run_clip.py", # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py framework_version="1.12", py_version="py38", hyperparameters=hyperparameters, instance_count=1, instance_type="ml.p3.16xlarge", volume_size=100, distribution={"smdistributed": {"dataparallel": {"enabled": True}}}, ) fast_file = lambda x: TrainingInput(x, input_mode='FastFile') estimator.fit( { "pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"), "coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"), } ) ``` Error message: ```text ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock' """ The above exception was the direct cause of the following exception Traceback (most recent call last) File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module> main() File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main run_command_line(args) File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line run_path(sys.argv[0], run_name='__main__') File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path return _run_module_code(code, init_globals, run_name, File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "run_clip_smddp.py", line 594, in <module> File "run_clip_smddp.py", line 327, in main dataset = load_dataset( File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators archive_path = dl_manager.download_and_extract(_DL_URLS) File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract return self.extract(self.download(url_or_urls)) File 
"/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract extracted_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested mapped = pool.map(_single_map_nested, split_kwds) File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get raise self._value OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'" ``` ### Expected behavior `load_dataset()` to succeed, just like when .zip file is passed in SageMaker File mode. ### Environment info * datasets-2.7.1 * transformers-4.24.0 * python-3.8 * torch-1.12 * SageMaker PyTorch DLC
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5295/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5294/comments
https://api.github.com/repos/huggingface/datasets/issues/5294/events
https://github.com/huggingface/datasets/pull/5294
1,463,679,582
PR_kwDODunzps5DqgLW
5,294
Support streaming datasets with pathlib.Path.with_suffix
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-24T18:04:38
2022-11-29T07:09:08
2022-11-29T07:06:32
MEMBER
null
This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`. Fix #5293.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5294/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5294", "html_url": "https://github.com/huggingface/datasets/pull/5294", "diff_url": "https://github.com/huggingface/datasets/pull/5294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5294.patch", "merged_at": "2022-11-29T07:06:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/5293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5293/comments
https://api.github.com/repos/huggingface/datasets/issues/5293/events
https://github.com/huggingface/datasets/issues/5293
1,463,669,201
I_kwDODunzps5XPdHR
5,293
Support streaming datasets with pathlib.Path.with_suffix
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-24T17:52:08
2022-11-29T07:06:33
2022-11-29T07:06:33
MEMBER
null
Extend support for streaming datasets that use `pathlib.Path.with_suffix`. This feature will be useful e.g. for datasets containing text files and annotation files with the same name but a different extension.
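As a concrete (hypothetical) example of the pattern this enables, a loading script can derive the annotation file for each text file with `with_suffix`; the file names below are made up:

```python
from pathlib import Path

# E.g. a corpus laid out as "doc1.txt" + "doc1.ann", "doc2.txt" + "doc2.ann", ...
text_file = Path("data/doc1.txt")
annotation_file = text_file.with_suffix(".ann")
print(annotation_file)  # data/doc1.ann

# With the extended streaming support, the same `.with_suffix(...)` call is
# meant to keep working when the script runs in streaming mode, where paths
# are patched to URL-like objects (see the linked PR for the implementation).
```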
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5293/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5292/comments
https://api.github.com/repos/huggingface/datasets/issues/5292/events
https://github.com/huggingface/datasets/issues/5292
1,463,053,832
I_kwDODunzps5XNG4I
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539574442/jobs/5941636792" ]
2022-11-24T09:42:10
2022-11-24T10:10:02
2022-11-24T10:10:02
MEMBER
null
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix by: - #5291 However, both docs were built from the main branch instead of their corresponding version branches. We are rebuilding them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5292/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5291/comments
https://api.github.com/repos/huggingface/datasets/issues/5291/events
https://github.com/huggingface/datasets/pull/5291
1,462,983,472
PR_kwDODunzps5DoKNC
5,291
[build doc] for v2.7.1 & v2.6.2
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "doc versions are built https://huggingface.co/docs/datasets/index" ]
2022-11-24T08:54:47
2022-11-24T09:14:10
2022-11-24T09:11:15
CONTRIBUTOR
null
Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5291/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5291", "html_url": "https://github.com/huggingface/datasets/pull/5291", "diff_url": "https://github.com/huggingface/datasets/pull/5291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5291.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5290/comments
https://api.github.com/repos/huggingface/datasets/issues/5290/events
https://github.com/huggingface/datasets/pull/5290
1,462,716,766
PR_kwDODunzps5DnQsS
5,290
fix error where reading breaks when batch missing an assigned column feature
{ "login": "eunseojo", "id": 12104720, "node_id": "MDQ6VXNlcjEyMTA0NzIw", "avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eunseojo", "html_url": "https://github.com/eunseojo", "followers_url": "https://api.github.com/users/eunseojo/followers", "following_url": "https://api.github.com/users/eunseojo/following{/other_user}", "gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}", "starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions", "organizations_url": "https://api.github.com/users/eunseojo/orgs", "repos_url": "https://api.github.com/users/eunseojo/repos", "events_url": "https://api.github.com/users/eunseojo/events{/privacy}", "received_events_url": "https://api.github.com/users/eunseojo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5290). All of your documentation changes will be reflected on that endpoint." ]
2022-11-24T03:53:46
2022-11-25T03:21:54
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5290/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5290", "html_url": "https://github.com/huggingface/datasets/pull/5290", "diff_url": "https://github.com/huggingface/datasets/pull/5290.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5290.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5289/comments
https://api.github.com/repos/huggingface/datasets/issues/5289/events
https://github.com/huggingface/datasets/pull/5289
1,462,543,139
PR_kwDODunzps5Dmrk9
5,289
Added support for JXL images.
{ "login": "alexjc", "id": 445208, "node_id": "MDQ6VXNlcjQ0NTIwOA==", "avatar_url": "https://avatars.githubusercontent.com/u/445208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexjc", "html_url": "https://github.com/alexjc", "followers_url": "https://api.github.com/users/alexjc/followers", "following_url": "https://api.github.com/users/alexjc/following{/other_user}", "gists_url": "https://api.github.com/users/alexjc/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexjc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexjc/subscriptions", "organizations_url": "https://api.github.com/users/alexjc/orgs", "repos_url": "https://api.github.com/users/alexjc/repos", "events_url": "https://api.github.com/users/alexjc/events{/privacy}", "received_events_url": "https://api.github.com/users/alexjc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I'm fine with the addition of jxl in the list of known image extensions, this way users that have the plugin can work with their JXL datasets. WDYT @mariosasko ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5289). All of your documentation changes will be reflected on that endpoint.", "I think we should wait for official support from Pillow. Plus, the linked plugin doesn't support `Image.save`, which is one of the requirements for a format to be included in `IMAGE_EXTENSIONS`.\r\n\r\n@alexjc In the meantime, one option is to add these lines to the card:\r\n```python\r\nimport importlib\r\nimport datasets\r\n\r\nif \".jxl\" not in datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS:\r\n datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS.append(\".jxl\")\r\n\r\nif \"jxl\" not in datasets.packaged_modules._EXTENSION_TO_MODULE:\r\n datasets.packaged_modules._EXTENSION_TO_MODULE[\"jxl\"] = (\"imagefolder\", {})\r\n\r\nimportlib.reload(datasets.load)\r\nds = datasets.load_dataset(\"texturedesign/td01_natural-ground-textures\")\r\n```\r\nAnd you can add a note to the card that this dataset requires the \"jxlpy\" package to work. \r\n\r\nIn this case, you can also disable the viewer to avoid the discrepancy between the data displayed in the preview and the loaded data.\r\n\r\nAnother option is to define the loading script and add `jxlpy` to the list of dependencies [here](https://github.com/huggingface/datasets-server/blob/3012da62054a025467616abc14b0b46e1f11ea13/workers/first_rows/pyproject.toml#L8) to enable the viewer. This option requires more work, so let us know if you need help.", "Thank you both for your thoughtful replies!\r\n\r\nOne questions and and update:\r\n* The jxlpy plugin does support saving, in the `_save` function of the JXLImagePlugin file. Did it not work? I'm working on the upgrade to the latest JXL, so it'd be good to know if it failed so I can fix it.\r\n* I wrote to the Pillow maintainer and the preferred solution would be to keep JXL as a separate plugin because they're a small team don't have the resources to maintain more code.\r\n\r\nWith that in mind, let me share the minimal set of features I'd need for this to work within the `datasets` library:\r\n1. Using `load_dataset()` with the HuggingFace dataset name correctly downloads the JXL files so they are available locally. Even if the `file_name` field is left intact and not loaded as a PIL image, this is the first step.\r\n2. With minimal monkey-patching, having the `load_dataset` correctly expand `file_name` into PIL `image` fields if JXL support is available.\r\n\r\nIf both of these work, then I can use HuggingFace's hub and the `datasets` library for an MVP even if not all features are there. I don't need automatic thumbnails or previews of the dataset on the server.\r\n\r\n\r\nGiven the reply from the Pillow maintainer, what solution can we come up with that works in a more permanent way than waiting for Pillow integration (which may not happen) — assuming users install the `jxlpy` plugin separately?", "Link to my upgrade for the latest `libjxl`, pending review and merge. I tested load/save via Pillow extensively for this: https://github.com/olokelo/jxlpy/pull/13", "After more research, here's my latest suggestion:\r\n* Depending on the build of pillow, the source (pip or conda), the platform even, certain formats may or may not be available — despite them being in the list. 
For example, webp support is not consistently available.\r\n* I'd suggest adding JXL to the list and simply catching the `PIL.UnidentifiedImageError` — printing a useful error message that sends them to a Wiki page to find out what to do.\r\n* On that page would be included instructions how to install support for the format and what to do for the dataset to load correctly on any platform, both with or without conda, etc.\r\n\r\nWhat do you think?", "> The jxlpy plugin does support saving, in the _save function of the JXLImagePlugin file. Did it not work? I'm working on the upgrade to the latest JXL, so it'd be good to know if it failed so I can fix it.\r\n\r\nMy bad, I was referring to [this](https://github.com/google/brunsli/blob/2dd949e53ed05796eb44a31cc759fbf9e6c53e2f/contrib/py/jxl_library_patches/jxl_pillow.py) version of the plugin.\r\n\r\nI still think this involves too much work:\r\n* would require a new doc page\r\n* unofficial plugins have to be imported explicitly, leading to messier code on our side\r\n* etc.\r\n\r\nFor now, it seems more reasonable to create a loading script (faster than ImageFolder, as ImageFolder has to resolve the image files first) for this particular case and add `jxlpy` to the list of the `datasets-server`'s dependencies. Also, one additional advantage of this approach is that it reports if any of the modules imported in a script is missing, which is handy in your case for the plugin lib. WDYT?", "OK, let me try it it and I'll report back.\r\n\r\nWill the JXL files (even if unknown format) be automatically downloaded if they are linked from the `.jsonl` file?\r\n\r\n(I had trouble getting that working before this patch.)", "> Will the JXL files (even if unknown format) be automatically downloaded if they are linked from the .jsonl file?\r\n\r\nNo, they need to be downloaded explicitly.\r\n\r\nFeel free to use 🤗 Hub discussions in your dataset repo to ping us for help (our usernames are the same there)", "Is it possible to add support for JXL files being downloaded without needing to add server-side rendering support?", "In the loading script, data files are downloaded with `DownloadManager` (`dl_manager` in `_split_generators`), which doesn't have any requirements regarding the actual type of the downloaded files.\r\n\r\nPS: Let's use the forum or Hub discussions for further questions to avoid pinging other participants" ]
2022-11-23T23:16:33
2022-11-29T18:49:46
null
NONE
null
JPEG-XL is the most advanced of the next-generation of image codecs, supporting both lossless and lossy files — with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use. Pillow does not yet support JXL, but there's a plugin as a separate Python library that does (`pip install jxlpy`), and I've tested that this change works as expected when the plugin is imported. Dataset used for testing, you must `git pull` as loading it from Python won't work until `datasets-server` is also changed to support JXL files: https://huggingface.co/datasets/texturedesign/td01_natural-ground-textures The case where the plugin is not imported first raises an error: ``` PIL.UnidentifiedImageError: cannot identify image file 'td01/train/set01/01_145523.jxl' ``` In order to enable support for JXL even before pillow supports this, should this exception be handled with a better error message? I'd expect/hope JXL support to follow in one of the pillow quarterly releases in the next 6-9 months.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5289/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5289", "html_url": "https://github.com/huggingface/datasets/pull/5289", "diff_url": "https://github.com/huggingface/datasets/pull/5289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5289.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5288/comments
https://api.github.com/repos/huggingface/datasets/issues/5288/events
https://github.com/huggingface/datasets/issues/5288
1,462,134,067
I_kwDODunzps5XJmUz
5,288
Lossy json serialization - deserialization of dataset info
{ "login": "anuragprat1k", "id": 57542204, "node_id": "MDQ6VXNlcjU3NTQyMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/57542204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anuragprat1k", "html_url": "https://github.com/anuragprat1k", "followers_url": "https://api.github.com/users/anuragprat1k/followers", "following_url": "https://api.github.com/users/anuragprat1k/following{/other_user}", "gists_url": "https://api.github.com/users/anuragprat1k/gists{/gist_id}", "starred_url": "https://api.github.com/users/anuragprat1k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anuragprat1k/subscriptions", "organizations_url": "https://api.github.com/users/anuragprat1k/orgs", "repos_url": "https://api.github.com/users/anuragprat1k/repos", "events_url": "https://api.github.com/users/anuragprat1k/events{/privacy}", "received_events_url": "https://api.github.com/users/anuragprat1k/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! JSON is a lossy format indeed. If you want to keep the feature types or other metadata I'd encourage you to store them as well. For example you can use `dataset.info.write_to_directory` and `DatasetInfo.from_directory` to store the feature types, split info, description, license etc." ]
2022-11-23T17:20:15
2022-11-25T12:53:51
null
NONE
null
### Describe the bug Saving a dataset to disk as json (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have a label of type `ClassLabel` but has type `Value` instead. ### Steps to reproduce the bug ``` from datasets import load_dataset def test_serdes_from_json(d): dataset = load_dataset(d, split="train") dataset.to_json('_test') dataset_loaded = load_dataset("json", data_files='_test', split='train') try: assert dataset_loaded.info.features == dataset.info.features, "features unequal!" except Exception as ex: print(f'{ex}') print(f'expected {dataset.info.features}, \nactual { dataset_loaded.info.features }') test_serdes_from_json('rotten_tomatoes') ``` Output ``` features unequal! expected {'text': Value(dtype='string', id=None), 'label': ClassLabel(names=['neg', 'pos'], id=None)}, actual {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)} ``` ### Expected behavior The deserialized `features.label` should have type `ClassLabel`. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.144-127.601.amzn2.x86_64-x86_64-with-glibc2.17 - Python version: 3.7.13 - PyArrow version: 7.0.0 - Pandas version: 1.2.3
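Building on the workaround suggested in the comments above, a sketch of how the feature types could be persisted alongside the JSON export and re-applied after reloading (assuming the `datasets` API at the versions listed; not an official recipe):

```python
import os
from datasets import DatasetInfo, load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")
dataset.to_json("_test")

# Persist the feature types (and other metadata) separately, since JSON alone is lossy.
os.makedirs("_test_info", exist_ok=True)
dataset.info.write_to_directory("_test_info")

# Reload from JSON, then re-apply the saved feature types.
reloaded = load_dataset("json", data_files="_test", split="train")
saved_info = DatasetInfo.from_directory("_test_info")
reloaded = reloaded.cast(saved_info.features)  # 'label' becomes ClassLabel again

assert reloaded.features == dataset.features
```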
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5288/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5287/comments
https://api.github.com/repos/huggingface/datasets/issues/5287/events
https://github.com/huggingface/datasets/pull/5287
1,461,971,889
PR_kwDODunzps5Dkttf
5,287
Fix methods using `IterableDataset.map` that lead to `features=None`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._", "Maybe other options are:\r\n* Keep the `info.features` to `None` if those were initially `None`\r\n* Infer the features with pre-fetching just if the `info.features` is `None`\r\n* If the `info.features` are there, make sure that after `map` features is not `None`", "Hi @lhoestq something that's still not clear to me is: should we infer the features always when applying a `map` if those are initially `None`, or just assume that if the features are initially `None` those should be left that way unless the user specifically sets those (or during iter)?\r\n\r\nIn this PR I'm using `from datasets.iterable_dataset import _infer_features_from_batch` to infer the features when those are `None` using pre-fetch of `self._head()`, but I'm not sure if that's the expected behavior.\r\n\r\nThanks in advance for your help!", "Also, the PR still has some more work to do, but probably the most relevant thing to fix right now is that the `features` are being set to `None` in the functions `IterableDataset.rename_column`, `IterableDataset.rename_columns`, and `IterableDataset.remove_columns` when the `features` originally had a value. So once that's fixed maybe we can focus on improving the current `map`'s behavior, so as to avoid this from happening also when the user uses `map` directly and not through the functions mentioned above.", "> Cool thank you ! Resolving the features can be expensive sometimes, so maybe we don't resolve the features and we can just rename/remove columns if the features are known (i.e. if they're not None). What do you think ?\r\n\r\nThanks for the feedback! Makes sense to me 👍🏻 I'll commit the comments now!", "Already done @lhoestq, feel free to merge whenever you want! Also before merging, can you please link the following issues https://github.com/huggingface/datasets/issues/3888, https://github.com/huggingface/datasets/issues/5245, and https://github.com/huggingface/datasets/issues/5284, so that those are closed upon merge? Thanks!" ]
2022-11-23T15:33:25
2022-11-28T15:43:14
2022-11-28T12:53:22
CONTRIBUTOR
null
Since `IterableDataset.map` currently sets `info.features` to `None` every time (we don't know the output of the dataset in advance), the `IterableDataset` methods that internally use `map`, such as `rename_column`, `rename_columns`, and `remove_columns`, also end up with `features` set to `None`. This PR is related to #3888, #5245, and #5284 ## ✅ Current solution The code in this PR basically makes sure that if the features were there from the beginning and a `rename_column`/`rename_columns` happens, those are kept and the rename is applied to the `Features` too. Also, if the features were not there before applying `rename_column`, `rename_columns` or `remove_columns`, a batch is prefetched and the features are inferred (that could potentially be part of `IterableDataset.__init__` in case the `info.features` value is `None`). ## 💡 Ideas Some ideas were proposed in https://github.com/huggingface/datasets/issues/3888, but probably the most consistent solution, even though it may take some time, is to actually do the type inference during `IterableDataset.__init__` in case the provided `info.features` is `None`; otherwise, we can just use the provided features. Additionally, as mentioned at https://github.com/huggingface/datasets/issues/3888, we could also include a `features` parameter in the `map` function, but that's probably more tedious. Also thanks to @lhoestq for sharing some ideas in both https://github.com/huggingface/datasets/issues/3888 and https://github.com/huggingface/datasets/issues/5245 :hugs:
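A short sketch of the symptom being fixed, from the user's point of view (the pre-fix behaviour below is an assumption based on the linked issues rather than a quoted test):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(ds.features)  # {'text': Value(dtype='string', ...), 'label': ClassLabel(...)}

renamed = ds.rename_column("text", "content")
# Before this PR: `renamed.features` is None, because `rename_column` goes
# through `map` and the features were dropped.
# After this PR: the known features are kept, with the column renamed.
print(renamed.features)
```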
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5287/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5287", "html_url": "https://github.com/huggingface/datasets/pull/5287", "diff_url": "https://github.com/huggingface/datasets/pull/5287.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5287.patch", "merged_at": "2022-11-28T12:53:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/5286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5286/comments
https://api.github.com/repos/huggingface/datasets/issues/5286/events
https://github.com/huggingface/datasets/issues/5286
1,461,908,087
I_kwDODunzps5XIvJ3
5,286
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
{ "login": "roritol", "id": 32490135, "node_id": "MDQ6VXNlcjMyNDkwMTM1", "avatar_url": "https://avatars.githubusercontent.com/u/32490135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roritol", "html_url": "https://github.com/roritol", "followers_url": "https://api.github.com/users/roritol/followers", "following_url": "https://api.github.com/users/roritol/following{/other_user}", "gists_url": "https://api.github.com/users/roritol/gists{/gist_id}", "starred_url": "https://api.github.com/users/roritol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roritol/subscriptions", "organizations_url": "https://api.github.com/users/roritol/orgs", "repos_url": "https://api.github.com/users/roritol/repos", "events_url": "https://api.github.com/users/roritol/events{/privacy}", "received_events_url": "https://api.github.com/users/roritol/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I found a solution \r\n\r\nIf you specifically install datasets==1.18 and then run\r\n\r\nimport datasets\r\nwiki = datasets.load_dataset('wikipedia', '20200501.en')\r\nthen this should work (it worked for me.)" ]
2022-11-23T14:54:15
2022-11-25T11:33:14
2022-11-25T11:33:14
NONE
null
### Describe the bug

I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia)

```
$ pip install apache_beam mwparserfromhell

>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
```

however this results in the following error:

```
raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
```

If I then prompt the system with:

```
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```

the following error occurs:

```
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```

Here is the exact code:

```
Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset('wikipedia', '20220301.en')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 22.2MB/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1879, in _download_and_prepare
    raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 18.8MB/s]
Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1909, in _download_and_prepare
    super()._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 891, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/rorytol/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 945, in _split_generators
    downloaded_files = dl_manager.download_and_extract({"info": info_url})
  File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 447, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 311, in download
    downloaded_path_or_paths = map_nested(
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 444, in map_nested
    mapped = [
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 445, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
    return function(data_struct)
  File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 338, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 183, in cached_path
    output_path = get_from_cache(
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 530, in get_from_cache
    raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```

### Steps to reproduce the bug

```
$ pip install apache_beam mwparserfromhell

>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```

### Expected behavior

Download the dataset

### Environment info

Running linux on a remote workstation operated through a macbook terminal

Python 3.10.6
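For reference, the resolution reported in the comments above can be written out as a short, hedged sketch: it pins an older `datasets` release and requests a dump date that release still resolves. Both the pinned version and the dump date come from the commenter's report rather than from an official fix, so treat them as assumptions.

```python
# Workaround sketch based on the comment above (assumptions: datasets==1.18
# still resolves the '20200501.en' dump; newer dump dates may not exist).
#
#   pip install "datasets==1.18" apache_beam mwparserfromhell

import datasets

# Request the older, pre-processed dump instead of '20220301.en'
wiki = datasets.load_dataset("wikipedia", "20200501.en")
print(wiki)
```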
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5286/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5286/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5285/comments
https://api.github.com/repos/huggingface/datasets/issues/5285/events
https://github.com/huggingface/datasets/pull/5285
1,461,521,215
PR_kwDODunzps5DjLgG
5,285
Save file name in embed_storage
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I updated the tests, met le know if it sounds good to you now :)" ]
2022-11-23T10:55:54
2022-11-24T14:11:41
2022-11-24T14:08:37
MEMBER
null
Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id etc.) Related to https://github.com/huggingface/datasets/issues/5276
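The rationale above (keeping the original file name next to the embedded bytes) can be illustrated with a small, self-contained sketch. It is not the actual `embed_storage` implementation, and the `{"bytes", "path"}` sample layout below is an assumption made for illustration.

```python
# Illustrative only: shows how downstream code can recover a format hint and a
# metadata hint (e.g. a track id) once the file name is preserved alongside
# the embedded bytes.
import os

def describe_embedded_file(sample: dict) -> str:
    # assumed sample layout: {"bytes": b"...", "path": "track_0042.mp3"}
    path = sample.get("path") or ""
    ext = os.path.splitext(path)[1].lower()             # e.g. ".mp3"
    stem = os.path.splitext(os.path.basename(path))[0]  # e.g. "track_0042"
    return f"format hint: {ext or 'unknown'}, id hint: {stem or 'n/a'}"

print(describe_embedded_file({"bytes": b"", "path": "track_0042.mp3"}))
```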
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5285/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5285", "html_url": "https://github.com/huggingface/datasets/pull/5285", "diff_url": "https://github.com/huggingface/datasets/pull/5285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5285.patch", "merged_at": "2022-11-24T14:08:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/5284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5284/comments
https://api.github.com/repos/huggingface/datasets/issues/5284/events
https://github.com/huggingface/datasets/issues/5284
1,461,519,733
I_kwDODunzps5XHQV1
5,284
Features of IterableDataset set to None by remove column
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "Related to https://github.com/huggingface/datasets/issues/5245", "#self-assign", "Thanks @lhoestq and @alvarobartt!\r\n\r\nThis would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\r\n\r\n_c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377", "> Thanks @lhoestq and @alvarobartt!\n> \n> \n> \n> This would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\n> \n> \n> \n> _c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377\n\nI'm almost done with at least a temporary fix to `rename_column`, `rename_columns`, and `remove_columns`, just trying to figure out how to extend it to the `map` function itself!\n\nI'll probably open the PR for review either tomorrow or Sunday hopefully! Glad I can help you and HuggingFace 🤗 ", "Awesome - thank you so much for this PR @alvarobartt! Is much appreciated!", "@sanchit-gandhi PR is ready and open for review at #5287, but there's still one issue I may need @lhoestq's input :hugs:", "Let us know @sanchit-gandhi if you need a new release of `datasets` soon with this fix included :)", "Thanks for the fix guys! We can direct people to install `datasets` from main if that's easier!" ]
2022-11-23T10:54:59
2022-11-28T15:18:08
2022-11-28T12:53:24
CONTRIBUTOR
null
### Describe the bug

The `remove_column` method of the IterableDataset sets the dataset features to None.

### Steps to reproduce the bug

```python
from datasets import Audio, load_dataset

# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# check original features
print("Original features: ", dataset.features.keys())

# define features to remove: we KEEP audio and text
COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id']

dataset = dataset.remove_columns(COLUMNS_TO_REMOVE)

# check processed features, uh-oh!
print("Processed features: ", dataset.features)

# streaming the first audio sample still works
print("First sample:", next(iter(dataset)))
```

**Print Output:**

```
Original features:  dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])
Processed features:  None
First sample: {'audio': {'path': '2277-149896-0000.flac', 'array': array([ 0.00186157, 0.0005188 , 0.00024414, ..., -0.00097656, -0.00109863, -0.00146484]), 'sampling_rate': 16000}, 'text': "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"}
```

### Expected behavior

The features should be those **not** removed by the `remove_column` method, i.e. audio and text.

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- PyArrow version: 9.0.0
- Pandas version: 1.3.5

(Running on Google Colab for a blog post: https://colab.research.google.com/drive/1ySCQREPZEl4msLfxb79pYYOWjUZhkr9y#scrollTo=8pRDGiVmH2ml)

cc @polinaeterna @lhoestq
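Until the fix from #5287 (referenced in the comments) lands in a release, one possible stop-gap is to compute the expected schema yourself before calling `remove_columns`. The following is a sketch under that assumption, not the library's recommended API.

```python
# Stop-gap sketch: keep a filtered copy of the original features so the
# expected schema is still known even though `dataset.features` becomes None
# after remove_columns() on an IterableDataset.
from datasets import Features, load_dataset

dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

COLUMNS_TO_REMOVE = ["chapter_id", "speaker_id", "file", "id"]
expected_features = Features(
    {name: feat for name, feat in dataset.features.items() if name not in COLUMNS_TO_REMOVE}
)

dataset = dataset.remove_columns(COLUMNS_TO_REMOVE)
print("Expected features:", expected_features)  # still describes audio and text
```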
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5284/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5283/comments
https://api.github.com/repos/huggingface/datasets/issues/5283/events
https://github.com/huggingface/datasets/pull/5283
1,460,291,003
PR_kwDODunzps5De5M1
5,283
Release: 2.6.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-22T17:36:24
2022-11-22T17:50:12
2022-11-22T17:47:02
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5283/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5283", "html_url": "https://github.com/huggingface/datasets/pull/5283", "diff_url": "https://github.com/huggingface/datasets/pull/5283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5283.patch", "merged_at": "2022-11-22T17:47:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/5282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5282/comments
https://api.github.com/repos/huggingface/datasets/issues/5282/events
https://github.com/huggingface/datasets/pull/5282
1,460,238,928
PR_kwDODunzps5Det2_
5,282
Release: 2.7.1
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-11-22T16:58:54
2022-11-22T17:21:28
2022-11-22T17:21:27
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5282/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5282", "html_url": "https://github.com/huggingface/datasets/pull/5282", "diff_url": "https://github.com/huggingface/datasets/pull/5282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5282.patch", "merged_at": "2022-11-22T17:21:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/5281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5281/comments
https://api.github.com/repos/huggingface/datasets/issues/5281/events
https://github.com/huggingface/datasets/issues/5281
1,459,930,271
I_kwDODunzps5XBMSf
5,281
Support cloud storage in load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Or for example an archive on GitHub releases! Before I added support for JXL (locally only, PR still pending) I was considering hosting my files on GitHub instead...", "+1 to this. I would like to use 'audiofolder' with a data_dir that's on S3, for example. I don't want to upload my dataset to the Hub, but I would find all the fingerprinting/caching features useful.", "Adding to the conversation, Dask also uses `fsspec` for this feature.\r\n\r\n[Dask: How to connect to remote data](https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html)\r\n\r\nHappy to help on this feature :D " ]
2022-11-22T14:00:10
2022-12-12T15:00:53
null
MEMBER
null
Would be nice to be able to do

```python
data_files=["s3://..."]
storage_options = {...}
load_dataset(..., data_files=data_files, storage_options=storage_options)
```

or even

```python
load_dataset("gs://...")
```

The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`.

This has been requested several times already. Some users want to use their data from private cloud storage to train models.

related:
https://github.com/huggingface/datasets/issues/3490
https://github.com/huggingface/datasets/issues/5244
[forum](https://discuss.huggingface.co/t/how-to-use-s3-path-with-load-dataset-with-streaming-true/25739/2)
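While native `fsspec` support in `load_dataset` is pending, one workaround some users rely on is to read the remote file with pandas (which already goes through `fsspec`/`s3fs`) and convert the result. This is a sketch with placeholder bucket, file name, and credentials, not the requested feature itself.

```python
# Stop-gap sketch (assumes a Parquet file on S3, s3fs installed, and
# placeholder credentials/paths below).
import pandas as pd
from datasets import Dataset

storage_options = {"key": "<aws-access-key>", "secret": "<aws-secret-key>"}  # s3fs options
df = pd.read_parquet("s3://my-bucket/my-data.parquet", storage_options=storage_options)
ds = Dataset.from_pandas(df)
print(ds)
```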
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5281/reactions", "total_count": 9, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5281/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5280/comments
https://api.github.com/repos/huggingface/datasets/issues/5280/events
https://github.com/huggingface/datasets/issues/5280
1,459,823,179
I_kwDODunzps5XAyJL
5,280
Import error
{ "login": "feketedavid1012", "id": 40760055, "node_id": "MDQ6VXNlcjQwNzYwMDU1", "avatar_url": "https://avatars.githubusercontent.com/u/40760055?v=4", "gravatar_id": "", "url": "https://api.github.com/users/feketedavid1012", "html_url": "https://github.com/feketedavid1012", "followers_url": "https://api.github.com/users/feketedavid1012/followers", "following_url": "https://api.github.com/users/feketedavid1012/following{/other_user}", "gists_url": "https://api.github.com/users/feketedavid1012/gists{/gist_id}", "starred_url": "https://api.github.com/users/feketedavid1012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/feketedavid1012/subscriptions", "organizations_url": "https://api.github.com/users/feketedavid1012/orgs", "repos_url": "https://api.github.com/users/feketedavid1012/repos", "events_url": "https://api.github.com/users/feketedavid1012/events{/privacy}", "received_events_url": "https://api.github.com/users/feketedavid1012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?", "Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nHi ! Can you\n\nimport platform\nprint(platform.python_version())\n\nto see that it returns ?\n\n—\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323691385>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F5YGG32W6WABYC25NJTWJTD75ANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n", "Then it should work as expected if you use the same python when using `datasets`\r\n\r\nPlease make sure you're running your code in the right environment", "It's the right environment. But in if statement I have\n\"3.8.13\" < 3.7\nAnd in the error message is Python>=3.7 which is true in my case (3.8.13 is greater then 3.7), so I don't understand my python should be below the 3.7 which case the if statement is right, but the message is wrong, or above 3.7 which case if statement is wrong, error message is right.\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:41:43 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nThen it should work as expected if you use the same python when using datasets\n\nPlease make sure you're running your code in the right environment\n\n—\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323697094>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F54JURTAJJWWDO2QGI3WJTERPANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n", "If you're having an error then you're not running your code in the right environment." ]
2022-11-22T12:56:43
2022-12-15T19:57:40
2022-12-15T19:57:40
NONE
null
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the above line. I have Python version 3.8.13, and the message says I need Python>=3.7, which is true, so I think either the if statement is not working properly or the message is wrong.
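The maintainer's diagnosis in the comments is that the failing script is not running under the Python the reporter thinks it is. A quick check (a sketch, nothing datasets-specific) is to print the interpreter path and version from the same process that raises the error:

```python
# Run this in the exact environment where `import datasets` fails.
import platform
import sys

print(sys.executable)             # which interpreter is actually executing
print(platform.python_version())  # the version the import-time guard sees
```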
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5280/timeline
null
completed
null
null
false