url (string, 61-61) | repository_url (string, 1 class) | labels_url (string, 75-75) | comments_url (string, 70-70) | events_url (string, 68-68) | html_url (string, 49-51) | id (int64, 1.2B-1.82B) | node_id (string, 18-19) | number (int64, 4.13k-6.08k) | title (string, 1-290) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 3 classes) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, 2-33.9k) | reactions (dict) | timeline_url (string, 70-70) | performed_via_github_app (null) | state_reason (string, 3 classes) | is_pull_request (bool, 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5870/comments | https://api.github.com/repos/huggingface/datasets/issues/5870/events | https://github.com/huggingface/datasets/issues/5870 | 1,712,156,282 | I_kwDODunzps5mDW56 | 5,870 | Behaviour difference between datasets.map and IterableDatasets.map | {
"login": "llStringll",
"id": 30209072,
"node_id": "MDQ6VXNlcjMwMjA5MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llStringll",
"html_url": "https://github.com/llStringll",
"followers_url": "https://api.github.com/users/llStringll/followers",
"following_url": "https://api.github.com/users/llStringll/following{/other_user}",
"gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llStringll/subscriptions",
"organizations_url": "https://api.github.com/users/llStringll/orgs",
"repos_url": "https://api.github.com/users/llStringll/repos",
"events_url": "https://api.github.com/users/llStringll/events{/privacy}",
"received_events_url": "https://api.github.com/users/llStringll/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"PS - some work is definitely needed for 'special cases' docs, not explanations, just usages of 'functions' under mixture of special cases, like a combination of custom databuilder + iterable dataset for large size + dynamic .map() application."
] | 2023-05-16T14:32:57 | 2023-05-16T14:36:05 | null | NONE | null | null | null | ### Describe the bug
All the examples throughout the Hugging Face `datasets` docs correspond to the map-style `Dataset` object, not to `IterableDataset`. At some point they may have been in sync, but the code for `datasets` version >=2.9.0 behaves very differently from what the docs describe.
I basically need to `.map()` a transform on images in an iterable dataset, which was made using a custom databuilder config.
This works very well on map-style datasets, but `.map()` fails on `IterableDataset` with the following behaviour:
a KeyError because the "pixel_values" key is not found in the examples object/dict passed into the transform function for map; the same transform works fine with the map-style dataset, even batched.
In iterable style, the object/dict passed into the `.map()` callable is completely different from what is shown in all the examples.
Please look into this. Thank you.
My databuilder class is inherited as such:
```python
def _info(self):
    print("Config: ", self.config.__dict__.keys())
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "labels": datasets.Sequence(datasets.Value("uint16")),
                # "labels_name": datasets.Value("string"),
                # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"),
                "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"),
                "image_s3_path": datasets.Value("string"),
            }
        ),
        supervised_keys=None,
        homepage="none",
        citation="",
    )

def _split_generators(self, dl_manager):
    records_train = list(db.mini_set.find({'split': 'train'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10000]
    records_val = list(db.mini_set.find({'split': 'val'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:1000]
    # print(len(records), self.config.num_shards)
    # shard_size_train = len(records_train)//self.config.num_shards
    # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0,len(records_train),shard_size_train)]
    # shard_size_val = len(records_val)//self.config.num_shards
    # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0,len(records_val),shard_size_val)]
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"records": records_train}  # passing list of records, for sharding to take over
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION, gen_kwargs={"records": records_val}  # passing list of records, for sharding to take over
        ),
    ]

def _generate_examples(self, records):
    # print("Generating examples for [{}] shards".format(len(shards)))
    # initiate_db_connection()
    # records = list(db.mini_set.find({'split': split}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10]
    id_ = 0
    # for records in shards:
    for i, rec in enumerate(records):
        img_local_path = fetch_file(rec['image_s3_path'], self.config.buffer_dir)
        # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze()
        # print(t.shape, type(t), type(t[0][0][0]))
        # sys.exit()
        pvs = np.array(Image.open(img_local_path).resize((1280, 960)))  # image object is wxh, so resize as per that; numpy array of it is hxwxc, transposing to cxwxh
        # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze()
        # print(type(pvs[0][0][0]))
        lblids = self.config.processor.tokenizer('<s_class>' + rec['ocwen_template_name'] + '</s_class>' + '</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0)  # take padding later, as per batch collating
        # print(len(lblids), type(lblids[0]))
        # print(type(pvs), pvs.shape, type(pvs[0][0][0]), type(lblids))
        yield id_, {"labels": lblids, "pixel_values": pvs, "image_s3_path": rec['image_s3_path']}
        id_ += 1
        os.remove(img_local_path)
```
and I load it inside my trainer script as such:
`ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() fails`
or also as
`ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset`
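For illustration, a minimal sketch of the failing pattern (the transform body below is just a placeholder; the real transform operates on the images):
```python
import numpy as np
from datasets import load_dataset, load_from_disk

def transform(examples):
    # KeyError is raised here in the streaming case: "pixel_values" is missing
    examples["pixel_values"] = [np.flip(pv, axis=-1) for pv in examples["pixel_values"]]
    return examples

ds = load_from_disk('/tmp/DonutDS/dataset/')  # map-style dataset
ds = ds.map(transform, batched=True)          # works

ids = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True)  # iterable dataset
ids = ids.map(transform, batched=True)
next(iter(ids))  # fails as described above
```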
Thank you to the team for having such a great library, and for this bug fix in advance!
### Steps to reproduce the bug
The config above allows one to reproduce the bug.
### Expected behavior
`.map()` should behave consistently between map-style and iterable-style datasets, or at least the docs should address iterable-style behaviour with examples; otherwise I honestly don't see the point of such docs.
### Environment info
datasets==2.9.0
transformers==4.26.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5870/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5869/comments | https://api.github.com/repos/huggingface/datasets/issues/5869/events | https://github.com/huggingface/datasets/issues/5869 | 1,711,990,003 | I_kwDODunzps5mCuTz | 5,869 | Image Encoding Issue when submitting a Parquet Dataset | {
"login": "PhilippeMoussalli",
"id": 47530815,
"node_id": "MDQ6VXNlcjQ3NTMwODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippeMoussalli",
"html_url": "https://github.com/PhilippeMoussalli",
"followers_url": "https://api.github.com/users/PhilippeMoussalli/followers",
"following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs",
"repos_url": "https://api.github.com/users/PhilippeMoussalli/repos",
"events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @PhilippeMoussalli thanks for opening a detailed issue. It seems the issue is more related to the `datasets` library so I'll ping @lhoestq @mariosasko on this one :) \n\n(edit: also can one of you move the issue to the datasets repo? Thanks in advance π)",
"Hi ! The `Image()` info is stored in the **schema metadata**. More precisely there should be a \"huggingface\" field in the schema metadata that contains the `datasets` feature type of each column.\r\n\r\nTo fix your issue, you can use the same schema as the original Parquet files to write the new ones. You can also get the schema with metadata from a `Features` object, e.g.\r\n\r\n```python\r\nfrom datasets import Features, Image, Value\r\n\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\nprint(schema.metadata)\r\n# {b'huggingface': b'{\"info\": {\"features\": {\"image\": {\"_type\": \"Image\"}, \"text\": {\"dtype\": \"string\", \"_type\": \"Value\"}}}}'}\r\n```",
"It appears that the parquet files at `hf://datasets/lambdalabs/pokemon-blip-captions` don't have this metadata, and it is defined in the dataset_infos.json instead (legacy).\r\n\r\nYou can get the right schema with the HF metadata this way:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nfeatures = load_dataset_builder(\"lambdalabs/pokemon-blip-captions\").info.features\r\nschema = features.arrow_schema\r\n```",
"Btw in the future we might add support for an dedicated Image extension type in Arrow so that you won't need to add the schema metadata anymore ;)",
"Thanks @Wauplin @lhoestq for the quick reply :)! \r\n\r\nI tried your approach by passing the huggingface schema to the dask writer \r\n\r\n```\r\nfrom datasets import Features, Image, Value\r\ndf = dd.read_parquet(f\"hf://datasets/lambdalabs/pokemon-blip-captions\",index=False)\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf://datasets/philippemo/dummy_dataset/data\", schema=schema)\r\n```\r\nAt first it didn't work as I was not able to visualize the images, so then I manually added the `dataset_infos.json` from the example dataset and it worked :)\r\n\r\nHowever, It's not very ideal since there are some metadata in that file that need to be computed in order to load the data properly such as `num_of_bytes` and `num_examples` which might be unknown in my use case. \r\n\r\n![Screenshot from 2023-05-16 16-54-55](https://github.com/huggingface/datasets/assets/47530815/b2b448d2-d3d8-43a7-9682-9c0187a5192b)\r\n\r\nDo you have any pointers there? you mentioned that `datasets_info.json` will be deprecated/legacy. Could you point me to some example image datasets on the hub that are stored as parquet and don't have the `datasets_info.json`?\r\n\r\n",
"You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;)\r\nI could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n\r\nWhat made you think it didn't work ?",
"> You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;) I could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n> \r\n> What made you think it didn't work ?\r\n\r\nThose are two identical dataset repos where both were pushed with dask with the specified schema you mentioned above. I then uploaded the `dataset_infos.json` manually taken from the original example dataset into one of them. \r\n\r\n* **With schema**: https://huggingface.co/datasets/philippemo/dummy_dataset_with_schema\r\n* **Without schema**: https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nYou can see that in the examples without schema the images fail to render properly. When loaded with `datasets` they return an dict and not a Pillow Image ",
"I see ! I think it's a bug on our side - it should work without the metadata - let me investigate",
"Alright, it's fixed: https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nIt shows the image correctly now - even without the extra metadata :)",
"Thanks @lhoestq! \r\nI tested pushing a dataset again without the metadata and it works perfectly! \r\nI appreciate the help",
"Hi @lhoestq, \r\n\r\nI'v tried pushing another dataset again and I think the issue reappeared again: \r\n\r\n```\r\ndf = dd.read_parquet(f\"hf://datasets/lambdalabs/pokemon-blip-captions\")\r\nfeatures = datasets.Features({\"image\": datasets.Image(), \"text\": datasets.Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf://datasets/philippemo/dummy_dataset_without_schema_12_06/data\", schema=schema)\r\n```\r\n\r\nHere is the dataset: \r\n https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema_12_06\r\nThe one that was working 2 weeks ago still seems to be intact though, it might be that It rendered properly when it was initially submitted and after this something was reverted from your side:\r\nhttps://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nIt's weird because nothing really changed from the implementation, might be another issue in the hub backend. Do you have any pointers on how to resolve this? ",
"We're doing some changes in the way we're handling image parquet datasets right now. We'll include the fix from https://github.com/huggingface/datasets/pull/5921 in the new datasets-server version in the coming days",
"alright thanks for the update :), would that be part of the new release of datasets or is it something separate? if so, where can I track it? ",
"Once the new version of `datasets` is released (tomorrow probably) we'll open an issue on https://github.com/huggingface/datasets-server to update to this version :)",
"Alright we did the update :) This is fixed for good now",
"Yes thanks πππ"
] | 2023-05-16T09:42:58 | 2023-06-16T12:48:38 | 2023-06-16T09:30:48 | NONE | null | null | null | ### Describe the bug
Hello,
I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:
We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet:
```
import dask.dataframe as dd
df = dd.read_parquet("hf://datasets/lambdalabs/pokemon-blip-captions",index=False)
```
In this dataset, the "image" column is represented as a dictionary/struct with the format:
```
df = df.compute()
df["image"].iloc[0].keys()
-> dict_keys(['bytes', 'path'])
```
I think this is the format the [`Image`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Image) feature from `datasets` uses to encode images into a form suitable for Arrow.
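For instance, one record can be decoded back into a PIL image manually; this is just a hypothetical check, but the keys come from the struct above:
```python
import io
from PIL import Image

record = df["image"].iloc[0]                     # {'bytes': ..., 'path': ...}
image = Image.open(io.BytesIO(record["bytes"]))  # the raw bytes decode back to an image
```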
The next step was to push the dataset to a repository that I created:
```
dd.to_parquet(dask_df, path = "hf://datasets/philippemo/dummy_dataset/data")
```
However, after pushing the dataset using Dask, the "image" column is now represented as the encoded dictionary `(['bytes', 'path'])`, and the images are not properly visualized. You can find the dataset here: [Link to the problematic dataset](https://huggingface.co/datasets/philippemo/dummy_dataset).
It's worth noting that both the original dataset and the one submitted with Dask have the same schema with minor alterations related to metadata:
**[ Schema of original dummy example.](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/blob/main/data/train-00000-of-00001-566cc9b19d7203f8.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
**[ Schema of pushed dataset with dask](https://huggingface.co/datasets/philippemo/dummy_dataset/blob/main/data/part.0.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
This issue seems to be related to an encoding step that occurs when pushing a dataset to the Hub. Normally, the data would be represented as an HF dataset before pushing, but we are working with a use case where we need to push large datasets using Dask.
Could you please provide clarification on how to resolve this issue?
Thank you!
### Reproduction
To get the schema, I downloaded the Parquet files and used `pyarrow.parquet` to read the schema:
```
import pyarrow.parquet
pyarrow.parquet.read_schema(<path_to_parquet>, memory_map=True)
```
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.14.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/philippe/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: philippemo
- Configured git credential helpers: cache
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- gradio: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/philippe/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/philippe/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/philippe/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5869/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5868/comments | https://api.github.com/repos/huggingface/datasets/issues/5868/events | https://github.com/huggingface/datasets/issues/5868 | 1,711,173,098 | I_kwDODunzps5l_m3q | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | {
"login": "zyh3826",
"id": 31238754,
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyh3826",
"html_url": "https://github.com/zyh3826",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Arrow files/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.",
"> \r\n\r\nGot it, thanks for your reply"
] | 2023-05-16T03:45:42 | 2023-05-17T11:21:36 | 2023-05-17T11:21:36 | NONE | null | null | null | ### Feature request
Hi,
I have a huge file (over 500GB) cached using `map`, and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating the cache, given that `map` takes over 24 hours?
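For concreteness, a hypothetical sketch of the kind of call involved (the path, column name, and edit are placeholders):
```python
from datasets import load_from_disk

ds = load_from_disk("/path/to/cached_dataset")  # hypothetical path to the ~500GB Arrow cache

def change_attribute(example):
    example["label"] = example["label"] + 1  # placeholder per-element edit
    return example

ds = ds.map(change_attribute)  # re-writes the entire cache; takes over 24 hours here
```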
### Motivation
For large datasets, I think this is very important, because we regularly face the problem of needing to change something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5868/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5867/comments | https://api.github.com/repos/huggingface/datasets/issues/5867/events | https://github.com/huggingface/datasets/pull/5867 | 1,710,656,067 | PR_kwDODunzps5QizOn | 5,867 | Add logic for hashing modules/functions optimized with `torch.compile` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004565 / 0.011008 (-0.006443) | 0.099063 / 0.038508 (0.060555) | 0.028334 / 0.023109 (0.005225) | 0.323539 / 0.275898 (0.047641) | 0.372462 / 0.323480 (0.048982) | 0.005120 / 0.007986 (-0.002865) | 0.004797 / 0.004328 (0.000468) | 0.076862 / 0.004250 (0.072611) | 0.038021 / 0.037052 (0.000968) | 0.337801 / 0.258489 (0.079312) | 0.374601 / 0.293841 (0.080760) | 0.031158 / 0.128546 (-0.097389) | 0.011672 / 0.075646 (-0.063974) | 0.324913 / 0.419271 (-0.094359) | 0.051702 / 0.043533 (0.008169) | 0.339440 / 0.255139 (0.084301) | 0.372502 / 0.283200 (0.089303) | 0.097590 / 0.141683 (-0.044093) | 1.534238 / 1.452155 (0.082083) | 1.599701 / 1.492716 (0.106985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204101 / 0.018006 (0.186095) | 0.416981 / 0.000490 (0.416491) | 0.003436 / 0.000200 (0.003236) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023527 / 0.037411 (-0.013885) | 0.095748 / 0.014526 (0.081222) | 0.104498 / 0.176557 (-0.072059) | 0.164000 / 0.737135 (-0.573135) | 0.109170 / 0.296338 (-0.187168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418239 / 0.215209 (0.203030) | 4.153959 / 2.077655 (2.076305) | 1.856687 / 1.504120 (0.352567) | 1.657818 / 1.541195 (0.116623) | 1.715146 / 1.468490 
(0.246656) | 0.700673 / 4.584777 (-3.884103) | 3.401060 / 3.745712 (-0.344652) | 2.891045 / 5.269862 (-2.378816) | 1.519433 / 4.565676 (-3.046243) | 0.083151 / 0.424275 (-0.341124) | 0.012352 / 0.007607 (0.004745) | 0.523901 / 0.226044 (0.297856) | 5.288871 / 2.268929 (3.019943) | 2.322806 / 55.444624 (-53.121818) | 1.982223 / 6.876477 (-4.894253) | 2.074883 / 2.142072 (-0.067189) | 0.812400 / 4.805227 (-3.992827) | 0.152183 / 6.500664 (-6.348481) | 0.066538 / 0.075469 (-0.008931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223220 / 1.841788 (-0.618567) | 14.024391 / 8.074308 (5.950083) | 14.166657 / 10.191392 (3.975265) | 0.146017 / 0.680424 (-0.534407) | 0.016698 / 0.534201 (-0.517503) | 0.380779 / 0.579283 (-0.198504) | 0.387113 / 0.434364 (-0.047251) | 0.446329 / 0.540337 (-0.094009) | 0.523819 / 1.386936 (-0.863118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006803 / 0.011353 (-0.004549) | 0.004554 / 0.011008 (-0.006454) | 0.077406 / 0.038508 (0.038897) | 0.028495 / 0.023109 (0.005386) | 0.358847 / 0.275898 (0.082949) | 0.393256 / 0.323480 (0.069776) | 0.005317 / 0.007986 (-0.002669) | 0.004690 / 0.004328 (0.000362) | 0.075842 / 0.004250 (0.071592) | 0.041985 / 0.037052 (0.004933) | 0.367546 / 0.258489 (0.109057) | 0.408019 / 0.293841 (0.114178) | 0.030712 / 0.128546 (-0.097834) | 0.011756 / 0.075646 (-0.063891) | 0.086002 / 0.419271 (-0.333269) | 0.038949 / 0.043533 (-0.004583) | 0.361045 / 0.255139 (0.105906) | 0.381728 / 0.283200 (0.098528) | 0.090692 / 0.141683 (-0.050991) | 1.493251 / 1.452155 (0.041097) | 1.584566 / 1.492716 (0.091850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217470 / 0.018006 (0.199463) | 0.429955 / 0.000490 (0.429465) | 0.000394 / 0.000200 (0.000194) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026223 / 0.037411 (-0.011189) | 0.102570 / 0.014526 (0.088045) | 0.110848 / 0.176557 (-0.065709) | 0.162413 / 0.737135 (-0.574722) | 0.114579 / 0.296338 (-0.181760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464957 / 0.215209 (0.249748) | 4.656597 / 2.077655 (2.578942) | 2.279755 / 1.504120 (0.775636) | 2.230263 / 1.541195 (0.689068) | 2.341540 / 1.468490 (0.873050) | 0.699505 / 4.584777 (-3.885272) | 3.389003 / 3.745712 (-0.356709) | 1.867526 / 5.269862 (-3.402336) | 1.167171 / 4.565676 (-3.398506) | 0.083451 / 0.424275 (-0.340824) | 0.012348 / 0.007607 (0.004741) | 0.584205 / 0.226044 (0.358161) | 5.853623 / 2.268929 (3.584694) | 2.646650 / 55.444624 (-52.797974) | 2.286504 / 6.876477 (-4.589973) | 2.327536 / 2.142072 (0.185464) | 0.811209 / 4.805227 (-3.994018) | 0.151842 / 6.500664 (-6.348822) | 0.067783 / 0.075469 (-0.007686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330427 / 1.841788 (-0.511360) | 14.668981 / 8.074308 (6.594673) | 13.321154 / 10.191392 (3.129762) | 0.164383 / 0.680424 (-0.516040) | 0.016667 / 0.534201 (-0.517534) | 0.383439 / 0.579283 (-0.195844) | 0.392988 / 0.434364 (-0.041376) | 0.443318 / 0.540337 (-0.097020) | 0.537849 / 1.386936 (-0.849087) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e99bd4583bd636074b1826e2d0581161807480f1 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006379 / 0.011353 (-0.004974) | 0.004691 / 0.011008 (-0.006317) | 0.098047 / 0.038508 (0.059539) | 0.028126 / 0.023109 (0.005017) | 0.327143 / 0.275898 (0.051245) | 0.362482 / 0.323480 (0.039002) | 0.004953 / 0.007986 (-0.003033) | 0.003386 / 0.004328 (-0.000943) | 0.076222 / 0.004250 (0.071971) | 0.037583 / 0.037052 (0.000531) | 0.329661 / 0.258489 (0.071172) | 0.365945 / 0.293841 (0.072104) | 0.030455 / 0.128546 (-0.098091) | 0.011397 / 0.075646 (-0.064249) | 0.323889 / 0.419271 (-0.095383) | 0.043719 / 0.043533 (0.000186) | 0.331499 / 0.255139 (0.076360) | 0.359357 / 0.283200 (0.076158) | 0.088904 / 0.141683 (-0.052779) | 1.458584 / 1.452155 (0.006429) | 1.549375 / 1.492716 (0.056658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195808 / 0.018006 (0.177802) | 0.411148 / 0.000490 (0.410659) | 0.003602 / 0.000200 (0.003402) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023278 / 0.037411 (-0.014133) | 0.097317 / 0.014526 (0.082791) | 0.102669 / 0.176557 (-0.073888) | 0.168203 / 0.737135 (-0.568933) | 0.105205 / 0.296338 (-0.191133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424800 / 0.215209 (0.209591) | 4.228444 / 2.077655 (2.150790) | 1.895544 / 1.504120 (0.391424) | 1.698793 / 1.541195 (0.157598) | 1.717931 / 1.468490 
(0.249441) | 0.702251 / 4.584777 (-3.882526) | 3.407013 / 3.745712 (-0.338699) | 2.784634 / 5.269862 (-2.485228) | 1.491317 / 4.565676 (-3.074359) | 0.082926 / 0.424275 (-0.341350) | 0.012320 / 0.007607 (0.004713) | 0.524188 / 0.226044 (0.298143) | 5.249798 / 2.268929 (2.980870) | 2.358953 / 55.444624 (-53.085672) | 1.985922 / 6.876477 (-4.890555) | 2.034293 / 2.142072 (-0.107779) | 0.815671 / 4.805227 (-3.989556) | 0.152583 / 6.500664 (-6.348081) | 0.066687 / 0.075469 (-0.008782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210901 / 1.841788 (-0.630886) | 13.621765 / 8.074308 (5.547457) | 14.213215 / 10.191392 (4.021823) | 0.143346 / 0.680424 (-0.537078) | 0.016904 / 0.534201 (-0.517297) | 0.379795 / 0.579283 (-0.199489) | 0.381287 / 0.434364 (-0.053077) | 0.449086 / 0.540337 (-0.091251) | 0.538792 / 1.386936 (-0.848144) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006207 / 0.011353 (-0.005146) | 0.004404 / 0.011008 (-0.006604) | 0.076363 / 0.038508 (0.037854) | 0.027335 / 0.023109 (0.004226) | 0.370967 / 0.275898 (0.095069) | 0.401936 / 0.323480 (0.078456) | 0.004835 / 0.007986 (-0.003151) | 0.004559 / 0.004328 (0.000231) | 0.074964 / 0.004250 (0.070713) | 0.038254 / 0.037052 (0.001202) | 0.374799 / 0.258489 (0.116310) | 0.425191 / 0.293841 (0.131350) | 0.035290 / 0.128546 (-0.093256) | 0.011379 / 0.075646 (-0.064267) | 0.085911 / 0.419271 (-0.333360) | 0.043073 / 0.043533 (-0.000460) | 0.373557 / 0.255139 (0.118418) | 0.395179 / 0.283200 (0.111979) | 0.098602 / 0.141683 (-0.043081) | 1.467234 / 1.452155 (0.015079) | 1.571868 / 1.492716 (0.079152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221848 / 0.018006 (0.203842) | 0.394943 / 0.000490 (0.394454) | 0.002983 / 0.000200 (0.002783) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024385 / 0.037411 (-0.013027) | 0.100087 / 0.014526 (0.085561) | 0.104897 / 0.176557 (-0.071660) | 0.156150 / 0.737135 (-0.580985) | 0.109113 / 0.296338 (-0.187226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441995 / 0.215209 (0.226786) | 4.415423 / 2.077655 (2.337769) | 2.148791 / 1.504120 (0.644671) | 1.947061 / 1.541195 (0.405866) | 1.954807 / 1.468490 (0.486317) | 0.690245 / 4.584777 (-3.894532) | 3.372766 / 3.745712 (-0.372946) | 1.851073 / 5.269862 (-3.418789) | 1.155558 / 4.565676 (-3.410118) | 0.082796 / 0.424275 (-0.341479) | 0.012845 / 0.007607 (0.005238) | 0.548173 / 0.226044 (0.322129) | 5.530984 / 2.268929 (3.262056) | 2.665360 / 55.444624 (-52.779264) | 2.324266 / 6.876477 (-4.552211) | 2.329397 / 2.142072 (0.187324) | 0.801481 / 4.805227 (-4.003746) | 0.152145 / 6.500664 (-6.348519) | 0.067915 / 0.075469 (-0.007554) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291488 / 1.841788 (-0.550299) | 13.912143 / 8.074308 (5.837835) | 12.975493 / 10.191392 (2.784101) | 0.129915 / 0.680424 (-0.550509) | 0.016516 / 0.534201 (-0.517685) | 0.386979 / 0.579283 (-0.192304) | 0.389163 / 0.434364 (-0.045201) | 0.443324 / 0.540337 (-0.097014) | 0.533744 / 1.386936 (-0.853192) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eb48834fc2aa45cad73fe70a7ecaa0dd6015b8d0 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5867). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002717) | 0.006014 / 0.011008 (-0.004995) | 0.116314 / 0.038508 (0.077806) | 0.041113 / 0.023109 (0.018004) | 0.358564 / 0.275898 (0.082666) | 0.397547 / 0.323480 (0.074067) | 0.007012 / 0.007986 (-0.000974) | 0.004638 / 0.004328 (0.000310) | 0.086509 / 0.004250 (0.082259) | 0.056731 / 0.037052 (0.019678) | 0.358859 / 0.258489 (0.100370) | 0.425339 / 0.293841 (0.131498) | 0.041780 / 0.128546 (-0.086767) | 0.014203 / 0.075646 (-0.061443) | 0.398240 / 0.419271 (-0.021031) | 0.060180 / 0.043533 (0.016647) | 0.352887 / 0.255139 (0.097748) | 0.381793 / 0.283200 (0.098594) | 0.148578 / 0.141683 (0.006895) | 1.749483 / 1.452155 (0.297328) | 1.869765 / 1.492716 (0.377049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244435 / 0.018006 (0.226428) | 0.499545 / 0.000490 (0.499055) | 0.004576 / 0.000200 (0.004376) | 0.000147 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031163 / 0.037411 (-0.006249) | 0.131082 / 0.014526 (0.116556) | 0.137442 / 0.176557 (-0.039114) | 0.203783 / 0.737135 (-0.533352) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503587 / 0.215209 (0.288378) | 5.011953 / 2.077655 (2.934299) | 2.366968 / 1.504120 (0.862848) | 2.130914 / 1.541195 (0.589719) | 2.243560 / 1.468490 
(0.775070) | 0.856719 / 4.584777 (-3.728058) | 4.707445 / 3.745712 (0.961733) | 2.506166 / 5.269862 (-2.763696) | 1.590400 / 4.565676 (-2.975277) | 0.102075 / 0.424275 (-0.322200) | 0.014499 / 0.007607 (0.006892) | 0.624966 / 0.226044 (0.398922) | 6.197671 / 2.268929 (3.928742) | 2.898481 / 55.444624 (-52.546143) | 2.499590 / 6.876477 (-4.376886) | 2.649690 / 2.142072 (0.507617) | 1.012542 / 4.805227 (-3.792685) | 0.202833 / 6.500664 (-6.297831) | 0.078033 / 0.075469 (0.002564) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.448321 / 1.841788 (-0.393467) | 18.084909 / 8.074308 (10.010601) | 17.383027 / 10.191392 (7.191635) | 0.212167 / 0.680424 (-0.468256) | 0.020754 / 0.534201 (-0.513447) | 0.514653 / 0.579283 (-0.064630) | 0.543307 / 0.434364 (0.108944) | 0.653066 / 0.540337 (0.112728) | 0.745773 / 1.386936 (-0.641164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008576 / 0.011353 (-0.002777) | 0.005834 / 0.011008 (-0.005174) | 0.089842 / 0.038508 (0.051334) | 0.040035 / 0.023109 (0.016926) | 0.449329 / 0.275898 (0.173431) | 0.471572 / 0.323480 (0.148092) | 0.006771 / 0.007986 (-0.001215) | 0.006129 / 0.004328 (0.001800) | 0.090370 / 0.004250 (0.086119) | 0.056924 / 0.037052 (0.019872) | 0.455134 / 0.258489 (0.196645) | 0.502670 / 0.293841 (0.208829) | 0.041689 / 0.128546 (-0.086857) | 0.014447 / 0.075646 (-0.061200) | 0.104528 / 0.419271 (-0.314744) | 0.055535 / 0.043533 (0.012003) | 0.450667 / 0.255139 (0.195528) | 0.453108 / 0.283200 (0.169908) | 0.119296 / 0.141683 (-0.022387) | 1.747359 / 1.452155 (0.295204) | 1.839421 / 1.492716 (0.346705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314910 / 0.018006 (0.296904) | 0.495575 / 0.000490 (0.495085) | 0.054702 / 0.000200 (0.054503) | 0.000505 / 0.000054 (0.000450) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033991 / 0.037411 (-0.003420) | 0.133268 / 0.014526 (0.118742) | 0.142286 / 0.176557 (-0.034271) | 0.200562 / 0.737135 (-0.536573) | 0.147161 / 0.296338 (-0.149178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520288 / 0.215209 (0.305079) | 5.227684 / 2.077655 (3.150029) | 2.553330 / 1.504120 (1.049210) | 2.324338 / 1.541195 (0.783143) | 2.406790 / 1.468490 (0.938300) | 0.850404 / 4.584777 (-3.734373) | 4.612156 / 3.745712 (0.866444) | 2.592546 / 5.269862 (-2.677316) | 1.708984 / 4.565676 (-2.856692) | 0.103751 / 0.424275 (-0.320524) | 0.014379 / 0.007607 (0.006772) | 0.634661 / 0.226044 (0.408616) | 6.344939 / 2.268929 (4.076010) | 3.179807 / 55.444624 (-52.264817) | 2.831856 / 6.876477 (-4.044621) | 2.866729 / 2.142072 (0.724656) | 0.994519 / 4.805227 (-3.810708) | 0.201566 / 6.500664 (-6.299098) | 0.078902 / 0.075469 (0.003433) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538738 / 1.841788 (-0.303049) | 18.746367 / 8.074308 (10.672059) | 16.504763 / 10.191392 (6.313371) | 0.197898 / 0.680424 (-0.482526) | 0.020469 / 0.534201 (-0.513732) | 0.529106 / 0.579283 (-0.050177) | 0.536891 / 0.434364 (0.102527) | 0.600947 / 0.540337 (0.060610) | 0.701713 / 1.386936 (-0.685223) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3054f66b4765a520e6fe165c44a4307d40775229 \"CML watermark\")\n"
] | 2023-05-15T19:03:35 | 2023-05-17T13:41:48 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5867",
"html_url": "https://github.com/huggingface/datasets/pull/5867",
"diff_url": "https://github.com/huggingface/datasets/pull/5867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5867.patch",
"merged_at": null
} | Fix https://github.com/huggingface/datasets/issues/5839
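For context, a hypothetical sketch of the pattern this makes hashable (the module and dataset below are placeholders, not from the linked issue):
```python
import torch
from datasets import Dataset

model = torch.compile(torch.nn.Linear(4, 2))  # a `torch.compile`-optimized module
ds = Dataset.from_dict({"x": [[0.0, 1.0, 2.0, 3.0]]})

# `map` fingerprints the lambda for caching, which requires hashing `model`
ds = ds.map(lambda ex: {"y": model(torch.tensor(ex["x"])).tolist()})
```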
PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5867/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5866/comments | https://api.github.com/repos/huggingface/datasets/issues/5866/events | https://github.com/huggingface/datasets/issues/5866 | 1,710,496,993 | I_kwDODunzps5l9Bzh | 5,866 | Issue with Sequence features | {
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 2023-05-15T17:13:29 | 2023-05-26T11:57:17 | 2023-05-26T11:57:17 | NONE | null | null | null | ### Describe the bug
Sequence features sometimes cause errors when the specified length is not -1.
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset
feats = Features(**{'target': ClassLabel(names=[0, 1]),'x': Sequence(feature=Value(dtype='float64',id=None), length=2, id=None)})
Dataset.from_dict({"target": np.ones(2000).astype(int), "x": np.random.rand(2000,2)},features = feats).flatten_indices()
```
Throws:
```
TypeError: Couldn't cast array of type
fixed_size_list<item: double>[2]
to
Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)
```
The same code works without any issues when `length = -1`
EDIT: The error seems to happen only when the length of the dataset is bigger than 1000 for some reason
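For reference, a sketch of the workaround implied above: dropping the fixed length avoids the cast error.
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset

feats = Features(**{
    "target": ClassLabel(names=[0, 1]),
    "x": Sequence(feature=Value(dtype="float64"), length=-1),  # length=-1: no error
})
Dataset.from_dict(
    {"target": np.ones(2000).astype(int), "x": np.random.rand(2000, 2)},
    features=feats,
).flatten_indices()
```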
### Expected behavior
No exception
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5866/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5865/comments | https://api.github.com/repos/huggingface/datasets/issues/5865/events | https://github.com/huggingface/datasets/pull/5865 | 1,710,455,738 | PR_kwDODunzps5QiHnw | 5,865 | Deprecate task api | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"If it's easy to keep supporting it we can keep it no ? There are many datasets on the hub that implement the tasks templates in dataset scripts and it's maybe easier to keep task templates than opening PRs to those datasets.",
"do we know if people use the tasks api?\r\n\r\nedit: i mean, i'm fine with removing it if it's not used much, especially considering that it's not documented well.",
"@lhoestq \r\n\r\nLess than 80 public datasets (all canonical) implement `task_templates`, so updating them should be easy.\r\n\r\nPS: I skipped gated datasets when checking for the presence of `task_templates`, but it's safe to assume their contribution to the total count is insignificant.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006480 / 0.011353 (-0.004872) | 0.003904 / 0.011008 (-0.007104) | 0.084287 / 0.038508 (0.045779) | 0.071438 / 0.023109 (0.048329) | 0.309823 / 0.275898 (0.033925) | 0.341038 / 0.323480 (0.017558) | 0.005163 / 0.007986 (-0.002822) | 0.003291 / 0.004328 (-0.001037) | 0.064473 / 0.004250 (0.060222) | 0.053385 / 0.037052 (0.016332) | 0.323561 / 0.258489 (0.065072) | 0.346332 / 0.293841 (0.052491) | 0.030588 / 0.128546 (-0.097958) | 0.008342 / 0.075646 (-0.067305) | 0.287205 / 0.419271 (-0.132067) | 0.051953 / 0.043533 (0.008420) | 0.310925 / 0.255139 (0.055786) | 0.344443 / 0.283200 (0.061244) | 0.022754 / 0.141683 (-0.118928) | 1.459648 / 1.452155 (0.007494) | 1.528413 / 1.492716 (0.035697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206404 / 0.018006 (0.188398) | 0.461864 / 0.000490 (0.461374) | 0.004501 / 0.000200 (0.004302) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026891 / 0.037411 (-0.010520) | 0.081206 / 0.014526 (0.066680) | 0.093648 / 0.176557 (-0.082908) | 0.148491 / 0.737135 (-0.588645) | 0.093874 / 0.296338 (-0.202464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401715 / 0.215209 (0.186506) | 4.018597 / 2.077655 (1.940943) | 2.029735 / 1.504120 (0.525615) | 1.860069 / 1.541195 (0.318875) | 1.935712 / 1.468490 
(0.467222) | 0.485896 / 4.584777 (-4.098881) | 3.638177 / 3.745712 (-0.107535) | 5.124058 / 5.269862 (-0.145804) | 3.099666 / 4.565676 (-1.466011) | 0.057173 / 0.424275 (-0.367102) | 0.007240 / 0.007607 (-0.000367) | 0.478758 / 0.226044 (0.252713) | 4.798471 / 2.268929 (2.529543) | 2.502980 / 55.444624 (-52.941645) | 2.170650 / 6.876477 (-4.705827) | 2.381394 / 2.142072 (0.239321) | 0.578766 / 4.805227 (-4.226462) | 0.132342 / 6.500664 (-6.368322) | 0.059759 / 0.075469 (-0.015710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249238 / 1.841788 (-0.592549) | 19.224673 / 8.074308 (11.150365) | 13.786894 / 10.191392 (3.595502) | 0.164633 / 0.680424 (-0.515791) | 0.018065 / 0.534201 (-0.516136) | 0.390589 / 0.579283 (-0.188694) | 0.408993 / 0.434364 (-0.025370) | 0.457001 / 0.540337 (-0.083336) | 0.625327 / 1.386936 (-0.761609) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004007 / 0.011008 (-0.007001) | 0.065239 / 0.038508 (0.026731) | 0.079829 / 0.023109 (0.056719) | 0.400323 / 0.275898 (0.124425) | 0.434158 / 0.323480 (0.110678) | 0.005314 / 0.007986 (-0.002671) | 0.003354 / 0.004328 (-0.000974) | 0.065044 / 0.004250 (0.060794) | 0.060315 / 0.037052 (0.023262) | 0.401513 / 0.258489 (0.143024) | 0.441119 / 0.293841 (0.147278) | 0.031783 / 0.128546 (-0.096763) | 0.008608 / 0.075646 (-0.067038) | 0.071755 / 0.419271 (-0.347517) | 0.048816 / 0.043533 (0.005283) | 0.393896 / 0.255139 (0.138757) | 0.412156 / 0.283200 (0.128956) | 0.024410 / 0.141683 (-0.117272) | 1.515159 / 1.452155 (0.063005) | 1.562217 / 1.492716 (0.069501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229993 / 0.018006 (0.211987) | 0.449898 / 0.000490 (0.449409) | 0.000376 / 0.000200 (0.000176) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030297 / 0.037411 (-0.007115) | 0.086737 / 0.014526 (0.072212) | 0.098312 / 0.176557 (-0.078244) | 0.152890 / 0.737135 (-0.584246) | 0.099335 / 0.296338 (-0.197003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415786 / 0.215209 (0.200577) | 4.137606 / 2.077655 (2.059952) | 2.120082 / 1.504120 (0.615963) | 1.943984 / 1.541195 (0.402789) | 2.040821 / 1.468490 (0.572331) | 0.479273 / 4.584777 (-4.105504) | 3.563854 / 3.745712 (-0.181858) | 3.396071 / 5.269862 (-1.873790) | 2.011302 / 4.565676 (-2.554374) | 0.057202 / 0.424275 (-0.367073) | 0.007338 / 0.007607 (-0.000269) | 0.488378 / 0.226044 (0.262333) | 4.881615 / 2.268929 (2.612686) | 2.669685 / 55.444624 (-52.774939) | 2.258236 / 6.876477 (-4.618241) | 2.343303 / 2.142072 (0.201230) | 0.606762 / 4.805227 (-4.198466) | 0.133190 / 6.500664 (-6.367475) | 0.062971 / 0.075469 (-0.012498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345215 / 1.841788 (-0.496573) | 20.023713 / 8.074308 (11.949405) | 14.555777 / 10.191392 (4.364385) | 0.162388 / 0.680424 (-0.518036) | 0.018528 / 0.534201 (-0.515673) | 0.393055 / 0.579283 (-0.186229) | 0.411820 / 0.434364 (-0.022544) | 0.461705 / 0.540337 (-0.078633) | 0.629395 / 1.386936 (-0.757541) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f54f2ff4c68a00242789e9890e3b46cab320448 \"CML watermark\")\n",
"Ok ! I also know https://huggingface.co/datasets/hf-internal-testing/cats_vs_dogs_sample/blob/main/cats_vs_dogs_sample.py that needs to be updated as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009100 / 0.011353 (-0.002253) | 0.005158 / 0.011008 (-0.005850) | 0.109291 / 0.038508 (0.070782) | 0.086053 / 0.023109 (0.062943) | 0.469859 / 0.275898 (0.193961) | 0.476142 / 0.323480 (0.152662) | 0.006739 / 0.007986 (-0.001247) | 0.005077 / 0.004328 (0.000748) | 0.078193 / 0.004250 (0.073943) | 0.065956 / 0.037052 (0.028904) | 0.490323 / 0.258489 (0.231834) | 0.497418 / 0.293841 (0.203577) | 0.060562 / 0.128546 (-0.067984) | 0.016321 / 0.075646 (-0.059325) | 0.379703 / 0.419271 (-0.039568) | 0.087335 / 0.043533 (0.043802) | 0.488240 / 0.255139 (0.233101) | 0.497391 / 0.283200 (0.214191) | 0.040699 / 0.141683 (-0.100984) | 1.778925 / 1.452155 (0.326770) | 1.856436 / 1.492716 (0.363720) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236428 / 0.018006 (0.218422) | 0.551950 / 0.000490 (0.551460) | 0.007400 / 0.000200 (0.007201) | 0.000120 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028461 / 0.037411 (-0.008950) | 0.093441 / 0.014526 (0.078915) | 0.103868 / 0.176557 (-0.072688) | 0.176269 / 0.737135 (-0.560867) | 0.107760 / 0.296338 (-0.188578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.593382 / 0.215209 (0.378173) | 5.863711 / 2.077655 (3.786057) | 2.493777 / 1.504120 (0.989657) | 2.088547 / 1.541195 (0.547352) | 2.173147 / 1.468490 
(0.704656) | 0.875661 / 4.584777 (-3.709116) | 5.209023 / 3.745712 (1.463310) | 4.483261 / 5.269862 (-0.786600) | 2.843288 / 4.565676 (-1.722388) | 0.098488 / 0.424275 (-0.325787) | 0.008371 / 0.007607 (0.000764) | 0.668413 / 0.226044 (0.442368) | 6.709802 / 2.268929 (4.440873) | 3.132453 / 55.444624 (-52.312172) | 2.428736 / 6.876477 (-4.447741) | 2.560867 / 2.142072 (0.418794) | 0.983550 / 4.805227 (-3.821677) | 0.207072 / 6.500664 (-6.293592) | 0.073786 / 0.075469 (-0.001683) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625871 / 1.841788 (-0.215917) | 23.481015 / 8.074308 (15.406707) | 20.556677 / 10.191392 (10.365285) | 0.238147 / 0.680424 (-0.442277) | 0.029453 / 0.534201 (-0.504748) | 0.464589 / 0.579283 (-0.114695) | 0.599129 / 0.434364 (0.164765) | 0.550146 / 0.540337 (0.009808) | 0.794646 / 1.386936 (-0.592290) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008613 / 0.011353 (-0.002739) | 0.004979 / 0.011008 (-0.006030) | 0.078095 / 0.038508 (0.039587) | 0.080285 / 0.023109 (0.057176) | 0.482881 / 0.275898 (0.206983) | 0.520442 / 0.323480 (0.196962) | 0.006241 / 0.007986 (-0.001744) | 0.003964 / 0.004328 (-0.000364) | 0.080027 / 0.004250 (0.075777) | 0.065209 / 0.037052 (0.028157) | 0.476113 / 0.258489 (0.217623) | 0.535383 / 0.293841 (0.241542) | 0.053084 / 0.128546 (-0.075462) | 0.014284 / 0.075646 (-0.061362) | 0.083859 / 0.419271 (-0.335413) | 0.061024 / 0.043533 (0.017492) | 0.477810 / 0.255139 (0.222671) | 0.508718 / 0.283200 (0.225518) | 0.036602 / 0.141683 (-0.105081) | 1.810422 / 1.452155 (0.358267) | 1.832833 / 1.492716 (0.340117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281443 / 0.018006 (0.263437) | 0.568249 / 0.000490 (0.567760) | 0.000493 / 0.000200 (0.000293) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033302 / 0.037411 (-0.004110) | 0.100433 / 0.014526 (0.085907) | 0.105465 / 0.176557 (-0.071091) | 0.161986 / 0.737135 (-0.575149) | 0.115736 / 0.296338 (-0.180603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.622892 / 0.215209 (0.407683) | 6.144361 / 2.077655 (4.066706) | 2.849443 / 1.504120 (1.345323) | 2.544097 / 1.541195 (1.002902) | 2.579859 / 1.468490 (1.111369) | 0.826078 / 4.584777 (-3.758699) | 5.021808 / 3.745712 (1.276096) | 4.694784 / 5.269862 (-0.575077) | 2.796263 / 4.565676 (-1.769413) | 0.090983 / 0.424275 (-0.333292) | 0.008445 / 0.007607 (0.000838) | 0.744675 / 0.226044 (0.518631) | 7.662989 / 2.268929 (5.394060) | 3.665611 / 55.444624 (-51.779013) | 2.942836 / 6.876477 (-3.933641) | 2.874402 / 2.142072 (0.732329) | 1.010097 / 4.805227 (-3.795130) | 0.218008 / 6.500664 (-6.282656) | 0.087359 / 0.075469 (0.011890) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655631 / 1.841788 (-0.186157) | 23.539596 / 8.074308 (15.465288) | 20.909512 / 10.191392 (10.718120) | 0.202092 / 0.680424 (-0.478332) | 0.029807 / 0.534201 (-0.504394) | 0.487591 / 0.579283 (-0.091692) | 0.573719 / 0.434364 (0.139355) | 0.531168 / 0.540337 (-0.009170) | 0.742375 / 1.386936 (-0.644561) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aa231a7be55c6bca2bede8af4ac6da63c3162116 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006247 / 0.011353 (-0.005106) | 0.003650 / 0.011008 (-0.007358) | 0.079655 / 0.038508 (0.041147) | 0.060279 / 0.023109 (0.037170) | 0.309033 / 0.275898 (0.033135) | 0.338479 / 0.323480 (0.014999) | 0.004651 / 0.007986 (-0.003335) | 0.002849 / 0.004328 (-0.001480) | 0.062852 / 0.004250 (0.058602) | 0.049230 / 0.037052 (0.012178) | 0.312502 / 0.258489 (0.054012) | 0.354558 / 0.293841 (0.060717) | 0.027497 / 0.128546 (-0.101049) | 0.007885 / 0.075646 (-0.067762) | 0.260232 / 0.419271 (-0.159040) | 0.045459 / 0.043533 (0.001926) | 0.311629 / 0.255139 (0.056490) | 0.367806 / 0.283200 (0.084606) | 0.020875 / 0.141683 (-0.120808) | 1.423802 / 1.452155 (-0.028352) | 1.497729 / 1.492716 (0.005013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185629 / 0.018006 (0.167623) | 0.441421 / 0.000490 (0.440931) | 0.004847 / 0.000200 (0.004647) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022428 / 0.037411 (-0.014984) | 0.073375 / 0.014526 (0.058849) | 0.083194 / 0.176557 (-0.093363) | 0.143984 / 0.737135 (-0.593151) | 0.084128 / 0.296338 (-0.212211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397220 / 0.215209 (0.182010) | 3.954394 / 2.077655 (1.876740) | 1.920638 / 1.504120 (0.416518) | 1.744284 / 1.541195 (0.203089) | 1.802623 / 1.468490 
(0.334133) | 0.501988 / 4.584777 (-4.082789) | 3.096071 / 3.745712 (-0.649642) | 4.648267 / 5.269862 (-0.621595) | 2.770995 / 4.565676 (-1.794682) | 0.057513 / 0.424275 (-0.366762) | 0.006315 / 0.007607 (-0.001292) | 0.467683 / 0.226044 (0.241639) | 4.683959 / 2.268929 (2.415031) | 2.384980 / 55.444624 (-53.059645) | 2.030894 / 6.876477 (-4.845583) | 2.148374 / 2.142072 (0.006302) | 0.585142 / 4.805227 (-4.220085) | 0.123173 / 6.500664 (-6.377491) | 0.059140 / 0.075469 (-0.016329) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244707 / 1.841788 (-0.597080) | 18.176043 / 8.074308 (10.101735) | 13.742770 / 10.191392 (3.551378) | 0.149692 / 0.680424 (-0.530732) | 0.016591 / 0.534201 (-0.517610) | 0.342138 / 0.579283 (-0.237145) | 0.353931 / 0.434364 (-0.080433) | 0.392317 / 0.540337 (-0.148020) | 0.524011 / 1.386936 (-0.862925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005937 / 0.011353 (-0.005416) | 0.003609 / 0.011008 (-0.007399) | 0.061729 / 0.038508 (0.023221) | 0.057844 / 0.023109 (0.034735) | 0.418051 / 0.275898 (0.142153) | 0.453014 / 0.323480 (0.129534) | 0.004530 / 0.007986 (-0.003456) | 0.002861 / 0.004328 (-0.001468) | 0.062236 / 0.004250 (0.057986) | 0.048612 / 0.037052 (0.011560) | 0.418487 / 0.258489 (0.159998) | 0.455114 / 0.293841 (0.161273) | 0.027419 / 0.128546 (-0.101127) | 0.007919 / 0.075646 (-0.067728) | 0.066940 / 0.419271 (-0.352331) | 0.041816 / 0.043533 (-0.001717) | 0.419788 / 0.255139 (0.164649) | 0.439682 / 0.283200 (0.156483) | 0.020902 / 0.141683 (-0.120781) | 1.473993 / 1.452155 (0.021838) | 1.532438 / 1.492716 (0.039722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228766 / 0.018006 (0.210760) | 0.412189 / 0.000490 (0.411699) | 0.000371 / 0.000200 (0.000171) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026139 / 0.037411 (-0.011272) | 0.076626 / 0.014526 (0.062100) | 0.088262 / 0.176557 (-0.088295) | 0.143096 / 0.737135 (-0.594039) | 0.089642 / 0.296338 (-0.206696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423030 / 0.215209 (0.207821) | 4.218333 / 2.077655 (2.140679) | 2.280943 / 1.504120 (0.776823) | 2.051746 / 1.541195 (0.510551) | 2.101085 / 1.468490 (0.632595) | 0.495860 / 4.584777 (-4.088917) | 3.108065 / 3.745712 (-0.637647) | 2.944188 / 5.269862 (-2.325673) | 1.833693 / 4.565676 (-2.731984) | 0.057509 / 0.424275 (-0.366766) | 0.006406 / 0.007607 (-0.001201) | 0.497208 / 0.226044 (0.271164) | 4.974972 / 2.268929 (2.706044) | 2.786639 / 55.444624 (-52.657985) | 2.423815 / 6.876477 (-4.452662) | 2.446377 / 2.142072 (0.304305) | 0.584521 / 4.805227 (-4.220706) | 0.124129 / 6.500664 (-6.376535) | 0.061373 / 0.075469 (-0.014096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307076 / 1.841788 (-0.534711) | 18.443873 / 8.074308 (10.369565) | 13.835730 / 10.191392 (3.644338) | 0.159795 / 0.680424 (-0.520629) | 0.016643 / 0.534201 (-0.517558) | 0.334300 / 0.579283 (-0.244983) | 0.347136 / 0.434364 (-0.087228) | 0.394633 / 0.540337 (-0.145704) | 0.552445 / 1.386936 (-0.834491) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8cfc0262363ea8cbd8c78537a09f851ec6ec30f5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007273 / 0.011353 (-0.004080) | 0.004704 / 0.011008 (-0.006304) | 0.105857 / 0.038508 (0.067349) | 0.062493 / 0.023109 (0.039384) | 0.325704 / 0.275898 (0.049806) | 0.355795 / 0.323480 (0.032315) | 0.005552 / 0.007986 (-0.002433) | 0.003543 / 0.004328 (-0.000785) | 0.068098 / 0.004250 (0.063848) | 0.049563 / 0.037052 (0.012511) | 0.362956 / 0.258489 (0.104467) | 0.376047 / 0.293841 (0.082206) | 0.039272 / 0.128546 (-0.089275) | 0.011521 / 0.075646 (-0.064125) | 0.291899 / 0.419271 (-0.127373) | 0.056916 / 0.043533 (0.013383) | 0.365352 / 0.255139 (0.110213) | 0.357251 / 0.283200 (0.074051) | 0.031670 / 0.141683 (-0.110013) | 1.533294 / 1.452155 (0.081140) | 1.566580 / 1.492716 (0.073864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219812 / 0.018006 (0.201805) | 0.499808 / 0.000490 (0.499318) | 0.000343 / 0.000200 (0.000143) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024011 / 0.037411 (-0.013400) | 0.079686 / 0.014526 (0.065161) | 0.087925 / 0.176557 (-0.088631) | 0.149065 / 0.737135 (-0.588071) | 0.088514 / 0.296338 (-0.207824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495003 / 0.215209 (0.279794) | 5.106371 / 2.077655 (3.028717) | 2.285497 / 1.504120 (0.781377) | 2.056052 / 1.541195 (0.514858) | 2.024913 / 1.468490 
(0.556423) | 0.726048 / 4.584777 (-3.858729) | 4.873945 / 3.745712 (1.128233) | 7.488671 / 5.269862 (2.218809) | 4.361208 / 4.565676 (-0.204469) | 0.089014 / 0.424275 (-0.335261) | 0.007178 / 0.007607 (-0.000429) | 0.633669 / 0.226044 (0.407625) | 6.328154 / 2.268929 (4.059226) | 3.071598 / 55.444624 (-52.373026) | 2.416077 / 6.876477 (-4.460399) | 2.431033 / 2.142072 (0.288961) | 0.918167 / 4.805227 (-3.887060) | 0.193829 / 6.500664 (-6.306836) | 0.073446 / 0.075469 (-0.002023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344994 / 1.841788 (-0.496793) | 19.911699 / 8.074308 (11.837391) | 17.182697 / 10.191392 (6.991305) | 0.216932 / 0.680424 (-0.463492) | 0.025415 / 0.534201 (-0.508786) | 0.416806 / 0.579283 (-0.162477) | 0.524934 / 0.434364 (0.090570) | 0.510783 / 0.540337 (-0.029554) | 0.687856 / 1.386936 (-0.699081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008469 / 0.011353 (-0.002884) | 0.003797 / 0.011008 (-0.007211) | 0.067276 / 0.038508 (0.028768) | 0.066825 / 0.023109 (0.043716) | 0.394976 / 0.275898 (0.119078) | 0.432563 / 0.323480 (0.109083) | 0.006003 / 0.007986 (-0.001982) | 0.003399 / 0.004328 (-0.000930) | 0.070899 / 0.004250 (0.066649) | 0.050940 / 0.037052 (0.013887) | 0.378291 / 0.258489 (0.119802) | 0.429889 / 0.293841 (0.136048) | 0.043245 / 0.128546 (-0.085302) | 0.012182 / 0.075646 (-0.063465) | 0.074560 / 0.419271 (-0.344711) | 0.065290 / 0.043533 (0.021757) | 0.371209 / 0.255139 (0.116070) | 0.389731 / 0.283200 (0.106532) | 0.045729 / 0.141683 (-0.095954) | 1.451785 / 1.452155 (-0.000370) | 1.598539 / 1.492716 (0.105822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261357 / 0.018006 (0.243351) | 0.520142 / 0.000490 (0.519653) | 0.008305 / 0.000200 (0.008105) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026492 / 0.037411 (-0.010919) | 0.082430 / 0.014526 (0.067904) | 0.095979 / 0.176557 (-0.080578) | 0.151752 / 0.737135 (-0.585383) | 0.090086 / 0.296338 (-0.206252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535967 / 0.215209 (0.320758) | 5.228605 / 2.077655 (3.150950) | 2.395078 / 1.504120 (0.890959) | 2.185500 / 1.541195 (0.644306) | 2.219456 / 1.468490 (0.750966) | 0.764794 / 4.584777 (-3.819983) | 4.796617 / 3.745712 (1.050905) | 4.143450 / 5.269862 (-1.126411) | 2.527391 / 4.565676 (-2.038286) | 0.081418 / 0.424275 (-0.342857) | 0.007170 / 0.007607 (-0.000437) | 0.706071 / 0.226044 (0.480026) | 6.501060 / 2.268929 (4.232131) | 3.176315 / 55.444624 (-52.268309) | 2.443245 / 6.876477 (-4.433232) | 2.517832 / 2.142072 (0.375759) | 0.916254 / 4.805227 (-3.888973) | 0.184282 / 6.500664 (-6.316382) | 0.062613 / 0.075469 (-0.012857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444283 / 1.841788 (-0.397504) | 20.227311 / 8.074308 (12.153003) | 17.512856 / 10.191392 (7.321464) | 0.219556 / 0.680424 (-0.460868) | 0.024705 / 0.534201 (-0.509496) | 0.423215 / 0.579283 (-0.156068) | 0.513103 / 0.434364 (0.078739) | 0.473853 / 0.540337 (-0.066485) | 0.738165 / 1.386936 (-0.648771) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b65660b7c6e853391991734210e38f805459b0ed \"CML watermark\")\n"
] | 2023-05-15T16:48:24 | 2023-07-10T12:33:59 | 2023-07-10T12:24:01 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5865",
"html_url": "https://github.com/huggingface/datasets/pull/5865",
"diff_url": "https://github.com/huggingface/datasets/pull/5865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5865.patch",
"merged_at": "2023-07-10T12:24:01"
} | The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).
These are the projects that still use the task API (a minimal sketch of the deprecated usage follows the list):
* the image classification example in Transformers: [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L262) and [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/tensorflow/image-classification/run_image_classification.py#L277)
* autotrain: [here](https://github.com/huggingface/autotrain-backend/blob/455e274004b56f9377d64db4ab03671508fcc4cd/zeus/zeus/run/utils.py#L666)
* api-inference-community: [here](https://github.com/huggingface/api-inference-community/blob/fb8fb29d577a5bf01c82944db745489a6d6ed3d4/manage.py#L64) (but the rest of the code does not call the `resolve_dataset` function)
So we need to update these files after the merge.
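For context, a minimal sketch of the kind of usage being deprecated — the dataset and column names below are illustrative placeholders, not taken from the projects above:
```python
# Illustrative only: the task-template flow that this PR deprecates.
from datasets import load_dataset
from datasets.tasks import TextClassification

ds = load_dataset("imdb", split="train")
# prepare_for_task recasts the columns to the task's canonical schema
# (e.g. "text"/"labels" for text classification).
ds = ds.prepare_for_task(TextClassification(text_column="text", label_column="label"))
```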
cc @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5865/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5864/comments | https://api.github.com/repos/huggingface/datasets/issues/5864/events | https://github.com/huggingface/datasets/issues/5864 | 1,710,450,047 | I_kwDODunzps5l82V_ | 5,864 | Slow iteration over Torch tensors | {
"login": "crisostomi",
"id": 51738205,
"node_id": "MDQ6VXNlcjUxNzM4MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/51738205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crisostomi",
"html_url": "https://github.com/crisostomi",
"followers_url": "https://api.github.com/users/crisostomi/followers",
"following_url": "https://api.github.com/users/crisostomi/following{/other_user}",
"gists_url": "https://api.github.com/users/crisostomi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crisostomi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crisostomi/subscriptions",
"organizations_url": "https://api.github.com/users/crisostomi/orgs",
"repos_url": "https://api.github.com/users/crisostomi/repos",
"events_url": "https://api.github.com/users/crisostomi/events{/privacy}",
"received_events_url": "https://api.github.com/users/crisostomi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I am highly interested performance of dataset so I ran your example as a curious user.\r\n```python\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\n```\r\nhave return values and \"x\" is a new column, it shoulde be\r\n```python\r\nds=train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\n```\r\nI rewrite your example as\r\n```python\r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\nds=train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nthat require ~11s in my environment. While\r\n```python\r\nds = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nonly need ~6s. (So I guess it's still undesirable)"
] | 2023-05-15T16:43:58 | 2023-05-16T03:27:38 | null | NONE | null | null | null | ### Describe the bug
I have a problem related to this [issue](https://github.com/huggingface/datasets/issues/5841): iterating with a Torch DataLoader becomes much slower once I apply a ToTensor transform to the input, compared to the vanilla NumPy tensors. In particular, it takes ~5 seconds to iterate over the vanilla input and ~30s after the transformation.
### Steps to reproduce the bug
Here is the minimum code to reproduce the problem
```python
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features
from torch.utils.data import DataLoader
from tqdm import tqdm
import torchvision
from torchvision.transforms import ToTensor, Normalize
#################################
# Without transform
#################################
train_dataset = load_dataset(
'cifar100',
split='train',
use_auth_token=True,
)
train_dataset.set_format(type="numpy", columns=["img", "fine_label"])
train_loader= DataLoader(
train_dataset,
batch_size=100,
pin_memory=False,
shuffle=True,
num_workers=8,
)
for batch in tqdm(train_loader, desc="Loading data, no transform"):
pass
#################################
# With transform
#################################
transform_func = torchvision.transforms.Compose([
ToTensor(),
Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),]
)
train_dataset = train_dataset.map(
desc=f"Preprocessing samples",
function=lambda x: {"img": transform_func(x["img"])},
)
train_dataset.set_format(type="numpy", columns=["img", "fine_label"])
train_loader= DataLoader(
train_dataset,
batch_size=100,
pin_memory=False,
shuffle=True,
num_workers=8,
)
for batch in tqdm(train_loader, desc="Loading data after transform"):
pass
```
I have also tried converting the Image column to an Array3D
```python
img_shape = train_dataset[0]["img"].shape
features = train_dataset.features.copy()
features["x"] = Array3D(shape=img_shape, dtype="float32")
train_dataset = train_dataset.map(
desc=f"Preprocessing samples",
function=lambda x: {"x": np.array(x["img"], dtype=np.uint8)},
features=features,
)
train_dataset.cast_column("x", Array3D(shape=img_shape, dtype="float32"))
train_dataset.set_format(type="numpy", columns=["x", "fine_label"])
```
but to no avail. Any clue?
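Worth noting (the comment above makes the same point): `cast_column` returns a new `Dataset` rather than modifying it in place, so the cast in the snippet above is silently discarded. A minimal corrected sketch of that step, reusing the snippet's own names:
```python
# cast_column has a return value; assign it back or the cast is lost.
train_dataset = train_dataset.cast_column(
    "x", Array3D(shape=img_shape, dtype="float32")
)
train_dataset.set_format(type="numpy", columns=["x", "fine_label"])
```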
### Expected behavior
The iteration should take approximately the same time with or without the transformation, as it doesn't change the shape of the input. What may be the issue here?
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5864/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5863/comments | https://api.github.com/repos/huggingface/datasets/issues/5863/events | https://github.com/huggingface/datasets/pull/5863 | 1,710,335,905 | PR_kwDODunzps5QhtlM | 5,863 | Use a new low-memory approach for tf dataset index shuffling | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5863). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003588) | 0.005397 / 0.011008 (-0.005611) | 0.097995 / 0.038508 (0.059487) | 0.036360 / 0.023109 (0.013251) | 0.312148 / 0.275898 (0.036250) | 0.349427 / 0.323480 (0.025947) | 0.006635 / 0.007986 (-0.001350) | 0.004373 / 0.004328 (0.000044) | 0.074350 / 0.004250 (0.070099) | 0.054667 / 0.037052 (0.017614) | 0.301621 / 0.258489 (0.043132) | 0.364233 / 0.293841 (0.070392) | 0.035356 / 0.128546 (-0.093191) | 0.012512 / 0.075646 (-0.063134) | 0.333399 / 0.419271 (-0.085873) | 0.051363 / 0.043533 (0.007830) | 0.302372 / 0.255139 (0.047233) | 0.326542 / 0.283200 (0.043343) | 0.118610 / 0.141683 (-0.023073) | 1.438485 / 1.452155 (-0.013669) | 1.539131 / 1.492716 (0.046415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010920 / 0.018006 (-0.007086) | 0.561263 / 0.000490 (0.560773) | 0.003972 / 0.000200 (0.003772) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030333 / 0.037411 (-0.007078) | 0.113608 / 0.014526 (0.099083) | 0.125802 / 0.176557 (-0.050755) | 0.183885 / 0.737135 (-0.553250) | 0.130242 / 0.296338 (-0.166097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404147 / 0.215209 (0.188938) | 4.021990 / 2.077655 (1.944335) | 1.821450 / 1.504120 (0.317330) | 1.619032 / 1.541195 (0.077837) | 1.791267 / 1.468490 
(0.322777) | 0.706683 / 4.584777 (-3.878094) | 3.819056 / 3.745712 (0.073344) | 3.485714 / 5.269862 (-1.784147) | 1.938968 / 4.565676 (-2.626709) | 0.086501 / 0.424275 (-0.337774) | 0.012300 / 0.007607 (0.004693) | 0.503600 / 0.226044 (0.277555) | 5.042123 / 2.268929 (2.773195) | 2.269712 / 55.444624 (-53.174912) | 1.944912 / 6.876477 (-4.931565) | 2.155196 / 2.142072 (0.013123) | 0.853434 / 4.805227 (-3.951793) | 0.175554 / 6.500664 (-6.325110) | 0.072005 / 0.075469 (-0.003464) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203765 / 1.841788 (-0.638022) | 15.836634 / 8.074308 (7.762326) | 15.707348 / 10.191392 (5.515956) | 0.164828 / 0.680424 (-0.515596) | 0.018115 / 0.534201 (-0.516086) | 0.434591 / 0.579283 (-0.144692) | 0.437858 / 0.434364 (0.003495) | 0.524672 / 0.540337 (-0.015665) | 0.610535 / 1.386936 (-0.776401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007558 / 0.011353 (-0.003795) | 0.005258 / 0.011008 (-0.005750) | 0.075263 / 0.038508 (0.036755) | 0.033915 / 0.023109 (0.010805) | 0.371368 / 0.275898 (0.095470) | 0.399239 / 0.323480 (0.075760) | 0.006547 / 0.007986 (-0.001439) | 0.004675 / 0.004328 (0.000347) | 0.074230 / 0.004250 (0.069980) | 0.054653 / 0.037052 (0.017601) | 0.376655 / 0.258489 (0.118166) | 0.438437 / 0.293841 (0.144596) | 0.035838 / 0.128546 (-0.092709) | 0.012641 / 0.075646 (-0.063005) | 0.087279 / 0.419271 (-0.331993) | 0.046311 / 0.043533 (0.002778) | 0.356649 / 0.255139 (0.101510) | 0.377876 / 0.283200 (0.094677) | 0.108097 / 0.141683 (-0.033586) | 1.478461 / 1.452155 (0.026306) | 1.560375 / 1.492716 (0.067658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316384 / 0.018006 (0.298378) | 0.539382 / 0.000490 (0.538892) | 0.002029 / 0.000200 (0.001829) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029950 / 0.037411 (-0.007462) | 0.111371 / 0.014526 (0.096846) | 0.125254 / 0.176557 (-0.051303) | 0.173064 / 0.737135 (-0.564071) | 0.130446 / 0.296338 (-0.165893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424882 / 0.215209 (0.209673) | 4.241575 / 2.077655 (2.163920) | 2.096216 / 1.504120 (0.592096) | 1.916017 / 1.541195 (0.374823) | 2.016318 / 1.468490 (0.547828) | 0.701197 / 4.584777 (-3.883580) | 3.762365 / 3.745712 (0.016652) | 3.307805 / 5.269862 (-1.962057) | 1.841752 / 4.565676 (-2.723925) | 0.086003 / 0.424275 (-0.338272) | 0.012247 / 0.007607 (0.004640) | 0.532926 / 0.226044 (0.306882) | 5.370509 / 2.268929 (3.101580) | 2.587853 / 55.444624 (-52.856772) | 2.264541 / 6.876477 (-4.611936) | 2.374833 / 2.142072 (0.232760) | 0.827751 / 4.805227 (-3.977476) | 0.169454 / 6.500664 (-6.331210) | 0.066340 / 0.075469 (-0.009129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319128 / 1.841788 (-0.522660) | 16.702085 / 8.074308 (8.627777) | 13.559957 / 10.191392 (3.368565) | 0.146659 / 0.680424 (-0.533765) | 0.017384 / 0.534201 (-0.516817) | 0.421126 / 0.579283 (-0.158157) | 0.422067 / 0.434364 (-0.012297) | 0.490615 / 0.540337 (-0.049723) | 0.587151 / 1.386936 (-0.799785) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79f4b6de25128999f5fc0a7bde9aa71c461f518f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006604 / 0.011353 (-0.004749) | 0.004508 / 0.011008 (-0.006500) | 0.098652 / 0.038508 (0.060144) | 0.028172 / 0.023109 (0.005063) | 0.366997 / 0.275898 (0.091099) | 0.403691 / 0.323480 (0.080211) | 0.005127 / 0.007986 (-0.002859) | 0.003340 / 0.004328 (-0.000989) | 0.075408 / 0.004250 (0.071157) | 0.038049 / 0.037052 (0.000996) | 0.367914 / 0.258489 (0.109425) | 0.410958 / 0.293841 (0.117118) | 0.030454 / 0.128546 (-0.098093) | 0.011422 / 0.075646 (-0.064224) | 0.325048 / 0.419271 (-0.094223) | 0.042959 / 0.043533 (-0.000574) | 0.374536 / 0.255139 (0.119397) | 0.394738 / 0.283200 (0.111538) | 0.090481 / 0.141683 (-0.051201) | 1.504858 / 1.452155 (0.052703) | 1.569072 / 1.492716 (0.076356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010062 / 0.018006 (-0.007945) | 0.408619 / 0.000490 (0.408130) | 0.002307 / 0.000200 (0.002107) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022898 / 0.037411 (-0.014514) | 0.096975 / 0.014526 (0.082449) | 0.103032 / 0.176557 (-0.073524) | 0.164877 / 0.737135 (-0.572259) | 0.107324 / 0.296338 (-0.189014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446652 / 0.215209 (0.231442) | 4.466939 / 2.077655 (2.389285) | 2.204590 / 1.504120 (0.700471) | 2.004048 / 1.541195 (0.462853) | 2.053035 / 1.468490 
(0.584545) | 0.696617 / 4.584777 (-3.888160) | 3.391173 / 3.745712 (-0.354539) | 1.863306 / 5.269862 (-3.406556) | 1.160637 / 4.565676 (-3.405039) | 0.083115 / 0.424275 (-0.341160) | 0.012470 / 0.007607 (0.004862) | 0.547207 / 0.226044 (0.321163) | 5.500667 / 2.268929 (3.231739) | 2.656615 / 55.444624 (-52.788009) | 2.313281 / 6.876477 (-4.563195) | 2.395632 / 2.142072 (0.253559) | 0.815361 / 4.805227 (-3.989867) | 0.152112 / 6.500664 (-6.348552) | 0.067485 / 0.075469 (-0.007984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206975 / 1.841788 (-0.634813) | 13.684136 / 8.074308 (5.609828) | 13.919129 / 10.191392 (3.727737) | 0.140767 / 0.680424 (-0.539657) | 0.016445 / 0.534201 (-0.517756) | 0.379136 / 0.579283 (-0.200147) | 0.385395 / 0.434364 (-0.048969) | 0.445781 / 0.540337 (-0.094556) | 0.522056 / 1.386936 (-0.864880) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006370 / 0.011353 (-0.004983) | 0.004514 / 0.011008 (-0.006495) | 0.075671 / 0.038508 (0.037163) | 0.026723 / 0.023109 (0.003614) | 0.359819 / 0.275898 (0.083921) | 0.387935 / 0.323480 (0.064456) | 0.004888 / 0.007986 (-0.003098) | 0.004619 / 0.004328 (0.000290) | 0.075546 / 0.004250 (0.071295) | 0.039024 / 0.037052 (0.001971) | 0.361173 / 0.258489 (0.102684) | 0.411425 / 0.293841 (0.117584) | 0.030842 / 0.128546 (-0.097705) | 0.011555 / 0.075646 (-0.064091) | 0.084697 / 0.419271 (-0.334574) | 0.039281 / 0.043533 (-0.004252) | 0.370082 / 0.255139 (0.114943) | 0.382113 / 0.283200 (0.098913) | 0.091237 / 0.141683 (-0.050445) | 1.534185 / 1.452155 (0.082030) | 1.576488 / 1.492716 (0.083772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226568 / 0.018006 (0.208562) | 0.401566 / 0.000490 (0.401076) | 0.002915 / 0.000200 (0.002715) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025357 / 0.037411 (-0.012054) | 0.099747 / 0.014526 (0.085221) | 0.106443 / 0.176557 (-0.070113) | 0.157147 / 0.737135 (-0.579989) | 0.110759 / 0.296338 (-0.185580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444648 / 0.215209 (0.229439) | 4.437930 / 2.077655 (2.360275) | 2.154033 / 1.504120 (0.649913) | 1.958351 / 1.541195 (0.417157) | 1.991031 / 1.468490 (0.522541) | 0.691440 / 4.584777 (-3.893337) | 3.369087 / 3.745712 (-0.376625) | 1.847103 / 5.269862 (-3.422758) | 1.152509 / 4.565676 (-3.413168) | 0.082519 / 0.424275 (-0.341756) | 0.012609 / 0.007607 (0.005001) | 0.547267 / 0.226044 (0.321222) | 5.501335 / 2.268929 (3.232407) | 2.621079 / 55.444624 (-52.823545) | 2.281332 / 6.876477 (-4.595145) | 2.300427 / 2.142072 (0.158354) | 0.803611 / 4.805227 (-4.001616) | 0.151784 / 6.500664 (-6.348880) | 0.067801 / 0.075469 (-0.007669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343201 / 1.841788 (-0.498587) | 13.901033 / 8.074308 (5.826725) | 13.114738 / 10.191392 (2.923346) | 0.149358 / 0.680424 (-0.531066) | 0.016596 / 0.534201 (-0.517605) | 0.377310 / 0.579283 (-0.201973) | 0.387045 / 0.434364 (-0.047319) | 0.441272 / 0.540337 (-0.099065) | 0.525783 / 1.386936 (-0.861153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c127e5575ab4e22648976ad268d76264ef5d04f8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008147 / 0.011353 (-0.003205) | 0.005531 / 0.011008 (-0.005477) | 0.099796 / 0.038508 (0.061288) | 0.041574 / 0.023109 (0.018465) | 0.315752 / 0.275898 (0.039854) | 0.369846 / 0.323480 (0.046366) | 0.006489 / 0.007986 (-0.001497) | 0.004339 / 0.004328 (0.000010) | 0.074769 / 0.004250 (0.070519) | 0.051313 / 0.037052 (0.014261) | 0.313463 / 0.258489 (0.054974) | 0.369918 / 0.293841 (0.076077) | 0.035893 / 0.128546 (-0.092653) | 0.012487 / 0.075646 (-0.063159) | 0.336464 / 0.419271 (-0.082807) | 0.052870 / 0.043533 (0.009337) | 0.310795 / 0.255139 (0.055656) | 0.333146 / 0.283200 (0.049946) | 0.112813 / 0.141683 (-0.028870) | 1.488192 / 1.452155 (0.036038) | 1.563438 / 1.492716 (0.070721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015015 / 0.018006 (-0.002991) | 0.531783 / 0.000490 (0.531294) | 0.005039 / 0.000200 (0.004839) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030205 / 0.037411 (-0.007207) | 0.115997 / 0.014526 (0.101471) | 0.122958 / 0.176557 (-0.053599) | 0.186956 / 0.737135 (-0.550180) | 0.130268 / 0.296338 (-0.166071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402648 / 0.215209 (0.187439) | 3.996121 / 2.077655 (1.918466) | 1.811715 / 1.504120 (0.307595) | 1.640805 / 1.541195 (0.099610) | 1.810478 / 1.468490 
(0.341988) | 0.699996 / 4.584777 (-3.884781) | 3.834020 / 3.745712 (0.088308) | 3.688364 / 5.269862 (-1.581498) | 1.973828 / 4.565676 (-2.591849) | 0.087085 / 0.424275 (-0.337190) | 0.012501 / 0.007607 (0.004894) | 0.498934 / 0.226044 (0.272889) | 4.977608 / 2.268929 (2.708680) | 2.258678 / 55.444624 (-53.185947) | 1.934251 / 6.876477 (-4.942226) | 2.177409 / 2.142072 (0.035337) | 0.873470 / 4.805227 (-3.931757) | 0.173132 / 6.500664 (-6.327532) | 0.069144 / 0.075469 (-0.006325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.181554 / 1.841788 (-0.660234) | 15.694468 / 8.074308 (7.620160) | 15.026954 / 10.191392 (4.835562) | 0.167092 / 0.680424 (-0.513332) | 0.017921 / 0.534201 (-0.516280) | 0.425649 / 0.579283 (-0.153634) | 0.423225 / 0.434364 (-0.011139) | 0.522132 / 0.540337 (-0.018205) | 0.612806 / 1.386936 (-0.774130) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007896 / 0.011353 (-0.003457) | 0.005581 / 0.011008 (-0.005427) | 0.076338 / 0.038508 (0.037830) | 0.037064 / 0.023109 (0.013954) | 0.399706 / 0.275898 (0.123808) | 0.431698 / 0.323480 (0.108218) | 0.006846 / 0.007986 (-0.001140) | 0.006010 / 0.004328 (0.001682) | 0.075771 / 0.004250 (0.071520) | 0.058214 / 0.037052 (0.021161) | 0.395753 / 0.258489 (0.137264) | 0.459925 / 0.293841 (0.166084) | 0.036349 / 0.128546 (-0.092197) | 0.012720 / 0.075646 (-0.062926) | 0.087248 / 0.419271 (-0.332024) | 0.049405 / 0.043533 (0.005872) | 0.387576 / 0.255139 (0.132437) | 0.409861 / 0.283200 (0.126661) | 0.111639 / 0.141683 (-0.030043) | 1.482840 / 1.452155 (0.030685) | 1.574465 / 1.492716 (0.081749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320628 / 0.018006 (0.302622) | 0.556338 / 0.000490 (0.555848) | 0.000445 / 0.000200 (0.000245) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032905 / 0.037411 (-0.004507) | 0.121253 / 0.014526 (0.106727) | 0.127241 / 0.176557 (-0.049316) | 0.178090 / 0.737135 (-0.559045) | 0.143285 / 0.296338 (-0.153054) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437852 / 0.215209 (0.222643) | 4.369770 / 2.077655 (2.292115) | 2.219932 / 1.504120 (0.715812) | 2.032520 / 1.541195 (0.491325) | 2.154300 / 1.468490 (0.685810) | 0.678942 / 4.584777 (-3.905835) | 3.768148 / 3.745712 (0.022436) | 2.152738 / 5.269862 (-3.117124) | 1.341480 / 4.565676 (-3.224197) | 0.084326 / 0.424275 (-0.339949) | 0.012288 / 0.007607 (0.004681) | 0.547677 / 0.226044 (0.321633) | 5.496777 / 2.268929 (3.227848) | 2.702267 / 55.444624 (-52.742357) | 2.388580 / 6.876477 (-4.487897) | 2.471673 / 2.142072 (0.329601) | 0.833645 / 4.805227 (-3.971582) | 0.167113 / 6.500664 (-6.333551) | 0.067658 / 0.075469 (-0.007811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282050 / 1.841788 (-0.559737) | 16.413677 / 8.074308 (8.339369) | 14.080910 / 10.191392 (3.889518) | 0.171782 / 0.680424 (-0.508642) | 0.018186 / 0.534201 (-0.516015) | 0.425244 / 0.579283 (-0.154039) | 0.430260 / 0.434364 (-0.004104) | 0.500838 / 0.540337 (-0.039499) | 0.591900 / 1.386936 (-0.795036) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5fc5c538de84da400118e3712077acc580ce85c4 \"CML watermark\")\n",
"The approach we take here is to no longer materialize the entire index array or shuffle buffer. Instead, we do the following:\r\n\r\n1) Generate a dataset with `tf.data.Dataset.range`. This dataset is not materialized - it's basically a range iterator.\r\n2) When we begin iterating over a dataset, generate a random seed. This value is constant for each pass over the dataset, and is regenerated if we start a new iteration or epoch over the dataset.\r\n3) Map the range dataset and the random seed with `tf.random.index_shuffle`. This converts indices into the equivalent values in a permuted array. In other words `tf.random.index_shuffle(indices, maxval=50_000_000)` is equivalent to `np.random.permutation(50_000_000)[indices]`, but without ever materializing the `np.random.permutation(50_000_000)` array.\r\n\r\nUsing this approach gives us a complete iteration over the dataset that does not skip any samples, compiles in TF and also never materializes the complete index array, which should avoid the memory usage issues. I'm testing that now!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008395 / 0.011353 (-0.002958) | 0.005893 / 0.011008 (-0.005115) | 0.117081 / 0.038508 (0.078573) | 0.040987 / 0.023109 (0.017878) | 0.394234 / 0.275898 (0.118336) | 0.447036 / 0.323480 (0.123556) | 0.006703 / 0.007986 (-0.001283) | 0.006085 / 0.004328 (0.001757) | 0.086479 / 0.004250 (0.082228) | 0.050192 / 0.037052 (0.013140) | 0.400958 / 0.258489 (0.142469) | 0.455551 / 0.293841 (0.161710) | 0.041481 / 0.128546 (-0.087065) | 0.014135 / 0.075646 (-0.061511) | 0.399929 / 0.419271 (-0.019343) | 0.060824 / 0.043533 (0.017291) | 0.395946 / 0.255139 (0.140807) | 0.428811 / 0.283200 (0.145611) | 0.120057 / 0.141683 (-0.021626) | 1.703244 / 1.452155 (0.251090) | 1.841153 / 1.492716 (0.348436) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.021826 / 0.018006 (0.003820) | 0.494279 / 0.000490 (0.493789) | 0.011258 / 0.000200 (0.011058) | 0.000382 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031651 / 0.037411 (-0.005760) | 0.132871 / 0.014526 (0.118345) | 0.137388 / 0.176557 (-0.039169) | 0.205808 / 0.737135 (-0.531327) | 0.147585 / 0.296338 (-0.148753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474483 / 0.215209 (0.259274) | 4.726568 / 2.077655 (2.648914) | 2.136172 / 1.504120 (0.632052) | 1.918364 / 1.541195 (0.377169) | 2.068794 / 1.468490 
(0.600304) | 0.836481 / 4.584777 (-3.748296) | 4.550583 / 3.745712 (0.804871) | 2.456287 / 5.269862 (-2.813574) | 1.563127 / 4.565676 (-3.002550) | 0.102541 / 0.424275 (-0.321734) | 0.014492 / 0.007607 (0.006885) | 0.598572 / 0.226044 (0.372528) | 5.953321 / 2.268929 (3.684392) | 2.695210 / 55.444624 (-52.749414) | 2.294317 / 6.876477 (-4.582160) | 2.456585 / 2.142072 (0.314513) | 1.019907 / 4.805227 (-3.785320) | 0.201225 / 6.500664 (-6.299439) | 0.077113 / 0.075469 (0.001644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.497662 / 1.841788 (-0.344126) | 18.216941 / 8.074308 (10.142633) | 17.016638 / 10.191392 (6.825246) | 0.193271 / 0.680424 (-0.487153) | 0.020440 / 0.534201 (-0.513761) | 0.509361 / 0.579283 (-0.069922) | 0.513389 / 0.434364 (0.079025) | 0.622266 / 0.540337 (0.081928) | 0.741733 / 1.386936 (-0.645203) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.005792 / 0.011008 (-0.005216) | 0.086020 / 0.038508 (0.047512) | 0.040005 / 0.023109 (0.016896) | 0.435120 / 0.275898 (0.159222) | 0.480269 / 0.323480 (0.156789) | 0.006669 / 0.007986 (-0.001317) | 0.006039 / 0.004328 (0.001711) | 0.083468 / 0.004250 (0.079218) | 0.057700 / 0.037052 (0.020648) | 0.416418 / 0.258489 (0.157929) | 0.508286 / 0.293841 (0.214445) | 0.041198 / 0.128546 (-0.087349) | 0.014346 / 0.075646 (-0.061301) | 0.100553 / 0.419271 (-0.318718) | 0.054201 / 0.043533 (0.010668) | 0.438232 / 0.255139 (0.183093) | 0.454707 / 0.283200 (0.171508) | 0.118332 / 0.141683 (-0.023351) | 1.657607 / 1.452155 (0.205452) | 1.825510 / 1.492716 (0.332794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236156 / 0.018006 (0.218150) | 0.487612 / 0.000490 (0.487123) | 0.005747 / 0.000200 (0.005547) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035127 / 0.037411 (-0.002284) | 0.132013 / 0.014526 (0.117487) | 0.142316 / 0.176557 (-0.034241) | 0.198627 / 0.737135 (-0.538508) | 0.145454 / 0.296338 (-0.150885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513041 / 0.215209 (0.297832) | 5.066197 / 2.077655 (2.988542) | 2.508779 / 1.504120 (1.004659) | 2.273901 / 1.541195 (0.732706) | 2.364958 / 1.468490 (0.896468) | 0.811367 / 4.584777 (-3.773410) | 4.504744 / 3.745712 (0.759032) | 2.499811 / 5.269862 (-2.770050) | 1.583349 / 4.565676 (-2.982328) | 0.101701 / 0.424275 (-0.322574) | 0.014379 / 0.007607 (0.006772) | 0.669506 / 0.226044 (0.443462) | 6.556702 / 2.268929 (4.287774) | 3.123457 / 55.444624 (-52.321167) | 2.731997 / 6.876477 (-4.144480) | 2.862866 / 2.142072 (0.720794) | 0.992956 / 4.805227 (-3.812271) | 0.200473 / 6.500664 (-6.300191) | 0.078780 / 0.075469 (0.003311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540718 / 1.841788 (-0.301070) | 18.749344 / 8.074308 (10.675036) | 15.648983 / 10.191392 (5.457591) | 0.174089 / 0.680424 (-0.506335) | 0.020441 / 0.534201 (-0.513760) | 0.503742 / 0.579283 (-0.075541) | 0.500648 / 0.434364 (0.066284) | 0.598558 / 0.540337 (0.058221) | 0.712093 / 1.386936 (-0.674843) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#621554280f964b5fe87ece1a46b794406d943b1e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009940 / 0.011353 (-0.001412) | 0.006193 / 0.011008 (-0.004815) | 0.125874 / 0.038508 (0.087366) | 0.038664 / 0.023109 (0.015555) | 0.380013 / 0.275898 (0.104115) | 0.430152 / 0.323480 (0.106672) | 0.006961 / 0.007986 (-0.001025) | 0.004749 / 0.004328 (0.000420) | 0.099743 / 0.004250 (0.095492) | 0.052349 / 0.037052 (0.015297) | 0.433354 / 0.258489 (0.174865) | 0.436273 / 0.293841 (0.142433) | 0.053929 / 0.128546 (-0.074617) | 0.019369 / 0.075646 (-0.056278) | 0.421783 / 0.419271 (0.002511) | 0.062746 / 0.043533 (0.019213) | 0.377225 / 0.255139 (0.122086) | 0.413708 / 0.283200 (0.130508) | 0.111371 / 0.141683 (-0.030312) | 1.819166 / 1.452155 (0.367011) | 1.974527 / 1.492716 (0.481810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090664 / 0.018006 (0.072658) | 0.566166 / 0.000490 (0.565676) | 0.079305 / 0.000200 (0.079105) | 0.000755 / 0.000054 (0.000700) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029720 / 0.037411 (-0.007691) | 0.126030 / 0.014526 (0.111504) | 0.146020 / 0.176557 (-0.030537) | 0.210354 / 0.737135 (-0.526781) | 0.149428 / 0.296338 (-0.146910) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.624371 / 0.215209 (0.409162) | 6.332839 / 2.077655 (4.255184) | 2.547784 / 1.504120 (1.043664) | 2.150508 / 1.541195 (0.609313) | 2.240816 / 1.468490 
(0.772326) | 1.271131 / 4.584777 (-3.313646) | 5.642726 / 3.745712 (1.897014) | 3.212988 / 5.269862 (-2.056874) | 2.258123 / 4.565676 (-2.307553) | 0.149477 / 0.424275 (-0.274798) | 0.014603 / 0.007607 (0.006996) | 0.782155 / 0.226044 (0.556111) | 7.855191 / 2.268929 (5.586262) | 3.308638 / 55.444624 (-52.135986) | 2.548142 / 6.876477 (-4.328335) | 2.627374 / 2.142072 (0.485301) | 1.515170 / 4.805227 (-3.290058) | 0.262479 / 6.500664 (-6.238185) | 0.082181 / 0.075469 (0.006712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573169 / 1.841788 (-0.268618) | 18.105719 / 8.074308 (10.031411) | 22.015179 / 10.191392 (11.823787) | 0.254678 / 0.680424 (-0.425746) | 0.027098 / 0.534201 (-0.507103) | 0.578045 / 0.579283 (-0.001238) | 0.647130 / 0.434364 (0.212766) | 0.650522 / 0.540337 (0.110185) | 0.797713 / 1.386936 (-0.589223) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010376 / 0.011353 (-0.000977) | 0.005990 / 0.011008 (-0.005018) | 0.097144 / 0.038508 (0.058635) | 0.038205 / 0.023109 (0.015096) | 0.468347 / 0.275898 (0.192449) | 0.497646 / 0.323480 (0.174166) | 0.006916 / 0.007986 (-0.001069) | 0.004760 / 0.004328 (0.000431) | 0.109838 / 0.004250 (0.105587) | 0.048321 / 0.037052 (0.011269) | 0.437458 / 0.258489 (0.178969) | 0.534864 / 0.293841 (0.241023) | 0.053655 / 0.128546 (-0.074892) | 0.021915 / 0.075646 (-0.053732) | 0.121047 / 0.419271 (-0.298224) | 0.059694 / 0.043533 (0.016162) | 0.466937 / 0.255139 (0.211798) | 0.482030 / 0.283200 (0.198831) | 0.117458 / 0.141683 (-0.024225) | 1.835551 / 1.452155 (0.383396) | 1.965748 / 1.492716 (0.473031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234885 / 0.018006 (0.216879) | 0.529925 / 0.000490 (0.529436) | 0.000484 / 0.000200 (0.000284) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030959 / 0.037411 (-0.006453) | 0.128905 / 0.014526 (0.114379) | 0.136913 / 0.176557 (-0.039643) | 0.195133 / 0.737135 (-0.542002) | 0.147929 / 0.296338 (-0.148410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.715661 / 0.215209 (0.500451) | 6.994125 / 2.077655 (4.916470) | 3.033178 / 1.504120 (1.529058) | 2.663709 / 1.541195 (1.122515) | 2.707558 / 1.468490 (1.239068) | 1.316195 / 4.584777 (-3.268582) | 5.688264 / 3.745712 (1.942552) | 3.260897 / 5.269862 (-2.008964) | 2.134985 / 4.565676 (-2.430691) | 0.153945 / 0.424275 (-0.270330) | 0.014727 / 0.007607 (0.007119) | 0.911339 / 0.226044 (0.685294) | 8.902640 / 2.268929 (6.633711) | 3.806606 / 55.444624 (-51.638018) | 3.052238 / 6.876477 (-3.824238) | 3.046945 / 2.142072 (0.904873) | 1.559837 / 4.805227 (-3.245390) | 0.272276 / 6.500664 (-6.228388) | 0.087728 / 0.075469 (0.012259) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712691 / 1.841788 (-0.129097) | 18.127575 / 8.074308 (10.053267) | 19.734063 / 10.191392 (9.542671) | 0.235006 / 0.680424 (-0.445418) | 0.027581 / 0.534201 (-0.506620) | 0.551080 / 0.579283 (-0.028203) | 0.608564 / 0.434364 (0.174200) | 0.636578 / 0.540337 (0.096241) | 0.732374 / 1.386936 (-0.654562) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#36911ca06d9c4e37ce36da6228cb3af8b40c2add \"CML watermark\")\n",
"Looks good in testing - this should be ready for review! cc @lhoestq @massquantity",
"Looks good to me, though i doubt that very few people will upgrade to TF >= 2.9 unless their memory is full:)",
"Is it more efficient than using numpy to shuffle as in multiprocessing ? Why not use the same strategy ?",
"Good question, honestly! The NumPy strategy works fine, but requires us to handle multiple processes instead of doing everything in `tf.data`. We could just scrap this entire code path and always use the multiprocessing NumPy approach, but I think single-threaded throughput would be lower if we did that. If you prefer it for code simplicity, though, I can do that.\r\n\r\nIn the longer term, I'm hoping that `tf.data` gets native support for our data structures and we can transition the whole pipeline to pure `tf.data`, but that still hasn't happened π« ",
"And @massquantity TF 2.13 is going to release in a couple of days, so I hope most users are at least on TF 2.9 by now!",
"Unless there is a big gap in performance I think code simplicity would be appreciated ^^",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008638 / 0.011353 (-0.002715) | 0.006013 / 0.011008 (-0.004995) | 0.116456 / 0.038508 (0.077948) | 0.040419 / 0.023109 (0.017310) | 0.418374 / 0.275898 (0.142476) | 0.447693 / 0.323480 (0.124213) | 0.007002 / 0.007986 (-0.000984) | 0.006175 / 0.004328 (0.001847) | 0.087801 / 0.004250 (0.083550) | 0.051980 / 0.037052 (0.014928) | 0.393275 / 0.258489 (0.134786) | 0.449601 / 0.293841 (0.155760) | 0.041670 / 0.128546 (-0.086876) | 0.014396 / 0.075646 (-0.061251) | 0.399175 / 0.419271 (-0.020096) | 0.060635 / 0.043533 (0.017102) | 0.391449 / 0.255139 (0.136310) | 0.420713 / 0.283200 (0.137513) | 0.121369 / 0.141683 (-0.020314) | 1.692630 / 1.452155 (0.240475) | 1.815526 / 1.492716 (0.322810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244321 / 0.018006 (0.226315) | 0.487947 / 0.000490 (0.487458) | 0.004563 / 0.000200 (0.004363) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033425 / 0.037411 (-0.003987) | 0.134458 / 0.014526 (0.119932) | 0.138810 / 0.176557 (-0.037746) | 0.208871 / 0.737135 (-0.528264) | 0.147964 / 0.296338 (-0.148374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483347 / 0.215209 (0.268138) | 4.799550 / 2.077655 (2.721895) | 2.174149 / 1.504120 (0.670029) | 1.943276 / 1.541195 (0.402081) | 2.010884 / 1.468490 
(0.542394) | 0.832030 / 4.584777 (-3.752747) | 4.716713 / 3.745712 (0.971001) | 4.615810 / 5.269862 (-0.654052) | 2.379600 / 4.565676 (-2.186077) | 0.103560 / 0.424275 (-0.320715) | 0.014683 / 0.007607 (0.007076) | 0.598558 / 0.226044 (0.372514) | 5.999126 / 2.268929 (3.730197) | 2.677819 / 55.444624 (-52.766805) | 2.320838 / 6.876477 (-4.555639) | 2.503684 / 2.142072 (0.361611) | 1.016459 / 4.805227 (-3.788769) | 0.201672 / 6.500664 (-6.298992) | 0.079310 / 0.075469 (0.003841) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.446374 / 1.841788 (-0.395413) | 19.219310 / 8.074308 (11.145002) | 17.294665 / 10.191392 (7.103273) | 0.246115 / 0.680424 (-0.434309) | 0.021406 / 0.534201 (-0.512795) | 0.524084 / 0.579283 (-0.055200) | 0.511254 / 0.434364 (0.076890) | 0.621304 / 0.540337 (0.080966) | 0.727088 / 1.386936 (-0.659848) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008907 / 0.011353 (-0.002446) | 0.006165 / 0.011008 (-0.004843) | 0.090786 / 0.038508 (0.052278) | 0.040893 / 0.023109 (0.017784) | 0.451252 / 0.275898 (0.175354) | 0.477811 / 0.323480 (0.154331) | 0.007418 / 0.007986 (-0.000568) | 0.005789 / 0.004328 (0.001461) | 0.087422 / 0.004250 (0.083171) | 0.061800 / 0.037052 (0.024748) | 0.459085 / 0.258489 (0.200596) | 0.488897 / 0.293841 (0.195056) | 0.048157 / 0.128546 (-0.080389) | 0.014676 / 0.075646 (-0.060970) | 0.104372 / 0.419271 (-0.314900) | 0.058066 / 0.043533 (0.014534) | 0.446131 / 0.255139 (0.190992) | 0.460428 / 0.283200 (0.177228) | 0.128492 / 0.141683 (-0.013191) | 1.811419 / 1.452155 (0.359265) | 1.894781 / 1.492716 (0.402064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220527 / 0.018006 (0.202520) | 0.487663 / 0.000490 (0.487173) | 0.003864 / 0.000200 (0.003664) | 0.000162 / 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036354 / 0.037411 (-0.001057) | 0.140469 / 0.014526 (0.125944) | 0.149990 / 0.176557 (-0.026566) | 0.212369 / 0.737135 (-0.524766) | 0.154000 / 0.296338 (-0.142338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514172 / 0.215209 (0.298963) | 5.129247 / 2.077655 (3.051593) | 2.536773 / 1.504120 (1.032653) | 2.317253 / 1.541195 (0.776058) | 2.424066 / 1.468490 (0.955576) | 0.836160 / 4.584777 (-3.748617) | 4.906235 / 3.745712 (1.160523) | 4.431395 / 5.269862 (-0.838467) | 2.332845 / 4.565676 (-2.232831) | 0.102867 / 0.424275 (-0.321409) | 0.014851 / 0.007607 (0.007244) | 0.644104 / 0.226044 (0.418060) | 6.415847 / 2.268929 (4.146918) | 3.186984 / 55.444624 (-52.257641) | 2.774125 / 6.876477 (-4.102352) | 2.848045 / 2.142072 (0.705972) | 1.018757 / 4.805227 (-3.786470) | 0.212333 / 6.500664 (-6.288331) | 0.079405 / 0.075469 (0.003936) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748375 / 1.841788 (-0.093412) | 19.733829 / 8.074308 (11.659521) | 15.766665 / 10.191392 (5.575273) | 0.192087 / 0.680424 (-0.488337) | 0.027641 / 0.534201 (-0.506560) | 0.504101 / 0.579283 (-0.075182) | 0.493815 / 0.434364 (0.059451) | 0.583247 / 0.540337 (0.042910) | 0.697432 / 1.386936 (-0.689504) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#95c177e02ca20bf7bb3ed8f185d2d6f05a5e5f30 \"CML watermark\")\n",
"Hi @lhoestq, I tried moving everything to the NumPy path but ran into issues - the `SharedMemory` constructs it depends on were only added in Python 3.8. As a result, if we move everything to that path then `to_tf_dataset` does not work on older Python versions.\r\n\r\nFor now, how do you feel about reverting and using my original solution, which has fallbacks for all versions of Python and TensorFlow? Once our minimum versions pass Python 3.8 or TF 2.9 we can remove the older code paths.",
"Gentle ping on this question @lhoestq!",
"Ah yes indeed. Feel free to revert and add comments to explain why you needed to have a different approach for single process",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008395 / 0.011353 (-0.002958) | 0.005773 / 0.011008 (-0.005235) | 0.115702 / 0.038508 (0.077194) | 0.039897 / 0.023109 (0.016788) | 0.483140 / 0.275898 (0.207242) | 0.531288 / 0.323480 (0.207808) | 0.006739 / 0.007986 (-0.001246) | 0.004419 / 0.004328 (0.000090) | 0.086374 / 0.004250 (0.082124) | 0.056498 / 0.037052 (0.019446) | 0.491589 / 0.258489 (0.233100) | 0.556366 / 0.293841 (0.262525) | 0.041366 / 0.128546 (-0.087181) | 0.014373 / 0.075646 (-0.061274) | 0.395504 / 0.419271 (-0.023767) | 0.094382 / 0.043533 (0.050849) | 0.483000 / 0.255139 (0.227861) | 0.522693 / 0.283200 (0.239494) | 0.138804 / 0.141683 (-0.002879) | 1.719563 / 1.452155 (0.267409) | 1.853470 / 1.492716 (0.360753) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235616 / 0.018006 (0.217610) | 0.483267 / 0.000490 (0.482777) | 0.008663 / 0.000200 (0.008463) | 0.000401 / 0.000054 (0.000347) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033124 / 0.037411 (-0.004287) | 0.128821 / 0.014526 (0.114295) | 0.138910 / 0.176557 (-0.037647) | 0.213570 / 0.737135 (-0.523566) | 0.146646 / 0.296338 (-0.149693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479998 / 0.215209 (0.264789) | 4.772325 / 2.077655 (2.694670) | 2.228424 / 1.504120 (0.724304) | 2.000915 / 1.541195 (0.459721) | 2.105799 / 1.468490 
(0.637309) | 0.824235 / 4.584777 (-3.760542) | 4.511902 / 3.745712 (0.766189) | 4.723073 / 5.269862 (-0.546789) | 2.333442 / 4.565676 (-2.232235) | 0.101161 / 0.424275 (-0.323114) | 0.014403 / 0.007607 (0.006796) | 0.596395 / 0.226044 (0.370351) | 5.961046 / 2.268929 (3.692117) | 2.746679 / 55.444624 (-52.697946) | 2.352085 / 6.876477 (-4.524392) | 2.609812 / 2.142072 (0.467740) | 0.996950 / 4.805227 (-3.808277) | 0.197923 / 6.500664 (-6.302741) | 0.075546 / 0.075469 (0.000077) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.529896 / 1.841788 (-0.311892) | 18.183887 / 8.074308 (10.109578) | 16.352332 / 10.191392 (6.160940) | 0.213504 / 0.680424 (-0.466920) | 0.020388 / 0.534201 (-0.513813) | 0.497832 / 0.579283 (-0.081451) | 0.495477 / 0.434364 (0.061113) | 0.585984 / 0.540337 (0.045647) | 0.688726 / 1.386936 (-0.698210) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008422 / 0.011353 (-0.002931) | 0.005876 / 0.011008 (-0.005132) | 0.089310 / 0.038508 (0.050802) | 0.039769 / 0.023109 (0.016660) | 0.425279 / 0.275898 (0.149381) | 0.470818 / 0.323480 (0.147338) | 0.006519 / 0.007986 (-0.001467) | 0.006276 / 0.004328 (0.001948) | 0.085753 / 0.004250 (0.081503) | 0.053867 / 0.037052 (0.016815) | 0.429193 / 0.258489 (0.170704) | 0.480278 / 0.293841 (0.186437) | 0.040657 / 0.128546 (-0.087889) | 0.014055 / 0.075646 (-0.061591) | 0.101422 / 0.419271 (-0.317849) | 0.053803 / 0.043533 (0.010271) | 0.428348 / 0.255139 (0.173209) | 0.452193 / 0.283200 (0.168994) | 0.124914 / 0.141683 (-0.016769) | 1.750122 / 1.452155 (0.297968) | 1.850875 / 1.492716 (0.358159) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249958 / 0.018006 (0.231952) | 0.485183 / 0.000490 (0.484694) | 0.000472 / 0.000200 (0.000272) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034563 / 0.037411 (-0.002848) | 0.135565 / 0.014526 (0.121039) | 0.143271 / 0.176557 (-0.033285) | 0.199080 / 0.737135 (-0.538056) | 0.149336 / 0.296338 (-0.147003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.526170 / 0.215209 (0.310961) | 5.270960 / 2.077655 (3.193305) | 2.664585 / 1.504120 (1.160465) | 2.440027 / 1.541195 (0.898832) | 2.612764 / 1.468490 (1.144274) | 0.828965 / 4.584777 (-3.755812) | 4.769983 / 3.745712 (1.024271) | 2.441962 / 5.269862 (-2.827900) | 1.549032 / 4.565676 (-3.016644) | 0.100851 / 0.424275 (-0.323424) | 0.014425 / 0.007607 (0.006818) | 0.640908 / 0.226044 (0.414864) | 6.399041 / 2.268929 (4.130113) | 3.242424 / 55.444624 (-52.202200) | 2.836317 / 6.876477 (-4.040160) | 2.933010 / 2.142072 (0.790938) | 1.002277 / 4.805227 (-3.802950) | 0.201247 / 6.500664 (-6.299417) | 0.078777 / 0.075469 (0.003308) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620415 / 1.841788 (-0.221373) | 19.153631 / 8.074308 (11.079323) | 16.744068 / 10.191392 (6.552676) | 0.167327 / 0.680424 (-0.513097) | 0.020186 / 0.534201 (-0.514015) | 0.503683 / 0.579283 (-0.075600) | 0.500051 / 0.434364 (0.065687) | 0.587188 / 0.540337 (0.046850) | 0.699975 / 1.386936 (-0.686961) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#291d7ffa695edb4b4e818c783b16d3466246cd56 \"CML watermark\")\n",
"This is probably ready, but likely conflicts with #5883. I'll wait for that PR to be merged and then rebase and merge this one.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008387 / 0.011353 (-0.002965) | 0.005824 / 0.011008 (-0.005184) | 0.117721 / 0.038508 (0.079213) | 0.040420 / 0.023109 (0.017311) | 0.404961 / 0.275898 (0.129063) | 0.426695 / 0.323480 (0.103215) | 0.006634 / 0.007986 (-0.001352) | 0.006033 / 0.004328 (0.001705) | 0.088652 / 0.004250 (0.084402) | 0.048075 / 0.037052 (0.011022) | 0.400683 / 0.258489 (0.142194) | 0.432489 / 0.293841 (0.138648) | 0.042065 / 0.128546 (-0.086482) | 0.014071 / 0.075646 (-0.061575) | 0.399398 / 0.419271 (-0.019873) | 0.066034 / 0.043533 (0.022501) | 0.400056 / 0.255139 (0.144918) | 0.421130 / 0.283200 (0.137930) | 0.119721 / 0.141683 (-0.021962) | 1.752166 / 1.452155 (0.300011) | 1.820161 / 1.492716 (0.327444) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244264 / 0.018006 (0.226258) | 0.480882 / 0.000490 (0.480392) | 0.005604 / 0.000200 (0.005404) | 0.000175 / 0.000054 (0.000121) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032397 / 0.037411 (-0.005015) | 0.131632 / 0.014526 (0.117106) | 0.139765 / 0.176557 (-0.036792) | 0.213135 / 0.737135 (-0.524000) | 0.147891 / 0.296338 (-0.148447) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474534 / 0.215209 (0.259325) | 4.730424 / 2.077655 (2.652770) | 2.163706 / 1.504120 (0.659586) | 1.936051 / 1.541195 (0.394857) | 2.012185 / 1.468490 
(0.543695) | 0.826583 / 4.584777 (-3.758194) | 4.921494 / 3.745712 (1.175782) | 2.431401 / 5.269862 (-2.838460) | 1.566020 / 4.565676 (-2.999656) | 0.101255 / 0.424275 (-0.323020) | 0.014553 / 0.007607 (0.006946) | 0.608301 / 0.226044 (0.382256) | 6.089801 / 2.268929 (3.820873) | 2.691986 / 55.444624 (-52.752638) | 2.296498 / 6.876477 (-4.579979) | 2.455388 / 2.142072 (0.313315) | 0.984342 / 4.805227 (-3.820885) | 0.200447 / 6.500664 (-6.300217) | 0.077602 / 0.075469 (0.002133) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445067 / 1.841788 (-0.396721) | 18.588670 / 8.074308 (10.514362) | 16.950216 / 10.191392 (6.758824) | 0.169688 / 0.680424 (-0.510736) | 0.020544 / 0.534201 (-0.513657) | 0.508506 / 0.579283 (-0.070777) | 0.516218 / 0.434364 (0.081854) | 0.646072 / 0.540337 (0.105734) | 0.763227 / 1.386936 (-0.623709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008816 / 0.011353 (-0.002537) | 0.006016 / 0.011008 (-0.004992) | 0.090946 / 0.038508 (0.052438) | 0.040189 / 0.023109 (0.017080) | 0.446723 / 0.275898 (0.170825) | 0.494633 / 0.323480 (0.171153) | 0.007206 / 0.007986 (-0.000779) | 0.004508 / 0.004328 (0.000180) | 0.088477 / 0.004250 (0.084226) | 0.055587 / 0.037052 (0.018535) | 0.445349 / 0.258489 (0.186860) | 0.504940 / 0.293841 (0.211099) | 0.041976 / 0.128546 (-0.086570) | 0.014296 / 0.075646 (-0.061351) | 0.102835 / 0.419271 (-0.316436) | 0.054786 / 0.043533 (0.011253) | 0.444789 / 0.255139 (0.189651) | 0.472306 / 0.283200 (0.189106) | 0.123365 / 0.141683 (-0.018318) | 1.725803 / 1.452155 (0.273648) | 1.832216 / 1.492716 (0.339500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252680 / 0.018006 (0.234674) | 0.476719 / 0.000490 (0.476229) | 0.000461 / 0.000200 (0.000261) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035961 / 0.037411 (-0.001450) | 0.135399 / 0.014526 (0.120873) | 0.147549 / 0.176557 (-0.029007) | 0.207468 / 0.737135 (-0.529667) | 0.151591 / 0.296338 (-0.144747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528143 / 0.215209 (0.312934) | 5.270766 / 2.077655 (3.193111) | 2.675644 / 1.504120 (1.171524) | 2.472855 / 1.541195 (0.931660) | 2.636020 / 1.468490 (1.167530) | 0.841325 / 4.584777 (-3.743452) | 4.702290 / 3.745712 (0.956578) | 2.523537 / 5.269862 (-2.746325) | 1.595617 / 4.565676 (-2.970059) | 0.102095 / 0.424275 (-0.322180) | 0.014568 / 0.007607 (0.006961) | 0.652090 / 0.226044 (0.426046) | 6.503086 / 2.268929 (4.234158) | 3.277025 / 55.444624 (-52.167599) | 2.931264 / 6.876477 (-3.945213) | 3.021667 / 2.142072 (0.879594) | 1.002560 / 4.805227 (-3.802668) | 0.202621 / 6.500664 (-6.298043) | 0.080583 / 0.075469 (0.005114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.639281 / 1.841788 (-0.202507) | 18.911529 / 8.074308 (10.837220) | 17.082795 / 10.191392 (6.891403) | 0.179456 / 0.680424 (-0.500968) | 0.021740 / 0.534201 (-0.512460) | 0.526426 / 0.579283 (-0.052857) | 0.535083 / 0.434364 (0.100719) | 0.583304 / 0.540337 (0.042967) | 0.696733 / 1.386936 (-0.690203) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#757f19283f22eeb3e9aedefd82abc0aa2235f797 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006823 / 0.011353 (-0.004530) | 0.004847 / 0.011008 (-0.006161) | 0.096038 / 0.038508 (0.057530) | 0.033037 / 0.023109 (0.009928) | 0.298379 / 0.275898 (0.022481) | 0.333319 / 0.323480 (0.009839) | 0.005343 / 0.007986 (-0.002643) | 0.003863 / 0.004328 (-0.000465) | 0.072928 / 0.004250 (0.068678) | 0.040898 / 0.037052 (0.003846) | 0.303116 / 0.258489 (0.044627) | 0.334021 / 0.293841 (0.040181) | 0.034780 / 0.128546 (-0.093767) | 0.011978 / 0.075646 (-0.063668) | 0.331642 / 0.419271 (-0.087629) | 0.052729 / 0.043533 (0.009196) | 0.298586 / 0.255139 (0.043447) | 0.319296 / 0.283200 (0.036097) | 0.097711 / 0.141683 (-0.043972) | 1.416899 / 1.452155 (-0.035256) | 1.546008 / 1.492716 (0.053292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234303 / 0.018006 (0.216296) | 0.492767 / 0.000490 (0.492278) | 0.004935 / 0.000200 (0.004736) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030617 / 0.037411 (-0.006795) | 0.121203 / 0.014526 (0.106677) | 0.126677 / 0.176557 (-0.049879) | 0.186379 / 0.737135 (-0.550756) | 0.129849 / 0.296338 (-0.166490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416324 / 0.215209 (0.201115) | 4.135563 / 2.077655 (2.057908) | 1.976182 / 1.504120 (0.472062) | 1.807611 / 1.541195 (0.266416) | 1.886282 / 1.468490 
(0.417792) | 0.713006 / 4.584777 (-3.871771) | 3.899205 / 3.745712 (0.153493) | 2.283427 / 5.269862 (-2.986435) | 1.543088 / 4.565676 (-3.022589) | 0.086189 / 0.424275 (-0.338087) | 0.012908 / 0.007607 (0.005301) | 0.516156 / 0.226044 (0.290112) | 5.144199 / 2.268929 (2.875271) | 2.460142 / 55.444624 (-52.984482) | 2.209054 / 6.876477 (-4.667423) | 2.325277 / 2.142072 (0.183204) | 0.849890 / 4.805227 (-3.955337) | 0.173687 / 6.500664 (-6.326977) | 0.070178 / 0.075469 (-0.005291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241790 / 1.841788 (-0.599997) | 16.047257 / 8.074308 (7.972949) | 15.774146 / 10.191392 (5.582754) | 0.145871 / 0.680424 (-0.534553) | 0.018106 / 0.534201 (-0.516095) | 0.433642 / 0.579283 (-0.145641) | 0.425311 / 0.434364 (-0.009053) | 0.533963 / 0.540337 (-0.006375) | 0.638786 / 1.386936 (-0.748151) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007242 / 0.011353 (-0.004111) | 0.005599 / 0.011008 (-0.005410) | 0.073443 / 0.038508 (0.034935) | 0.033764 / 0.023109 (0.010655) | 0.365990 / 0.275898 (0.090092) | 0.392943 / 0.323480 (0.069463) | 0.005987 / 0.007986 (-0.001999) | 0.004312 / 0.004328 (-0.000016) | 0.072831 / 0.004250 (0.068580) | 0.048854 / 0.037052 (0.011802) | 0.362477 / 0.258489 (0.103988) | 0.399993 / 0.293841 (0.106152) | 0.035602 / 0.128546 (-0.092944) | 0.012445 / 0.075646 (-0.063202) | 0.085768 / 0.419271 (-0.333504) | 0.048544 / 0.043533 (0.005011) | 0.362246 / 0.255139 (0.107107) | 0.388753 / 0.283200 (0.105554) | 0.109829 / 0.141683 (-0.031854) | 1.546881 / 1.452155 (0.094726) | 1.619454 / 1.492716 (0.126737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189926 / 0.018006 (0.171920) | 0.447936 / 0.000490 (0.447446) | 0.002354 / 0.000200 (0.002155) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031740 / 0.037411 (-0.005671) | 0.122595 / 0.014526 (0.108069) | 0.128389 / 0.176557 (-0.048168) | 0.180570 / 0.737135 (-0.556566) | 0.132939 / 0.296338 (-0.163399) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425073 / 0.215209 (0.209863) | 4.238964 / 2.077655 (2.161309) | 2.095116 / 1.504120 (0.590996) | 1.913925 / 1.541195 (0.372730) | 2.024669 / 1.468490 (0.556179) | 0.699172 / 4.584777 (-3.885605) | 3.845807 / 3.745712 (0.100094) | 2.167502 / 5.269862 (-3.102360) | 1.375267 / 4.565676 (-3.190410) | 0.086739 / 0.424275 (-0.337536) | 0.012198 / 0.007607 (0.004591) | 0.525975 / 0.226044 (0.299931) | 5.249449 / 2.268929 (2.980521) | 2.550565 / 55.444624 (-52.894060) | 2.257557 / 6.876477 (-4.618920) | 2.298936 / 2.142072 (0.156863) | 0.850295 / 4.805227 (-3.954932) | 0.170506 / 6.500664 (-6.330158) | 0.065659 / 0.075469 (-0.009810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330556 / 1.841788 (-0.511231) | 16.920203 / 8.074308 (8.845894) | 15.966739 / 10.191392 (5.775347) | 0.164000 / 0.680424 (-0.516424) | 0.018211 / 0.534201 (-0.515990) | 0.436253 / 0.579283 (-0.143030) | 0.449666 / 0.434364 (0.015302) | 0.522287 / 0.540337 (-0.018050) | 0.615944 / 1.386936 (-0.770992) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#824f96c11a02b3817d6b1bf4dfed0abab27777f0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007273 / 0.011353 (-0.004080) | 0.005198 / 0.011008 (-0.005810) | 0.114362 / 0.038508 (0.075854) | 0.031113 / 0.023109 (0.008003) | 0.378568 / 0.275898 (0.102670) | 0.441695 / 0.323480 (0.118215) | 0.006037 / 0.007986 (-0.001949) | 0.005102 / 0.004328 (0.000774) | 0.098682 / 0.004250 (0.094432) | 0.042797 / 0.037052 (0.005745) | 0.360028 / 0.258489 (0.101539) | 0.435757 / 0.293841 (0.141916) | 0.041438 / 0.128546 (-0.087109) | 0.013728 / 0.075646 (-0.061918) | 0.376154 / 0.419271 (-0.043117) | 0.075324 / 0.043533 (0.031791) | 0.357221 / 0.255139 (0.102082) | 0.416378 / 0.283200 (0.133178) | 0.110707 / 0.141683 (-0.030975) | 1.603215 / 1.452155 (0.151061) | 1.736843 / 1.492716 (0.244127) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249479 / 0.018006 (0.231473) | 0.513205 / 0.000490 (0.512715) | 0.003856 / 0.000200 (0.003656) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027750 / 0.037411 (-0.009661) | 0.105437 / 0.014526 (0.090911) | 0.115903 / 0.176557 (-0.060653) | 0.179662 / 0.737135 (-0.557474) | 0.116305 / 0.296338 (-0.180033) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551681 / 0.215209 (0.336472) | 5.544590 / 2.077655 (3.466935) | 2.193933 / 1.504120 (0.689813) | 1.898395 / 1.541195 (0.357201) | 1.877288 / 1.468490 
(0.408798) | 0.858097 / 4.584777 (-3.726680) | 4.920982 / 3.745712 (1.175270) | 2.478220 / 5.269862 (-2.791641) | 1.779608 / 4.565676 (-2.786069) | 0.101321 / 0.424275 (-0.322954) | 0.012627 / 0.007607 (0.005020) | 0.674865 / 0.226044 (0.448820) | 6.808224 / 2.268929 (4.539295) | 2.822466 / 55.444624 (-52.622159) | 2.170379 / 6.876477 (-4.706098) | 2.224278 / 2.142072 (0.082205) | 1.032763 / 4.805227 (-3.772464) | 0.198851 / 6.500664 (-6.301813) | 0.069249 / 0.075469 (-0.006220) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.425987 / 1.841788 (-0.415801) | 16.212942 / 8.074308 (8.138634) | 18.945770 / 10.191392 (8.754378) | 0.192901 / 0.680424 (-0.487522) | 0.025343 / 0.534201 (-0.508858) | 0.465441 / 0.579283 (-0.113842) | 0.540966 / 0.434364 (0.106602) | 0.576736 / 0.540337 (0.036399) | 0.675717 / 1.386936 (-0.711219) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007426 / 0.011353 (-0.003927) | 0.005023 / 0.011008 (-0.005985) | 0.085083 / 0.038508 (0.046575) | 0.030559 / 0.023109 (0.007449) | 0.398461 / 0.275898 (0.122563) | 0.418998 / 0.323480 (0.095518) | 0.006697 / 0.007986 (-0.001288) | 0.004665 / 0.004328 (0.000337) | 0.087724 / 0.004250 (0.083473) | 0.045799 / 0.037052 (0.008747) | 0.395165 / 0.258489 (0.136676) | 0.430172 / 0.293841 (0.136331) | 0.040486 / 0.128546 (-0.088060) | 0.014237 / 0.075646 (-0.061409) | 0.099429 / 0.419271 (-0.319843) | 0.056006 / 0.043533 (0.012473) | 0.389046 / 0.255139 (0.133907) | 0.419559 / 0.283200 (0.136359) | 0.108550 / 0.141683 (-0.033132) | 1.614052 / 1.452155 (0.161897) | 1.677785 / 1.492716 (0.185069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202178 / 0.018006 (0.184172) | 0.486365 / 0.000490 (0.485875) | 0.003844 / 0.000200 (0.003644) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027963 / 0.037411 (-0.009449) | 0.110399 / 0.014526 (0.095873) | 0.122266 / 0.176557 (-0.054291) | 0.178551 / 0.737135 (-0.558585) | 0.129259 / 0.296338 (-0.167080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604178 / 0.215209 (0.388969) | 6.135943 / 2.077655 (4.058288) | 2.547576 / 1.504120 (1.043456) | 2.262470 / 1.541195 (0.721276) | 2.275402 / 1.468490 (0.806912) | 0.878804 / 4.584777 (-3.705972) | 5.152200 / 3.745712 (1.406488) | 2.553715 / 5.269862 (-2.716147) | 1.580959 / 4.565676 (-2.984717) | 0.107895 / 0.424275 (-0.316380) | 0.012751 / 0.007607 (0.005143) | 0.770678 / 0.226044 (0.544633) | 7.744303 / 2.268929 (5.475374) | 3.342037 / 55.444624 (-52.102588) | 2.756848 / 6.876477 (-4.119629) | 2.739357 / 2.142072 (0.597285) | 1.086330 / 4.805227 (-3.718897) | 0.230983 / 6.500664 (-6.269681) | 0.073771 / 0.075469 (-0.001698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.493441 / 1.841788 (-0.348347) | 16.621611 / 8.074308 (8.547303) | 19.081000 / 10.191392 (8.889608) | 0.215623 / 0.680424 (-0.464801) | 0.025660 / 0.534201 (-0.508541) | 0.446490 / 0.579283 (-0.132793) | 0.560078 / 0.434364 (0.125714) | 0.527231 / 0.540337 (-0.013106) | 0.636551 / 1.386936 (-0.750385) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b899ea45c0a7e724ceb5f43c3a8b9fdb081fa67a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008266 / 0.011353 (-0.003087) | 0.005082 / 0.011008 (-0.005927) | 0.119858 / 0.038508 (0.081350) | 0.032907 / 0.023109 (0.009798) | 0.362816 / 0.275898 (0.086918) | 0.403684 / 0.323480 (0.080204) | 0.006296 / 0.007986 (-0.001690) | 0.006220 / 0.004328 (0.001891) | 0.095609 / 0.004250 (0.091359) | 0.048734 / 0.037052 (0.011682) | 0.385724 / 0.258489 (0.127235) | 0.424315 / 0.293841 (0.130475) | 0.042344 / 0.128546 (-0.086202) | 0.016147 / 0.075646 (-0.059500) | 0.409661 / 0.419271 (-0.009610) | 0.057900 / 0.043533 (0.014367) | 0.387013 / 0.255139 (0.131874) | 0.388901 / 0.283200 (0.105702) | 0.103920 / 0.141683 (-0.037762) | 1.732730 / 1.452155 (0.280575) | 1.863912 / 1.492716 (0.371196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237406 / 0.018006 (0.219400) | 0.514398 / 0.000490 (0.513909) | 0.005941 / 0.000200 (0.005741) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027524 / 0.037411 (-0.009888) | 0.116498 / 0.014526 (0.101972) | 0.129034 / 0.176557 (-0.047522) | 0.218272 / 0.737135 (-0.518864) | 0.148389 / 0.296338 (-0.147950) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604555 / 0.215209 (0.389346) | 5.921576 / 2.077655 (3.843921) | 2.410483 / 1.504120 (0.906363) | 2.220286 / 1.541195 (0.679092) | 2.138880 / 1.468490 
(0.670390) | 0.934962 / 4.584777 (-3.649815) | 5.808855 / 3.745712 (2.063143) | 4.881554 / 5.269862 (-0.388308) | 2.536408 / 4.565676 (-2.029268) | 0.124260 / 0.424275 (-0.300015) | 0.017798 / 0.007607 (0.010190) | 0.778991 / 0.226044 (0.552947) | 7.899262 / 2.268929 (5.630333) | 3.208667 / 55.444624 (-52.235957) | 2.631182 / 6.876477 (-4.245295) | 2.676199 / 2.142072 (0.534127) | 1.165516 / 4.805227 (-3.639711) | 0.228751 / 6.500664 (-6.271913) | 0.081378 / 0.075469 (0.005909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.522156 / 1.841788 (-0.319632) | 17.975381 / 8.074308 (9.901073) | 18.918882 / 10.191392 (8.727490) | 0.223984 / 0.680424 (-0.456440) | 0.025171 / 0.534201 (-0.509030) | 0.467894 / 0.579283 (-0.111389) | 0.559501 / 0.434364 (0.125137) | 0.550392 / 0.540337 (0.010055) | 0.696923 / 1.386936 (-0.690013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008577 / 0.011353 (-0.002775) | 0.006735 / 0.011008 (-0.004273) | 0.095108 / 0.038508 (0.056600) | 0.035059 / 0.023109 (0.011950) | 0.448576 / 0.275898 (0.172677) | 0.492049 / 0.323480 (0.168569) | 0.006600 / 0.007986 (-0.001385) | 0.004760 / 0.004328 (0.000431) | 0.094670 / 0.004250 (0.090419) | 0.052543 / 0.037052 (0.015491) | 0.458927 / 0.258489 (0.200438) | 0.511522 / 0.293841 (0.217681) | 0.046046 / 0.128546 (-0.082500) | 0.015227 / 0.075646 (-0.060419) | 0.114585 / 0.419271 (-0.304686) | 0.057569 / 0.043533 (0.014036) | 0.441989 / 0.255139 (0.186850) | 0.487001 / 0.283200 (0.203801) | 0.115688 / 0.141683 (-0.025995) | 1.777366 / 1.452155 (0.325211) | 1.906216 / 1.492716 (0.413499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224880 / 0.018006 (0.206874) | 0.504153 / 0.000490 (0.503664) | 0.001143 / 0.000200 (0.000943) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033618 / 0.037411 (-0.003793) | 0.127396 / 0.014526 (0.112870) | 0.135648 / 0.176557 (-0.040909) | 0.193140 / 0.737135 (-0.543995) | 0.142129 / 0.296338 (-0.154209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.692845 / 0.215209 (0.477636) | 6.804897 / 2.077655 (4.727242) | 2.851041 / 1.504120 (1.346921) | 2.480698 / 1.541195 (0.939504) | 2.488619 / 1.468490 (1.020129) | 0.970439 / 4.584777 (-3.614338) | 5.466059 / 3.745712 (1.720347) | 2.790261 / 5.269862 (-2.479601) | 1.727638 / 4.565676 (-2.838039) | 0.116345 / 0.424275 (-0.307930) | 0.014348 / 0.007607 (0.006740) | 0.845510 / 0.226044 (0.619465) | 8.397198 / 2.268929 (6.128270) | 3.591998 / 55.444624 (-51.852626) | 2.858339 / 6.876477 (-4.018137) | 2.905075 / 2.142072 (0.763003) | 1.193569 / 4.805227 (-3.611658) | 0.243091 / 6.500664 (-6.257573) | 0.082198 / 0.075469 (0.006729) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610327 / 1.841788 (-0.231461) | 17.191414 / 8.074308 (9.117106) | 20.176518 / 10.191392 (9.985126) | 0.246574 / 0.680424 (-0.433850) | 0.024343 / 0.534201 (-0.509858) | 0.482091 / 0.579283 (-0.097192) | 0.585241 / 0.434364 (0.150877) | 0.558833 / 0.540337 (0.018496) | 0.654811 / 1.386936 (-0.732125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#81761dbfa738354a9c50309313dfe90bea26d872 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006353 / 0.011353 (-0.004999) | 0.004393 / 0.011008 (-0.006616) | 0.098751 / 0.038508 (0.060242) | 0.029090 / 0.023109 (0.005981) | 0.304169 / 0.275898 (0.028271) | 0.339879 / 0.323480 (0.016399) | 0.005577 / 0.007986 (-0.002408) | 0.003516 / 0.004328 (-0.000813) | 0.077347 / 0.004250 (0.073097) | 0.041935 / 0.037052 (0.004882) | 0.305865 / 0.258489 (0.047376) | 0.357063 / 0.293841 (0.063222) | 0.025245 / 0.128546 (-0.103301) | 0.008753 / 0.075646 (-0.066893) | 0.316734 / 0.419271 (-0.102538) | 0.043464 / 0.043533 (-0.000069) | 0.300944 / 0.255139 (0.045805) | 0.330091 / 0.283200 (0.046891) | 0.088593 / 0.141683 (-0.053090) | 1.588958 / 1.452155 (0.136803) | 1.641376 / 1.492716 (0.148660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220290 / 0.018006 (0.202284) | 0.445430 / 0.000490 (0.444940) | 0.004800 / 0.000200 (0.004600) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023828 / 0.037411 (-0.013583) | 0.103446 / 0.014526 (0.088920) | 0.110668 / 0.176557 (-0.065889) | 0.169604 / 0.737135 (-0.567531) | 0.114818 / 0.296338 (-0.181520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416951 / 0.215209 (0.201742) | 4.138917 / 2.077655 (2.061263) | 1.891265 / 1.504120 (0.387145) | 1.687068 / 1.541195 (0.145873) | 1.726618 / 1.468490 
(0.258128) | 0.546977 / 4.584777 (-4.037800) | 3.536153 / 3.745712 (-0.209560) | 1.795206 / 5.269862 (-3.474656) | 1.019845 / 4.565676 (-3.545831) | 0.067040 / 0.424275 (-0.357235) | 0.012038 / 0.007607 (0.004431) | 0.520583 / 0.226044 (0.294539) | 5.211520 / 2.268929 (2.942591) | 2.336136 / 55.444624 (-53.108488) | 2.011262 / 6.876477 (-4.865215) | 2.137311 / 2.142072 (-0.004762) | 0.654779 / 4.805227 (-4.150448) | 0.134555 / 6.500664 (-6.366109) | 0.066427 / 0.075469 (-0.009042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240187 / 1.841788 (-0.601600) | 14.104063 / 8.074308 (6.029755) | 13.369572 / 10.191392 (3.178180) | 0.147891 / 0.680424 (-0.532533) | 0.016993 / 0.534201 (-0.517208) | 0.364863 / 0.579283 (-0.214420) | 0.398684 / 0.434364 (-0.035680) | 0.430524 / 0.540337 (-0.109813) | 0.520920 / 1.386936 (-0.866016) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006845 / 0.011353 (-0.004508) | 0.004420 / 0.011008 (-0.006588) | 0.078334 / 0.038508 (0.039825) | 0.030566 / 0.023109 (0.007457) | 0.409568 / 0.275898 (0.133670) | 0.458389 / 0.323480 (0.134910) | 0.005739 / 0.007986 (-0.002247) | 0.005222 / 0.004328 (0.000893) | 0.076066 / 0.004250 (0.071816) | 0.049239 / 0.037052 (0.012187) | 0.409841 / 0.258489 (0.151352) | 0.472250 / 0.293841 (0.178409) | 0.025463 / 0.128546 (-0.103084) | 0.008738 / 0.075646 (-0.066909) | 0.083114 / 0.419271 (-0.336157) | 0.041233 / 0.043533 (-0.002300) | 0.407158 / 0.255139 (0.152019) | 0.438724 / 0.283200 (0.155524) | 0.097974 / 0.141683 (-0.043709) | 1.536514 / 1.452155 (0.084360) | 1.636704 / 1.492716 (0.143987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240589 / 0.018006 (0.222583) | 0.440328 / 0.000490 (0.439838) | 0.000937 / 0.000200 (0.000737) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027559 / 0.037411 (-0.009853) | 0.109930 / 0.014526 (0.095405) | 0.113366 / 0.176557 (-0.063190) | 0.166849 / 0.737135 (-0.570286) | 0.118872 / 0.296338 (-0.177467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474120 / 0.215209 (0.258911) | 4.739222 / 2.077655 (2.661567) | 2.484386 / 1.504120 (0.980266) | 2.281937 / 1.541195 (0.740742) | 2.362974 / 1.468490 (0.894484) | 0.549897 / 4.584777 (-4.034879) | 3.425540 / 3.745712 (-0.320172) | 1.765810 / 5.269862 (-3.504051) | 1.008277 / 4.565676 (-3.557400) | 0.067288 / 0.424275 (-0.356987) | 0.011954 / 0.007607 (0.004347) | 0.577216 / 0.226044 (0.351172) | 5.790659 / 2.268929 (3.521731) | 2.946732 / 55.444624 (-52.497892) | 2.608835 / 6.876477 (-4.267641) | 2.642987 / 2.142072 (0.500915) | 0.652798 / 4.805227 (-4.152429) | 0.135909 / 6.500664 (-6.364755) | 0.068480 / 0.075469 (-0.006989) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353550 / 1.841788 (-0.488237) | 14.732084 / 8.074308 (6.657775) | 14.439174 / 10.191392 (4.247782) | 0.131445 / 0.680424 (-0.548979) | 0.016608 / 0.534201 (-0.517593) | 0.368103 / 0.579283 (-0.211180) | 0.393918 / 0.434364 (-0.040446) | 0.423562 / 0.540337 (-0.116776) | 0.515041 / 1.386936 (-0.871895) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8907bdb23f78545303eb3bb0561e33ec6787f96c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006414 / 0.011353 (-0.004938) | 0.004704 / 0.011008 (-0.006305) | 0.096012 / 0.038508 (0.057504) | 0.032910 / 0.023109 (0.009800) | 0.290676 / 0.275898 (0.014778) | 0.319646 / 0.323480 (-0.003834) | 0.005806 / 0.007986 (-0.002180) | 0.004008 / 0.004328 (-0.000320) | 0.073982 / 0.004250 (0.069731) | 0.048985 / 0.037052 (0.011933) | 0.299498 / 0.258489 (0.041009) | 0.338118 / 0.293841 (0.044277) | 0.027680 / 0.128546 (-0.100866) | 0.009051 / 0.075646 (-0.066595) | 0.325051 / 0.419271 (-0.094221) | 0.051011 / 0.043533 (0.007478) | 0.292249 / 0.255139 (0.037110) | 0.315733 / 0.283200 (0.032533) | 0.100327 / 0.141683 (-0.041356) | 1.481862 / 1.452155 (0.029707) | 1.544884 / 1.492716 (0.052168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289610 / 0.018006 (0.271603) | 0.510164 / 0.000490 (0.509675) | 0.004726 / 0.000200 (0.004526) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027617 / 0.037411 (-0.009794) | 0.107593 / 0.014526 (0.093068) | 0.122783 / 0.176557 (-0.053774) | 0.181086 / 0.737135 (-0.556049) | 0.128030 / 0.296338 (-0.168308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403571 / 0.215209 (0.188362) | 4.002881 / 2.077655 (1.925227) | 1.805550 / 1.504120 (0.301430) | 1.619165 / 1.541195 (0.077971) | 1.606536 / 1.468490 
(0.138046) | 0.518917 / 4.584777 (-4.065860) | 3.731498 / 3.745712 (-0.014214) | 3.206645 / 5.269862 (-2.063217) | 1.641615 / 4.565676 (-2.924062) | 0.065100 / 0.424275 (-0.359175) | 0.011396 / 0.007607 (0.003789) | 0.500597 / 0.226044 (0.274553) | 4.992293 / 2.268929 (2.723364) | 2.278726 / 55.444624 (-53.165898) | 1.960823 / 6.876477 (-4.915654) | 2.038684 / 2.142072 (-0.103388) | 0.640910 / 4.805227 (-4.164318) | 0.140597 / 6.500664 (-6.360067) | 0.062114 / 0.075469 (-0.013355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.167366 / 1.841788 (-0.674422) | 14.748193 / 8.074308 (6.673884) | 13.592381 / 10.191392 (3.400989) | 0.165341 / 0.680424 (-0.515083) | 0.017360 / 0.534201 (-0.516841) | 0.393448 / 0.579283 (-0.185836) | 0.422951 / 0.434364 (-0.011413) | 0.460491 / 0.540337 (-0.079847) | 0.558238 / 1.386936 (-0.828698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006373 / 0.011353 (-0.004980) | 0.004587 / 0.011008 (-0.006421) | 0.076421 / 0.038508 (0.037913) | 0.032162 / 0.023109 (0.009052) | 0.385531 / 0.275898 (0.109633) | 0.410424 / 0.323480 (0.086944) | 0.006154 / 0.007986 (-0.001832) | 0.005533 / 0.004328 (0.001205) | 0.077035 / 0.004250 (0.072784) | 0.051571 / 0.037052 (0.014519) | 0.393283 / 0.258489 (0.134794) | 0.433756 / 0.293841 (0.139915) | 0.028381 / 0.128546 (-0.100165) | 0.009034 / 0.075646 (-0.066613) | 0.083836 / 0.419271 (-0.335435) | 0.048246 / 0.043533 (0.004713) | 0.385437 / 0.255139 (0.130298) | 0.394187 / 0.283200 (0.110987) | 0.105453 / 0.141683 (-0.036230) | 1.459173 / 1.452155 (0.007018) | 1.575083 / 1.492716 (0.082367) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320324 / 0.018006 (0.302318) | 0.502945 / 0.000490 (0.502455) | 0.004470 / 0.000200 (0.004270) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028118 / 0.037411 (-0.009293) | 0.111430 / 0.014526 (0.096904) | 0.123141 / 0.176557 (-0.053415) | 0.175215 / 0.737135 (-0.561920) | 0.126429 / 0.296338 (-0.169909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433407 / 0.215209 (0.218198) | 4.329945 / 2.077655 (2.252291) | 2.096822 / 1.504120 (0.592702) | 1.908173 / 1.541195 (0.366978) | 1.967167 / 1.468490 (0.498676) | 0.529207 / 4.584777 (-4.055570) | 3.798424 / 3.745712 (0.052712) | 3.050716 / 5.269862 (-2.219146) | 1.445009 / 4.565676 (-3.120668) | 0.066467 / 0.424275 (-0.357809) | 0.011698 / 0.007607 (0.004090) | 0.528660 / 0.226044 (0.302615) | 5.282069 / 2.268929 (3.013141) | 2.535501 / 55.444624 (-52.909124) | 2.202856 / 6.876477 (-4.673621) | 2.293225 / 2.142072 (0.151153) | 0.640216 / 4.805227 (-4.165011) | 0.140884 / 6.500664 (-6.359780) | 0.064231 / 0.075469 (-0.011238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292129 / 1.841788 (-0.549659) | 15.371370 / 8.074308 (7.297062) | 15.114854 / 10.191392 (4.923462) | 0.176870 / 0.680424 (-0.503554) | 0.017380 / 0.534201 (-0.516821) | 0.398156 / 0.579283 (-0.181127) | 0.442277 / 0.434364 (0.007913) | 0.467093 / 0.540337 (-0.073244) | 0.561599 / 1.386936 (-0.825337) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#323747a5ff7d9b204ea3c4989d658af7102f7bbd \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009360 / 0.011353 (-0.001993) | 0.006297 / 0.011008 (-0.004712) | 0.133131 / 0.038508 (0.094623) | 0.040261 / 0.023109 (0.017152) | 0.419101 / 0.275898 (0.143203) | 0.453087 / 0.323480 (0.129607) | 0.007718 / 0.007986 (-0.000268) | 0.005698 / 0.004328 (0.001369) | 0.102261 / 0.004250 (0.098010) | 0.055147 / 0.037052 (0.018095) | 0.428355 / 0.258489 (0.169866) | 0.505241 / 0.293841 (0.211400) | 0.046745 / 0.128546 (-0.081802) | 0.015559 / 0.075646 (-0.060088) | 0.441775 / 0.419271 (0.022503) | 0.070165 / 0.043533 (0.026632) | 0.421957 / 0.255139 (0.166818) | 0.445156 / 0.283200 (0.161957) | 0.126321 / 0.141683 (-0.015362) | 1.900486 / 1.452155 (0.448331) | 2.088630 / 1.492716 (0.595913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260244 / 0.018006 (0.242237) | 0.606317 / 0.000490 (0.605828) | 0.006827 / 0.000200 (0.006627) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031958 / 0.037411 (-0.005453) | 0.139362 / 0.014526 (0.124836) | 0.148748 / 0.176557 (-0.027809) | 0.226269 / 0.737135 (-0.510866) | 0.161145 / 0.296338 (-0.135194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666287 / 0.215209 (0.451078) | 6.588707 / 2.077655 (4.511053) | 2.736155 / 1.504120 (1.232035) | 2.329601 / 1.541195 (0.788406) | 2.324991 / 1.468490 
(0.856501) | 0.943608 / 4.584777 (-3.641169) | 6.051653 / 3.745712 (2.305941) | 2.929150 / 5.269862 (-2.340711) | 1.804461 / 4.565676 (-2.761216) | 0.113302 / 0.424275 (-0.310973) | 0.015245 / 0.007607 (0.007638) | 0.827029 / 0.226044 (0.600984) | 8.211536 / 2.268929 (5.942608) | 3.445231 / 55.444624 (-51.999393) | 2.756728 / 6.876477 (-4.119748) | 2.904039 / 2.142072 (0.761966) | 1.162339 / 4.805227 (-3.642888) | 0.231168 / 6.500664 (-6.269496) | 0.089038 / 0.075469 (0.013569) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640619 / 1.841788 (-0.201169) | 20.034157 / 8.074308 (11.959849) | 22.346006 / 10.191392 (12.154614) | 0.255300 / 0.680424 (-0.425124) | 0.031452 / 0.534201 (-0.502749) | 0.563290 / 0.579283 (-0.015993) | 0.653556 / 0.434364 (0.219192) | 0.687663 / 0.540337 (0.147326) | 0.816432 / 1.386936 (-0.570504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010340 / 0.011353 (-0.001013) | 0.006245 / 0.011008 (-0.004764) | 0.128012 / 0.038508 (0.089504) | 0.041799 / 0.023109 (0.018690) | 0.533340 / 0.275898 (0.257442) | 0.592243 / 0.323480 (0.268763) | 0.009256 / 0.007986 (0.001271) | 0.005310 / 0.004328 (0.000982) | 0.110973 / 0.004250 (0.106722) | 0.065465 / 0.037052 (0.028412) | 0.533845 / 0.258489 (0.275356) | 0.602190 / 0.293841 (0.308349) | 0.060245 / 0.128546 (-0.068301) | 0.016954 / 0.075646 (-0.058693) | 0.119727 / 0.419271 (-0.299545) | 0.064628 / 0.043533 (0.021095) | 0.558229 / 0.255139 (0.303090) | 0.563696 / 0.283200 (0.280496) | 0.137225 / 0.141683 (-0.004458) | 2.038605 / 1.452155 (0.586451) | 2.158655 / 1.492716 (0.665939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327067 / 0.018006 (0.309061) | 0.628812 / 0.000490 (0.628323) | 0.010259 / 0.000200 (0.010059) | 0.000123 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037023 / 0.037411 (-0.000388) | 0.142462 / 0.014526 (0.127936) | 0.158165 / 0.176557 (-0.018392) | 0.220808 / 0.737135 (-0.516328) | 0.163608 / 0.296338 (-0.132731) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.776119 / 0.215209 (0.560910) | 7.813044 / 2.077655 (5.735389) | 3.610901 / 1.504120 (2.106781) | 3.195144 / 1.541195 (1.653950) | 3.218245 / 1.468490 (1.749755) | 1.092732 / 4.584777 (-3.492045) | 5.965526 / 3.745712 (2.219813) | 2.914683 / 5.269862 (-2.355179) | 1.848397 / 4.565676 (-2.717280) | 0.114436 / 0.424275 (-0.309839) | 0.014794 / 0.007607 (0.007187) | 0.887141 / 0.226044 (0.661096) | 9.009743 / 2.268929 (6.740815) | 4.180143 / 55.444624 (-51.264481) | 3.452194 / 6.876477 (-3.424283) | 3.493520 / 2.142072 (1.351448) | 1.233327 / 4.805227 (-3.571900) | 0.235390 / 6.500664 (-6.265274) | 0.099544 / 0.075469 (0.024075) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853482 / 1.841788 (0.011694) | 20.071177 / 8.074308 (11.996869) | 24.507618 / 10.191392 (14.316226) | 0.260164 / 0.680424 (-0.420260) | 0.028433 / 0.534201 (-0.505768) | 0.549181 / 0.579283 (-0.030102) | 0.650069 / 0.434364 (0.215705) | 0.629541 / 0.540337 (0.089203) | 0.808932 / 1.386936 (-0.578004) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f39ba76af62c8037de3f464e87cbb095f8729062 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009537 / 0.011353 (-0.001816) | 0.006036 / 0.011008 (-0.004972) | 0.141210 / 0.038508 (0.102701) | 0.037493 / 0.023109 (0.014384) | 0.404285 / 0.275898 (0.128386) | 0.458906 / 0.323480 (0.135427) | 0.007224 / 0.007986 (-0.000761) | 0.005148 / 0.004328 (0.000819) | 0.103889 / 0.004250 (0.099639) | 0.048877 / 0.037052 (0.011824) | 0.413220 / 0.258489 (0.154731) | 0.458153 / 0.293841 (0.164312) | 0.046008 / 0.128546 (-0.082538) | 0.015116 / 0.075646 (-0.060531) | 0.439836 / 0.419271 (0.020565) | 0.067527 / 0.043533 (0.023994) | 0.435794 / 0.255139 (0.180656) | 0.451687 / 0.283200 (0.168487) | 0.121274 / 0.141683 (-0.020409) | 1.950199 / 1.452155 (0.498044) | 2.035589 / 1.492716 (0.542873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247056 / 0.018006 (0.229050) | 0.550348 / 0.000490 (0.549858) | 0.005504 / 0.000200 (0.005305) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032171 / 0.037411 (-0.005240) | 0.135983 / 0.014526 (0.121457) | 0.149587 / 0.176557 (-0.026970) | 0.233414 / 0.737135 (-0.503722) | 0.152598 / 0.296338 (-0.143740) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634813 / 0.215209 (0.419604) | 6.453619 / 2.077655 (4.375964) | 2.582070 / 1.504120 (1.077951) | 2.214292 / 1.541195 (0.673097) | 2.220012 / 1.468490 
(0.751522) | 0.987374 / 4.584777 (-3.597403) | 5.543760 / 3.745712 (1.798047) | 2.808865 / 5.269862 (-2.460996) | 1.714713 / 4.565676 (-2.850963) | 0.111016 / 0.424275 (-0.313259) | 0.014688 / 0.007607 (0.007081) | 0.842542 / 0.226044 (0.616498) | 8.414336 / 2.268929 (6.145407) | 3.501021 / 55.444624 (-51.943604) | 2.665335 / 6.876477 (-4.211142) | 2.843706 / 2.142072 (0.701633) | 1.196398 / 4.805227 (-3.608829) | 0.245508 / 6.500664 (-6.255156) | 0.086970 / 0.075469 (0.011501) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590244 / 1.841788 (-0.251544) | 18.694141 / 8.074308 (10.619833) | 21.752463 / 10.191392 (11.561071) | 0.264511 / 0.680424 (-0.415913) | 0.028713 / 0.534201 (-0.505488) | 0.531102 / 0.579283 (-0.048181) | 0.626302 / 0.434364 (0.191938) | 0.624541 / 0.540337 (0.084203) | 0.745745 / 1.386936 (-0.641191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010097 / 0.011353 (-0.001256) | 0.005558 / 0.011008 (-0.005451) | 0.111326 / 0.038508 (0.072818) | 0.036465 / 0.023109 (0.013356) | 0.472116 / 0.275898 (0.196218) | 0.524479 / 0.323480 (0.200999) | 0.007466 / 0.007986 (-0.000520) | 0.005440 / 0.004328 (0.001112) | 0.103482 / 0.004250 (0.099231) | 0.053217 / 0.037052 (0.016165) | 0.476685 / 0.258489 (0.218196) | 0.554011 / 0.293841 (0.260170) | 0.047157 / 0.128546 (-0.081390) | 0.015895 / 0.075646 (-0.059751) | 0.115997 / 0.419271 (-0.303274) | 0.062290 / 0.043533 (0.018758) | 0.474166 / 0.255139 (0.219027) | 0.498854 / 0.283200 (0.215655) | 0.121798 / 0.141683 (-0.019885) | 1.956583 / 1.452155 (0.504428) | 2.069620 / 1.492716 (0.576904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278637 / 0.018006 (0.260631) | 0.555295 / 0.000490 (0.554805) | 0.007401 / 0.000200 (0.007201) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033576 / 0.037411 (-0.003835) | 0.136479 / 0.014526 (0.121954) | 0.153960 / 0.176557 (-0.022597) | 0.203422 / 0.737135 (-0.533713) | 0.154159 / 0.296338 (-0.142180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.672561 / 0.215209 (0.457352) | 6.956675 / 2.077655 (4.879020) | 3.063636 / 1.504120 (1.559516) | 2.668256 / 1.541195 (1.127061) | 2.794793 / 1.468490 (1.326303) | 0.964242 / 4.584777 (-3.620535) | 5.785992 / 3.745712 (2.040279) | 2.850079 / 5.269862 (-2.419782) | 1.782491 / 4.565676 (-2.783186) | 0.114859 / 0.424275 (-0.309416) | 0.015229 / 0.007607 (0.007622) | 0.858406 / 0.226044 (0.632362) | 8.646296 / 2.268929 (6.377367) | 3.842133 / 55.444624 (-51.602492) | 3.180017 / 6.876477 (-3.696460) | 3.241315 / 2.142072 (1.099243) | 1.248988 / 4.805227 (-3.556239) | 0.235075 / 6.500664 (-6.265589) | 0.087192 / 0.075469 (0.011723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.783877 / 1.841788 (-0.057910) | 19.477223 / 8.074308 (11.402914) | 22.926734 / 10.191392 (12.735342) | 0.246970 / 0.680424 (-0.433454) | 0.026386 / 0.534201 (-0.507815) | 0.517599 / 0.579283 (-0.061684) | 0.626504 / 0.434364 (0.192140) | 0.606943 / 0.540337 (0.066606) | 0.739115 / 1.386936 (-0.647821) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e8f051a41454f8625091338e6b53119a5eb9b2a0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008085 / 0.011353 (-0.003268) | 0.005568 / 0.011008 (-0.005440) | 0.119674 / 0.038508 (0.081166) | 0.040452 / 0.023109 (0.017343) | 0.360288 / 0.275898 (0.084390) | 0.409448 / 0.323480 (0.085968) | 0.007281 / 0.007986 (-0.000705) | 0.004931 / 0.004328 (0.000602) | 0.089956 / 0.004250 (0.085706) | 0.056088 / 0.037052 (0.019036) | 0.384708 / 0.258489 (0.126219) | 0.423506 / 0.293841 (0.129665) | 0.033280 / 0.128546 (-0.095266) | 0.010696 / 0.075646 (-0.064951) | 0.394851 / 0.419271 (-0.024421) | 0.058412 / 0.043533 (0.014879) | 0.361514 / 0.255139 (0.106375) | 0.399121 / 0.283200 (0.115921) | 0.117927 / 0.141683 (-0.023756) | 1.791499 / 1.452155 (0.339344) | 1.889000 / 1.492716 (0.396284) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253324 / 0.018006 (0.235318) | 0.536151 / 0.000490 (0.535661) | 0.010450 / 0.000200 (0.010250) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034646 / 0.037411 (-0.002765) | 0.145999 / 0.014526 (0.131473) | 0.153793 / 0.176557 (-0.022763) | 0.232871 / 0.737135 (-0.504265) | 0.161151 / 0.296338 (-0.135188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471407 / 0.215209 (0.256197) | 4.715702 / 2.077655 (2.638047) | 2.228939 / 1.504120 (0.724819) | 2.008511 / 1.541195 (0.467317) | 2.135182 / 1.468490 
(0.666692) | 0.620720 / 4.584777 (-3.964057) | 4.960731 / 3.745712 (1.215019) | 2.222469 / 5.269862 (-3.047393) | 1.284467 / 4.565676 (-3.281209) | 0.077931 / 0.424275 (-0.346344) | 0.013935 / 0.007607 (0.006328) | 0.593164 / 0.226044 (0.367120) | 5.940829 / 2.268929 (3.671900) | 2.664277 / 55.444624 (-52.780347) | 2.290655 / 6.876477 (-4.585822) | 2.496664 / 2.142072 (0.354592) | 0.759166 / 4.805227 (-4.046061) | 0.168011 / 6.500664 (-6.332653) | 0.077993 / 0.075469 (0.002524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.440663 / 1.841788 (-0.401125) | 19.105377 / 8.074308 (11.031069) | 16.068118 / 10.191392 (5.876726) | 0.193024 / 0.680424 (-0.487400) | 0.022348 / 0.534201 (-0.511853) | 0.517454 / 0.579283 (-0.061829) | 0.528072 / 0.434364 (0.093708) | 0.565293 / 0.540337 (0.024955) | 0.676578 / 1.386936 (-0.710358) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008089 / 0.011353 (-0.003264) | 0.005287 / 0.011008 (-0.005721) | 0.087964 / 0.038508 (0.049456) | 0.041548 / 0.023109 (0.018439) | 0.437733 / 0.275898 (0.161835) | 0.487878 / 0.323480 (0.164398) | 0.006898 / 0.007986 (-0.001087) | 0.004649 / 0.004328 (0.000320) | 0.086982 / 0.004250 (0.082732) | 0.056874 / 0.037052 (0.019822) | 0.437397 / 0.258489 (0.178908) | 0.490636 / 0.293841 (0.196795) | 0.033550 / 0.128546 (-0.094997) | 0.010430 / 0.075646 (-0.065216) | 0.096076 / 0.419271 (-0.323196) | 0.054028 / 0.043533 (0.010495) | 0.450262 / 0.255139 (0.195123) | 0.465566 / 0.283200 (0.182366) | 0.119987 / 0.141683 (-0.021696) | 1.764428 / 1.452155 (0.312273) | 1.841547 / 1.492716 (0.348831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271427 / 0.018006 (0.253420) | 0.506386 / 0.000490 (0.505896) | 0.001213 / 0.000200 (0.001013) | 0.000125 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036159 / 0.037411 (-0.001253) | 0.140578 / 0.014526 (0.126053) | 0.147517 / 0.176557 (-0.029040) | 0.206215 / 0.737135 (-0.530921) | 0.152560 / 0.296338 (-0.143779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522833 / 0.215209 (0.307624) | 5.215732 / 2.077655 (3.138077) | 2.553406 / 1.504120 (1.049286) | 2.344815 / 1.541195 (0.803620) | 2.422377 / 1.468490 (0.953886) | 0.631197 / 4.584777 (-3.953580) | 4.906216 / 3.745712 (1.160504) | 2.212923 / 5.269862 (-3.056938) | 1.352937 / 4.565676 (-3.212740) | 0.079141 / 0.424275 (-0.345135) | 0.013691 / 0.007607 (0.006084) | 0.634939 / 0.226044 (0.408895) | 6.578770 / 2.268929 (4.309842) | 3.080339 / 55.444624 (-52.364286) | 2.710243 / 6.876477 (-4.166234) | 2.740476 / 2.142072 (0.598404) | 0.783610 / 4.805227 (-4.021617) | 0.171589 / 6.500664 (-6.329075) | 0.077311 / 0.075469 (0.001842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584847 / 1.841788 (-0.256941) | 19.510132 / 8.074308 (11.435824) | 18.074572 / 10.191392 (7.883180) | 0.173494 / 0.680424 (-0.506930) | 0.021149 / 0.534201 (-0.513052) | 0.469026 / 0.579283 (-0.110258) | 0.518463 / 0.434364 (0.084099) | 0.550363 / 0.540337 (0.010026) | 0.667087 / 1.386936 (-0.719849) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5dfcd876c25cc0ffbd6b5b518b017419390a8ada \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.004783 / 0.011008 (-0.006225) | 0.103991 / 0.038508 (0.065483) | 0.039098 / 0.023109 (0.015989) | 0.319851 / 0.275898 (0.043952) | 0.356104 / 0.323480 (0.032625) | 0.007077 / 0.007986 (-0.000909) | 0.004188 / 0.004328 (-0.000141) | 0.078360 / 0.004250 (0.074109) | 0.050951 / 0.037052 (0.013899) | 0.321791 / 0.258489 (0.063302) | 0.356123 / 0.293841 (0.062283) | 0.028967 / 0.128546 (-0.099579) | 0.009091 / 0.075646 (-0.066555) | 0.355265 / 0.419271 (-0.064007) | 0.052521 / 0.043533 (0.008988) | 0.317333 / 0.255139 (0.062194) | 0.340747 / 0.283200 (0.057547) | 0.104354 / 0.141683 (-0.037329) | 1.522791 / 1.452155 (0.070636) | 1.579835 / 1.492716 (0.087118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260539 / 0.018006 (0.242532) | 0.454230 / 0.000490 (0.453740) | 0.036588 / 0.000200 (0.036388) | 0.000289 / 0.000054 (0.000235) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028375 / 0.037411 (-0.009036) | 0.118939 / 0.014526 (0.104413) | 0.126553 / 0.176557 (-0.050004) | 0.184596 / 0.737135 (-0.552539) | 0.130583 / 0.296338 (-0.165755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417353 / 0.215209 (0.202144) | 4.171595 / 2.077655 (2.093940) | 1.855096 / 1.504120 (0.350976) | 1.673941 / 1.541195 (0.132747) | 1.761370 / 1.468490 
(0.292880) | 0.544081 / 4.584777 (-4.040696) | 3.851877 / 3.745712 (0.106165) | 1.896661 / 5.269862 (-3.373200) | 1.093303 / 4.565676 (-3.472373) | 0.067967 / 0.424275 (-0.356308) | 0.012313 / 0.007607 (0.004706) | 0.532316 / 0.226044 (0.306272) | 5.336016 / 2.268929 (3.067087) | 2.344780 / 55.444624 (-53.099845) | 1.993909 / 6.876477 (-4.882568) | 2.167324 / 2.142072 (0.025251) | 0.670334 / 4.805227 (-4.134893) | 0.147705 / 6.500664 (-6.352959) | 0.067634 / 0.075469 (-0.007835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251005 / 1.841788 (-0.590783) | 15.405531 / 8.074308 (7.331223) | 14.197019 / 10.191392 (4.005627) | 0.144230 / 0.680424 (-0.536193) | 0.018352 / 0.534201 (-0.515849) | 0.427536 / 0.579283 (-0.151748) | 0.433135 / 0.434364 (-0.001229) | 0.502624 / 0.540337 (-0.037713) | 0.612312 / 1.386936 (-0.774624) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007011 / 0.011353 (-0.004342) | 0.004857 / 0.011008 (-0.006151) | 0.077797 / 0.038508 (0.039289) | 0.035411 / 0.023109 (0.012302) | 0.368234 / 0.275898 (0.092336) | 0.408359 / 0.323480 (0.084879) | 0.005883 / 0.007986 (-0.002102) | 0.004311 / 0.004328 (-0.000017) | 0.077216 / 0.004250 (0.072966) | 0.052062 / 0.037052 (0.015010) | 0.368502 / 0.258489 (0.110013) | 0.428681 / 0.293841 (0.134840) | 0.028889 / 0.128546 (-0.099657) | 0.009146 / 0.075646 (-0.066501) | 0.085515 / 0.419271 (-0.333756) | 0.050216 / 0.043533 (0.006683) | 0.359562 / 0.255139 (0.104423) | 0.378335 / 0.283200 (0.095135) | 0.106351 / 0.141683 (-0.035332) | 1.538943 / 1.452155 (0.086788) | 1.663572 / 1.492716 (0.170855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216917 / 0.018006 (0.198911) | 0.444130 / 0.000490 (0.443641) | 0.002640 / 0.000200 (0.002440) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032509 / 0.037411 (-0.004902) | 0.123955 / 0.014526 (0.109430) | 0.133236 / 0.176557 (-0.043321) | 0.187408 / 0.737135 (-0.549727) | 0.136696 / 0.296338 (-0.159643) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443714 / 0.215209 (0.228505) | 4.416973 / 2.077655 (2.339318) | 2.145279 / 1.504120 (0.641159) | 1.946669 / 1.541195 (0.405474) | 2.044105 / 1.468490 (0.575614) | 0.534463 / 4.584777 (-4.050314) | 3.824926 / 3.745712 (0.079214) | 3.151796 / 5.269862 (-2.118066) | 1.497513 / 4.565676 (-3.068164) | 0.066799 / 0.424275 (-0.357476) | 0.012408 / 0.007607 (0.004801) | 0.544182 / 0.226044 (0.318138) | 5.419403 / 2.268929 (3.150474) | 2.605191 / 55.444624 (-52.839433) | 2.285354 / 6.876477 (-4.591123) | 2.359520 / 2.142072 (0.217448) | 0.655489 / 4.805227 (-4.149738) | 0.143496 / 6.500664 (-6.357168) | 0.066782 / 0.075469 (-0.008687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329370 / 1.841788 (-0.512418) | 16.058019 / 8.074308 (7.983711) | 15.119769 / 10.191392 (4.928377) | 0.147967 / 0.680424 (-0.532457) | 0.018360 / 0.534201 (-0.515841) | 0.436847 / 0.579283 (-0.142436) | 0.435136 / 0.434364 (0.000773) | 0.507176 / 0.540337 (-0.033161) | 0.610627 / 1.386936 (-0.776309) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b4cc3ee6d8945052283076854eb77575d52b7432 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006425 / 0.011353 (-0.004927) | 0.003710 / 0.011008 (-0.007298) | 0.102072 / 0.038508 (0.063564) | 0.033974 / 0.023109 (0.010865) | 0.273146 / 0.275898 (-0.002752) | 0.313254 / 0.323480 (-0.010226) | 0.004889 / 0.007986 (-0.003096) | 0.004803 / 0.004328 (0.000475) | 0.067359 / 0.004250 (0.063109) | 0.040281 / 0.037052 (0.003228) | 0.302106 / 0.258489 (0.043617) | 0.318039 / 0.293841 (0.024198) | 0.028839 / 0.128546 (-0.099707) | 0.008726 / 0.075646 (-0.066921) | 0.322532 / 0.419271 (-0.096739) | 0.048845 / 0.043533 (0.005312) | 0.299836 / 0.255139 (0.044697) | 0.300983 / 0.283200 (0.017784) | 0.103384 / 0.141683 (-0.038299) | 1.417245 / 1.452155 (-0.034910) | 1.538819 / 1.492716 (0.046102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219798 / 0.018006 (0.201792) | 0.442297 / 0.000490 (0.441807) | 0.013792 / 0.000200 (0.013592) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024996 / 0.037411 (-0.012416) | 0.098558 / 0.014526 (0.084032) | 0.116423 / 0.176557 (-0.060133) | 0.163481 / 0.737135 (-0.573654) | 0.115031 / 0.296338 (-0.181308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392411 / 0.215209 (0.177202) | 4.025992 / 2.077655 (1.948337) | 1.850809 / 1.504120 (0.346690) | 1.668330 / 1.541195 (0.127136) | 1.627041 / 1.468490 
(0.158551) | 0.510721 / 4.584777 (-4.074055) | 3.841318 / 3.745712 (0.095606) | 3.416979 / 5.269862 (-1.852883) | 1.640796 / 4.565676 (-2.924880) | 0.061968 / 0.424275 (-0.362307) | 0.010281 / 0.007607 (0.002674) | 0.485592 / 0.226044 (0.259548) | 4.872205 / 2.268929 (2.603277) | 2.146753 / 55.444624 (-53.297871) | 1.832087 / 6.876477 (-5.044390) | 1.920928 / 2.142072 (-0.221144) | 0.606363 / 4.805227 (-4.198864) | 0.134351 / 6.500664 (-6.366313) | 0.057583 / 0.075469 (-0.017886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.153048 / 1.841788 (-0.688739) | 14.165743 / 8.074308 (6.091435) | 12.237798 / 10.191392 (2.046406) | 0.159815 / 0.680424 (-0.520608) | 0.018226 / 0.534201 (-0.515975) | 0.372390 / 0.579283 (-0.206893) | 0.396552 / 0.434364 (-0.037811) | 0.439445 / 0.540337 (-0.100892) | 0.521924 / 1.386936 (-0.865012) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006162 / 0.011353 (-0.005191) | 0.004006 / 0.011008 (-0.007002) | 0.067226 / 0.038508 (0.028718) | 0.030285 / 0.023109 (0.007176) | 0.361220 / 0.275898 (0.085322) | 0.386783 / 0.323480 (0.063303) | 0.005202 / 0.007986 (-0.002784) | 0.003453 / 0.004328 (-0.000876) | 0.068299 / 0.004250 (0.064048) | 0.041433 / 0.037052 (0.004381) | 0.360222 / 0.258489 (0.101733) | 0.399327 / 0.293841 (0.105486) | 0.026066 / 0.128546 (-0.102480) | 0.008025 / 0.075646 (-0.067621) | 0.079588 / 0.419271 (-0.339683) | 0.042616 / 0.043533 (-0.000917) | 0.347639 / 0.255139 (0.092500) | 0.386092 / 0.283200 (0.102893) | 0.100869 / 0.141683 (-0.040814) | 1.386901 / 1.452155 (-0.065254) | 1.471523 / 1.492716 (-0.021193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217020 / 0.018006 (0.199014) | 0.431033 / 0.000490 (0.430543) | 0.002902 / 0.000200 (0.002702) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.114154 / 0.014526 (0.099629) | 0.117918 / 0.176557 (-0.058638) | 0.173342 / 0.737135 (-0.563794) | 0.125812 / 0.296338 (-0.170526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424843 / 0.215209 (0.209634) | 4.324828 / 2.077655 (2.247174) | 2.188263 / 1.504120 (0.684143) | 1.912288 / 1.541195 (0.371094) | 2.011621 / 1.468490 (0.543131) | 0.560944 / 4.584777 (-4.023833) | 3.975047 / 3.745712 (0.229335) | 3.130242 / 5.269862 (-2.139619) | 1.667902 / 4.565676 (-2.897775) | 0.062245 / 0.424275 (-0.362030) | 0.011300 / 0.007607 (0.003692) | 0.498571 / 0.226044 (0.272527) | 5.024887 / 2.268929 (2.755958) | 2.482967 / 55.444624 (-52.961657) | 2.216125 / 6.876477 (-4.660352) | 2.175856 / 2.142072 (0.033783) | 0.615207 / 4.805227 (-4.190021) | 0.133808 / 6.500664 (-6.366856) | 0.058681 / 0.075469 (-0.016788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370150 / 1.841788 (-0.471637) | 14.580907 / 8.074308 (6.506599) | 14.209955 / 10.191392 (4.018563) | 0.139738 / 0.680424 (-0.540686) | 0.018722 / 0.534201 (-0.515479) | 0.375755 / 0.579283 (-0.203528) | 0.428335 / 0.434364 (-0.006029) | 0.438957 / 0.540337 (-0.101380) | 0.541130 / 1.386936 (-0.845806) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c14806a42a20f44a60f3663642bae1de199ab1ec \"CML watermark\")\n"
] | 2023-05-15T15:28:34 | 2023-06-08T16:40:18 | 2023-06-08T16:32:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5863",
"html_url": "https://github.com/huggingface/datasets/pull/5863",
"diff_url": "https://github.com/huggingface/datasets/pull/5863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5863.patch",
"merged_at": "2023-06-08T16:32:50"
} | This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
Fixes #5855 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5863/timeline | null | null | true |
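For readers skimming the row above: the PR body for #5863 mentions a new, lower-memory approach to generating the index tensor in `to_tf_dataset`. A minimal sketch of one such scheme follows — it approximates a global shuffle with bounded memory by permuting fixed-size chunks and shuffling inside one chunk at a time. This is an illustration of the general idea only, not the code from the PR; the function name, the `chunk_size` parameter, and the chunked-shuffle policy are all assumptions.

```python
import numpy as np

def iter_shuffled_batches(num_examples, batch_size, chunk_size=1_000_000, seed=None):
    """Yield batches of row indices with at most `chunk_size` indices
    materialized at any time (hypothetical sketch, not the PR's code)."""
    rng = np.random.default_rng(seed)
    num_chunks = -(-num_examples // chunk_size)  # ceiling division
    for chunk_id in rng.permutation(num_chunks):  # shuffle the chunk order
        start = chunk_id * chunk_size
        stop = min(start + chunk_size, num_examples)
        # Permute only this chunk's indices instead of all `num_examples`.
        chunk = rng.permutation(np.arange(start, stop, dtype=np.int64))
        for i in range(0, len(chunk), batch_size):
            yield chunk[i : i + batch_size]

# Usage: iterate 10M examples in shuffled batches of 32 without ever
# materializing a 10M-entry permutation.
for batch_indices in iter_shuffled_batches(10_000_000, batch_size=32, seed=0):
    break  # feed `batch_indices` to the dataset fetcher in a real loop
```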
https://api.github.com/repos/huggingface/datasets/issues/5862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5862/comments | https://api.github.com/repos/huggingface/datasets/issues/5862/events | https://github.com/huggingface/datasets/issues/5862 | 1,710,140,646 | I_kwDODunzps5l7qzm | 5,862 | IndexError: list index out of range with data hosted on Zenodo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This error is also raised when data is hosted on Google Drive:\r\n- https://huggingface.co/datasets/docred/discussions/5\r\n- https://huggingface.co/datasets/linnaeus/discussions/3\r\n- https://huggingface.co/datasets/poleval2019_mt/discussions/3\r\n- https://huggingface.co/datasets/reddit_tifu/discussions/2\r\n- https://huggingface.co/datasets/species_800/discussions/3\r\n- https://huggingface.co/datasets/wiki_lingua/discussions/1\r\n- https://huggingface.co/datasets/yoruba_text_c3/discussions/1"
] | 2023-05-15T13:47:19 | 2023-06-16T14:54:02 | null | MEMBER | null | null | null | The dataset viewer sometimes raises an `IndexError`:
```
IndexError: list index out of range
```
See:
- huggingface/datasets-server#1151
- https://huggingface.co/datasets/reddit/discussions/5
- huggingface/datasets-server#1118
- https://huggingface.co/datasets/krr-oxford/OntoLAMA/discussions/1
- https://huggingface.co/datasets/hyperpartisan_news_detection/discussions/3
- https://huggingface.co/datasets/um005/discussions/2
- https://huggingface.co/datasets/tapaco/discussions/2
- https://huggingface.co/datasets/common_language/discussions/3
- https://huggingface.co/datasets/pass/discussions/1
After investigation:
- This happens with data files hosted on Zenodo
- The underlying cause is a 429 HTTP error: Too Many Requests
Note that some time ago, it also happened with data files hosted on Google Drive. See:
- #4581
- #4580
The reason then was a 403 HTTP error: Forbidden
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5862/timeline | null | null | false |
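As an aside on issue #5862 above: since the root cause identified there is an HTTP 429 (Too Many Requests) from Zenodo, one generic client-side mitigation is to retry with backoff, honoring the `Retry-After` header when the server sends one. The sketch below is hypothetical — it is not part of `datasets` or the dataset viewer, and the function name and retry policy are assumptions.

```python
import time
import requests

def get_with_backoff(url, max_retries=5, base_delay=2.0):
    """Fetch `url`, retrying on HTTP 429 with exponential backoff
    (hypothetical helper, not datasets-server's actual retry logic)."""
    response = None
    for attempt in range(max_retries):
        response = requests.get(url, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()  # surface other HTTP errors
            return response
        # Prefer the server-provided delay; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * 2**attempt
        time.sleep(delay)
    response.raise_for_status()  # still rate-limited: raise HTTPError for the 429
```

Note that `Retry-After` can also be an HTTP date rather than a number of seconds; a production implementation would need to handle that case as well.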
https://api.github.com/repos/huggingface/datasets/issues/5861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5861/comments | https://api.github.com/repos/huggingface/datasets/issues/5861/events | https://github.com/huggingface/datasets/pull/5861 | 1,709,807,340 | PR_kwDODunzps5Qf55q | 5,861 | Better error message when combining dataset dicts instead of datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007167 / 0.011353 (-0.004185) | 0.004914 / 0.011008 (-0.006094) | 0.096858 / 0.038508 (0.058350) | 0.033468 / 0.023109 (0.010359) | 0.297276 / 0.275898 (0.021378) | 0.344289 / 0.323480 (0.020809) | 0.005703 / 0.007986 (-0.002282) | 0.003972 / 0.004328 (-0.000357) | 0.075191 / 0.004250 (0.070940) | 0.046247 / 0.037052 (0.009194) | 0.317857 / 0.258489 (0.059368) | 0.347263 / 0.293841 (0.053422) | 0.035017 / 0.128546 (-0.093529) | 0.012036 / 0.075646 (-0.063611) | 0.332522 / 0.419271 (-0.086750) | 0.050188 / 0.043533 (0.006655) | 0.296627 / 0.255139 (0.041488) | 0.319196 / 0.283200 (0.035997) | 0.101100 / 0.141683 (-0.040583) | 1.484536 / 1.452155 (0.032382) | 1.606364 / 1.492716 (0.113648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203954 / 0.018006 (0.185948) | 0.436505 / 0.000490 (0.436015) | 0.003853 / 0.000200 (0.003654) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025834 / 0.037411 (-0.011578) | 0.105759 / 0.014526 (0.091233) | 0.114289 / 0.176557 (-0.062268) | 0.174388 / 0.737135 (-0.562748) | 0.122248 / 0.296338 (-0.174090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404218 / 0.215209 (0.189009) | 4.027900 / 2.077655 (1.950245) | 1.854757 / 1.504120 (0.350637) | 1.668882 / 1.541195 (0.127687) | 1.731451 / 1.468490 
(0.262961) | 0.707843 / 4.584777 (-3.876934) | 3.756386 / 3.745712 (0.010674) | 2.067751 / 5.269862 (-3.202110) | 1.313039 / 4.565676 (-3.252638) | 0.086442 / 0.424275 (-0.337833) | 0.012329 / 0.007607 (0.004722) | 0.505964 / 0.226044 (0.279919) | 5.050788 / 2.268929 (2.781860) | 2.353936 / 55.444624 (-53.090688) | 2.055560 / 6.876477 (-4.820917) | 2.162948 / 2.142072 (0.020876) | 0.850532 / 4.805227 (-3.954696) | 0.168560 / 6.500664 (-6.332104) | 0.063143 / 0.075469 (-0.012326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182723 / 1.841788 (-0.659065) | 14.779342 / 8.074308 (6.705034) | 14.461572 / 10.191392 (4.270180) | 0.163120 / 0.680424 (-0.517303) | 0.017978 / 0.534201 (-0.516223) | 0.419168 / 0.579283 (-0.160115) | 0.420955 / 0.434364 (-0.013409) | 0.509710 / 0.540337 (-0.030628) | 0.619586 / 1.386936 (-0.767350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.005136 / 0.011008 (-0.005872) | 0.074910 / 0.038508 (0.036402) | 0.032552 / 0.023109 (0.009443) | 0.374998 / 0.275898 (0.099100) | 0.399219 / 0.323480 (0.075739) | 0.005615 / 0.007986 (-0.002371) | 0.004118 / 0.004328 (-0.000210) | 0.074219 / 0.004250 (0.069969) | 0.045924 / 0.037052 (0.008871) | 0.383228 / 0.258489 (0.124739) | 0.407195 / 0.293841 (0.113354) | 0.035460 / 0.128546 (-0.093086) | 0.012460 / 0.075646 (-0.063187) | 0.087077 / 0.419271 (-0.332195) | 0.050507 / 0.043533 (0.006974) | 0.369001 / 0.255139 (0.113862) | 0.385761 / 0.283200 (0.102561) | 0.106999 / 0.141683 (-0.034684) | 1.465456 / 1.452155 (0.013302) | 1.556962 / 1.492716 (0.064246) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214926 / 0.018006 (0.196920) | 0.436893 / 0.000490 (0.436403) | 0.003388 / 0.000200 (0.003188) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029919 / 0.037411 (-0.007492) | 0.110859 / 0.014526 (0.096333) | 0.120617 / 0.176557 (-0.055939) | 0.171781 / 0.737135 (-0.565355) | 0.125627 / 0.296338 (-0.170712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436024 / 0.215209 (0.220815) | 4.359167 / 2.077655 (2.281512) | 2.188399 / 1.504120 (0.684279) | 2.001196 / 1.541195 (0.460001) | 2.023710 / 1.468490 (0.555220) | 0.713799 / 4.584777 (-3.870978) | 3.832217 / 3.745712 (0.086504) | 3.269351 / 5.269862 (-2.000510) | 1.534608 / 4.565676 (-3.031068) | 0.088505 / 0.424275 (-0.335770) | 0.012345 / 0.007607 (0.004738) | 0.542446 / 0.226044 (0.316401) | 5.377757 / 2.268929 (3.108828) | 2.659837 / 55.444624 (-52.784787) | 2.272356 / 6.876477 (-4.604120) | 2.297289 / 2.142072 (0.155217) | 0.855276 / 4.805227 (-3.949952) | 0.170666 / 6.500664 (-6.329998) | 0.064549 / 0.075469 (-0.010920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255938 / 1.841788 (-0.585850) | 15.151471 / 8.074308 (7.077163) | 12.905762 / 10.191392 (2.714370) | 0.162425 / 0.680424 (-0.517999) | 0.017504 / 0.534201 (-0.516697) | 0.448671 / 0.579283 (-0.130612) | 0.422424 / 0.434364 (-0.011940) | 0.551772 / 0.540337 (0.011434) | 0.649115 / 1.386936 (-0.737821) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be73d9f192149727c5542ff257df81b03024fa39 \"CML watermark\")\n",
"Having those different checks helps providing an appropriate error message.\r\n\r\nIf the input is a dict, we suggest to select a split. If the input lists is a mix of iterable and non-iterable, we mention that it must be one or the other.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004569 / 0.011008 (-0.006439) | 0.104503 / 0.038508 (0.065995) | 0.028220 / 0.023109 (0.005111) | 0.365507 / 0.275898 (0.089609) | 0.400238 / 0.323480 (0.076758) | 0.004968 / 0.007986 (-0.003017) | 0.003271 / 0.004328 (-0.001057) | 0.082804 / 0.004250 (0.078554) | 0.036299 / 0.037052 (-0.000754) | 0.361201 / 0.258489 (0.102712) | 0.410962 / 0.293841 (0.117121) | 0.030423 / 0.128546 (-0.098123) | 0.011612 / 0.075646 (-0.064034) | 0.331820 / 0.419271 (-0.087452) | 0.043822 / 0.043533 (0.000289) | 0.356242 / 0.255139 (0.101103) | 0.393035 / 0.283200 (0.109836) | 0.088426 / 0.141683 (-0.053257) | 1.484139 / 1.452155 (0.031984) | 1.566712 / 1.492716 (0.073995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195887 / 0.018006 (0.177880) | 0.402720 / 0.000490 (0.402231) | 0.003516 / 0.000200 (0.003316) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023270 / 0.037411 (-0.014141) | 0.095834 / 0.014526 (0.081308) | 0.102924 / 0.176557 (-0.073632) | 0.161397 / 0.737135 (-0.575738) | 0.105225 / 0.296338 (-0.191114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451701 / 0.215209 (0.236491) | 4.495171 / 2.077655 (2.417517) | 2.223203 / 1.504120 (0.719083) | 2.035533 / 1.541195 (0.494338) | 2.076182 / 1.468490 
(0.607692) | 0.697317 / 4.584777 (-3.887460) | 3.406309 / 3.745712 (-0.339403) | 1.847179 / 5.269862 (-3.422683) | 1.158762 / 4.565676 (-3.406914) | 0.083067 / 0.424275 (-0.341208) | 0.012453 / 0.007607 (0.004846) | 0.546502 / 0.226044 (0.320458) | 5.455712 / 2.268929 (3.186784) | 2.654142 / 55.444624 (-52.790483) | 2.298722 / 6.876477 (-4.577755) | 2.383467 / 2.142072 (0.241395) | 0.805950 / 4.805227 (-3.999278) | 0.152479 / 6.500664 (-6.348185) | 0.066784 / 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239129 / 1.841788 (-0.602659) | 13.603707 / 8.074308 (5.529398) | 14.062004 / 10.191392 (3.870612) | 0.130928 / 0.680424 (-0.549495) | 0.016907 / 0.534201 (-0.517294) | 0.381614 / 0.579283 (-0.197670) | 0.386770 / 0.434364 (-0.047594) | 0.455792 / 0.540337 (-0.084545) | 0.526092 / 1.386936 (-0.860844) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006202 / 0.011353 (-0.005151) | 0.004478 / 0.011008 (-0.006531) | 0.076492 / 0.038508 (0.037984) | 0.026703 / 0.023109 (0.003594) | 0.355134 / 0.275898 (0.079236) | 0.391207 / 0.323480 (0.067727) | 0.004852 / 0.007986 (-0.003133) | 0.003271 / 0.004328 (-0.001057) | 0.075080 / 0.004250 (0.070830) | 0.038803 / 0.037052 (0.001750) | 0.359530 / 0.258489 (0.101041) | 0.409044 / 0.293841 (0.115203) | 0.030366 / 0.128546 (-0.098180) | 0.011544 / 0.075646 (-0.064102) | 0.084849 / 0.419271 (-0.334423) | 0.040076 / 0.043533 (-0.003457) | 0.357359 / 0.255139 (0.102220) | 0.384075 / 0.283200 (0.100875) | 0.089130 / 0.141683 (-0.052552) | 1.520400 / 1.452155 (0.068246) | 1.604403 / 1.492716 (0.111687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257127 / 0.018006 (0.239121) | 0.403691 / 0.000490 (0.403202) | 0.006894 / 0.000200 (0.006694) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024653 / 0.037411 (-0.012758) | 0.098834 / 0.014526 (0.084309) | 0.107276 / 0.176557 (-0.069281) | 0.158256 / 0.737135 (-0.578879) | 0.111339 / 0.296338 (-0.184999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445006 / 0.215209 (0.229797) | 4.452953 / 2.077655 (2.375299) | 2.168291 / 1.504120 (0.664171) | 1.969457 / 1.541195 (0.428262) | 2.003505 / 1.468490 (0.535015) | 0.695857 / 4.584777 (-3.888920) | 3.433424 / 3.745712 (-0.312288) | 2.466977 / 5.269862 (-2.802885) | 1.528167 / 4.565676 (-3.037509) | 0.082425 / 0.424275 (-0.341850) | 0.012470 / 0.007607 (0.004863) | 0.559039 / 0.226044 (0.332995) | 5.609496 / 2.268929 (3.340568) | 2.602898 / 55.444624 (-52.841726) | 2.273971 / 6.876477 (-4.602506) | 2.303370 / 2.142072 (0.161298) | 0.803875 / 4.805227 (-4.001352) | 0.151069 / 6.500664 (-6.349595) | 0.067956 / 0.075469 (-0.007513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334443 / 1.841788 (-0.507345) | 13.773252 / 8.074308 (5.698944) | 13.007042 / 10.191392 (2.815650) | 0.127939 / 0.680424 (-0.552485) | 0.016412 / 0.534201 (-0.517789) | 0.374744 / 0.579283 (-0.204539) | 0.396912 / 0.434364 (-0.037452) | 0.443197 / 0.540337 (-0.097140) | 0.528338 / 1.386936 (-0.858598) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51d9f2a3064aa89a780e3d02c6cc34000c51c4fb \"CML watermark\")\n",
"Just modified it to use only one loop. I think I managed to keep it readable as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007382 / 0.011353 (-0.003971) | 0.005143 / 0.011008 (-0.005865) | 0.097635 / 0.038508 (0.059127) | 0.034726 / 0.023109 (0.011616) | 0.315556 / 0.275898 (0.039658) | 0.355951 / 0.323480 (0.032472) | 0.006055 / 0.007986 (-0.001931) | 0.004264 / 0.004328 (-0.000065) | 0.073636 / 0.004250 (0.069386) | 0.050480 / 0.037052 (0.013428) | 0.316031 / 0.258489 (0.057542) | 0.363933 / 0.293841 (0.070092) | 0.035138 / 0.128546 (-0.093408) | 0.012407 / 0.075646 (-0.063239) | 0.333677 / 0.419271 (-0.085595) | 0.050586 / 0.043533 (0.007053) | 0.309507 / 0.255139 (0.054369) | 0.327043 / 0.283200 (0.043844) | 0.108975 / 0.141683 (-0.032708) | 1.447778 / 1.452155 (-0.004377) | 1.519971 / 1.492716 (0.027255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248770 / 0.018006 (0.230764) | 0.603036 / 0.000490 (0.602546) | 0.000383 / 0.000200 (0.000183) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027094 / 0.037411 (-0.010317) | 0.104427 / 0.014526 (0.089901) | 0.120627 / 0.176557 (-0.055929) | 0.178790 / 0.737135 (-0.558346) | 0.124877 / 0.296338 (-0.171461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414442 / 0.215209 (0.199233) | 4.138009 / 2.077655 (2.060355) | 1.964642 / 1.504120 (0.460523) | 1.775940 / 1.541195 (0.234745) | 1.899719 / 1.468490 
(0.431228) | 0.695406 / 4.584777 (-3.889371) | 3.760470 / 3.745712 (0.014758) | 3.906958 / 5.269862 (-1.362904) | 2.028164 / 4.565676 (-2.537513) | 0.086704 / 0.424275 (-0.337571) | 0.012465 / 0.007607 (0.004857) | 0.512336 / 0.226044 (0.286292) | 5.108587 / 2.268929 (2.839659) | 2.435273 / 55.444624 (-53.009352) | 2.142387 / 6.876477 (-4.734090) | 2.258234 / 2.142072 (0.116162) | 0.854035 / 4.805227 (-3.951193) | 0.170443 / 6.500664 (-6.330222) | 0.065762 / 0.075469 (-0.009707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187529 / 1.841788 (-0.654259) | 15.151164 / 8.074308 (7.076856) | 14.577545 / 10.191392 (4.386153) | 0.166973 / 0.680424 (-0.513450) | 0.017883 / 0.534201 (-0.516318) | 0.427607 / 0.579283 (-0.151676) | 0.417050 / 0.434364 (-0.017314) | 0.508116 / 0.540337 (-0.032221) | 0.590173 / 1.386936 (-0.796763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007499 / 0.011353 (-0.003854) | 0.005195 / 0.011008 (-0.005813) | 0.073600 / 0.038508 (0.035091) | 0.033574 / 0.023109 (0.010464) | 0.377506 / 0.275898 (0.101608) | 0.432752 / 0.323480 (0.109272) | 0.006042 / 0.007986 (-0.001944) | 0.006427 / 0.004328 (0.002098) | 0.071666 / 0.004250 (0.067416) | 0.053243 / 0.037052 (0.016190) | 0.363972 / 0.258489 (0.105483) | 0.454988 / 0.293841 (0.161147) | 0.035118 / 0.128546 (-0.093428) | 0.012395 / 0.075646 (-0.063251) | 0.084308 / 0.419271 (-0.334963) | 0.048589 / 0.043533 (0.005057) | 0.368036 / 0.255139 (0.112897) | 0.399414 / 0.283200 (0.116215) | 0.109043 / 0.141683 (-0.032640) | 1.462972 / 1.452155 (0.010817) | 1.574443 / 1.492716 (0.081726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215107 / 0.018006 (0.197101) | 0.550255 / 0.000490 (0.549765) | 0.004630 / 0.000200 (0.004430) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029948 / 0.037411 (-0.007463) | 0.111866 / 0.014526 (0.097340) | 0.126559 / 0.176557 (-0.049997) | 0.181443 / 0.737135 (-0.555693) | 0.130559 / 0.296338 (-0.165779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441410 / 0.215209 (0.226201) | 4.403406 / 2.077655 (2.325752) | 2.180276 / 1.504120 (0.676156) | 2.003729 / 1.541195 (0.462534) | 2.079394 / 1.468490 (0.610904) | 0.706061 / 4.584777 (-3.878716) | 3.805668 / 3.745712 (0.059956) | 3.864941 / 5.269862 (-1.404921) | 1.970468 / 4.565676 (-2.595208) | 0.086033 / 0.424275 (-0.338242) | 0.012261 / 0.007607 (0.004654) | 0.550427 / 0.226044 (0.324383) | 5.542270 / 2.268929 (3.273342) | 2.717047 / 55.444624 (-52.727577) | 2.449022 / 6.876477 (-4.427455) | 2.549567 / 2.142072 (0.407495) | 0.854981 / 4.805227 (-3.950247) | 0.169756 / 6.500664 (-6.330908) | 0.067082 / 0.075469 (-0.008387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281369 / 1.841788 (-0.560419) | 15.445090 / 8.074308 (7.370781) | 13.205652 / 10.191392 (3.014260) | 0.170070 / 0.680424 (-0.510354) | 0.017815 / 0.534201 (-0.516385) | 0.425193 / 0.579283 (-0.154090) | 0.425205 / 0.434364 (-0.009159) | 0.493561 / 0.540337 (-0.046776) | 0.588994 / 1.386936 (-0.797942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e427105fc68fce04d0f3c74efb942cbf3a65d166 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006345 / 0.011353 (-0.005008) | 0.004330 / 0.011008 (-0.006678) | 0.096327 / 0.038508 (0.057819) | 0.032964 / 0.023109 (0.009855) | 0.335600 / 0.275898 (0.059702) | 0.365635 / 0.323480 (0.042155) | 0.005435 / 0.007986 (-0.002551) | 0.005005 / 0.004328 (0.000677) | 0.071107 / 0.004250 (0.066856) | 0.044363 / 0.037052 (0.007311) | 0.339988 / 0.258489 (0.081498) | 0.375575 / 0.293841 (0.081734) | 0.028343 / 0.128546 (-0.100203) | 0.008587 / 0.075646 (-0.067059) | 0.324349 / 0.419271 (-0.094922) | 0.050105 / 0.043533 (0.006573) | 0.327398 / 0.255139 (0.072259) | 0.348479 / 0.283200 (0.065279) | 0.102357 / 0.141683 (-0.039326) | 1.419905 / 1.452155 (-0.032250) | 1.534887 / 1.492716 (0.042171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212418 / 0.018006 (0.194412) | 0.433183 / 0.000490 (0.432693) | 0.000595 / 0.000200 (0.000395) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027520 / 0.037411 (-0.009891) | 0.109503 / 0.014526 (0.094977) | 0.118202 / 0.176557 (-0.058355) | 0.177236 / 0.737135 (-0.559899) | 0.123736 / 0.296338 (-0.172602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405734 / 0.215209 (0.190525) | 4.039566 / 2.077655 (1.961911) | 1.838211 / 1.504120 (0.334091) | 1.652650 / 1.541195 (0.111456) | 1.753488 / 1.468490 
(0.284998) | 0.525258 / 4.584777 (-4.059519) | 3.704509 / 3.745712 (-0.041203) | 1.826794 / 5.269862 (-3.443067) | 1.236361 / 4.565676 (-3.329315) | 0.065619 / 0.424275 (-0.358656) | 0.011606 / 0.007607 (0.003999) | 0.505954 / 0.226044 (0.279910) | 5.054140 / 2.268929 (2.785211) | 2.352587 / 55.444624 (-53.092037) | 2.050601 / 6.876477 (-4.825875) | 2.097222 / 2.142072 (-0.044850) | 0.641044 / 4.805227 (-4.164183) | 0.140676 / 6.500664 (-6.359988) | 0.063217 / 0.075469 (-0.012253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.177750 / 1.841788 (-0.664038) | 14.819346 / 8.074308 (6.745038) | 14.085937 / 10.191392 (3.894545) | 0.168618 / 0.680424 (-0.511806) | 0.017189 / 0.534201 (-0.517011) | 0.393415 / 0.579283 (-0.185868) | 0.422879 / 0.434364 (-0.011485) | 0.477289 / 0.540337 (-0.063048) | 0.569078 / 1.386936 (-0.817858) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004850) | 0.004640 / 0.011008 (-0.006368) | 0.073272 / 0.038508 (0.034764) | 0.033225 / 0.023109 (0.010116) | 0.359165 / 0.275898 (0.083267) | 0.391659 / 0.323480 (0.068179) | 0.005684 / 0.007986 (-0.002302) | 0.004045 / 0.004328 (-0.000284) | 0.072880 / 0.004250 (0.068629) | 0.046260 / 0.037052 (0.009208) | 0.361772 / 0.258489 (0.103283) | 0.402905 / 0.293841 (0.109064) | 0.027732 / 0.128546 (-0.100814) | 0.008864 / 0.075646 (-0.066783) | 0.081961 / 0.419271 (-0.337310) | 0.046170 / 0.043533 (0.002637) | 0.364198 / 0.255139 (0.109059) | 0.387468 / 0.283200 (0.104269) | 0.105456 / 0.141683 (-0.036227) | 1.457176 / 1.452155 (0.005021) | 1.564899 / 1.492716 (0.072183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179129 / 0.018006 (0.161123) | 0.439699 / 0.000490 (0.439209) | 0.002882 / 0.000200 (0.002682) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029123 / 0.037411 (-0.008288) | 0.112046 / 0.014526 (0.097520) | 0.122773 / 0.176557 (-0.053784) | 0.178404 / 0.737135 (-0.558732) | 0.127904 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440413 / 0.215209 (0.225204) | 4.407334 / 2.077655 (2.329680) | 2.112932 / 1.504120 (0.608812) | 1.911034 / 1.541195 (0.369840) | 2.057168 / 1.468490 (0.588677) | 0.525472 / 4.584777 (-4.059305) | 3.738894 / 3.745712 (-0.006818) | 1.807592 / 5.269862 (-3.462270) | 1.053837 / 4.565676 (-3.511839) | 0.066203 / 0.424275 (-0.358072) | 0.011965 / 0.007607 (0.004358) | 0.541137 / 0.226044 (0.315093) | 5.415040 / 2.268929 (3.146112) | 2.580476 / 55.444624 (-52.864148) | 2.234144 / 6.876477 (-4.642333) | 2.306014 / 2.142072 (0.163942) | 0.644221 / 4.805227 (-4.161006) | 0.142870 / 6.500664 (-6.357794) | 0.065015 / 0.075469 (-0.010454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303465 / 1.841788 (-0.538323) | 14.949683 / 8.074308 (6.875375) | 14.370871 / 10.191392 (4.179478) | 0.142714 / 0.680424 (-0.537710) | 0.017372 / 0.534201 (-0.516829) | 0.403898 / 0.579283 (-0.175385) | 0.424781 / 0.434364 (-0.009583) | 0.465984 / 0.540337 (-0.074353) | 0.570863 / 1.386936 (-0.816074) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22d1d533e8ab831b1aa1aab3e7d3c72ba42a83e8 \"CML watermark\")\n"
] | 2023-05-15T10:36:24 | 2023-05-23T10:40:13 | 2023-05-23T10:32:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5861",
"html_url": "https://github.com/huggingface/datasets/pull/5861",
"diff_url": "https://github.com/huggingface/datasets/pull/5861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5861.patch",
"merged_at": "2023-05-23T10:32:58"
} | close https://github.com/huggingface/datasets/issues/5851 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5861/timeline | null | null | true |
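The reviewer comment earlier in this record describes two input checks, each with its own error message. The sketch below is a hypothetical illustration of that logic, not the actual `datasets` source: `MapDataset`, `IterableDataset`, and `check_datasets_input` are assumed stand-in names.

```python
class MapDataset:
    """Stand-in for datasets.Dataset (map-style)."""

class IterableDataset:
    """Stand-in for datasets.IterableDataset."""

def check_datasets_input(dsets):
    # A dict typically means the user passed a DatasetDict without
    # selecting a split first, so the message suggests doing that.
    if isinstance(dsets, dict):
        raise ValueError(
            "Expected a list of datasets, got a dict. "
            "Please select one of its splits, e.g. dsets['train']."
        )
    # A list mixing iterable and map-style datasets is rejected with
    # a message saying the inputs must be one or the other.
    is_iterable = [isinstance(d, IterableDataset) for d in dsets]
    if any(is_iterable) and not all(is_iterable):
        raise ValueError(
            "Unable to mix map-style and iterable datasets: "
            "the inputs must all be of the same type."
        )

check_datasets_input([MapDataset(), MapDataset()])  # OK, no error
try:
    check_datasets_input([MapDataset(), IterableDataset()])
except ValueError as err:
    print(err)
```

Keeping the two checks separate is what allows each failure mode to get a targeted message.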
https://api.github.com/repos/huggingface/datasets/issues/5860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5860/comments | https://api.github.com/repos/huggingface/datasets/issues/5860/events | https://github.com/huggingface/datasets/pull/5860 | 1,709,727,460 | PR_kwDODunzps5QfojD | 5,860 | Minor tqdm optim | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006917 / 0.011353 (-0.004436) | 0.004803 / 0.011008 (-0.006205) | 0.097082 / 0.038508 (0.058574) | 0.035105 / 0.023109 (0.011996) | 0.325911 / 0.275898 (0.050013) | 0.371858 / 0.323480 (0.048378) | 0.006451 / 0.007986 (-0.001534) | 0.004421 / 0.004328 (0.000093) | 0.075738 / 0.004250 (0.071487) | 0.053624 / 0.037052 (0.016572) | 0.332661 / 0.258489 (0.074172) | 0.372729 / 0.293841 (0.078888) | 0.028279 / 0.128546 (-0.100267) | 0.009318 / 0.075646 (-0.066328) | 0.328505 / 0.419271 (-0.090766) | 0.066962 / 0.043533 (0.023429) | 0.316863 / 0.255139 (0.061724) | 0.344296 / 0.283200 (0.061096) | 0.120575 / 0.141683 (-0.021108) | 1.457867 / 1.452155 (0.005712) | 1.597361 / 1.492716 (0.104644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296399 / 0.018006 (0.278392) | 0.507196 / 0.000490 (0.506706) | 0.003036 / 0.000200 (0.002836) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028535 / 0.037411 (-0.008876) | 0.110566 / 0.014526 (0.096040) | 0.122078 / 0.176557 (-0.054479) | 0.182926 / 0.737135 (-0.554210) | 0.125546 / 0.296338 (-0.170792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211742) | 4.255608 / 2.077655 (2.177953) | 2.063865 / 1.504120 (0.559745) | 1.867198 / 1.541195 (0.326004) | 2.058236 / 1.468490 
(0.589746) | 0.525885 / 4.584777 (-4.058892) | 3.723607 / 3.745712 (-0.022105) | 1.919144 / 5.269862 (-3.350718) | 1.235308 / 4.565676 (-3.330368) | 0.066423 / 0.424275 (-0.357852) | 0.012045 / 0.007607 (0.004438) | 0.528432 / 0.226044 (0.302388) | 5.268723 / 2.268929 (2.999794) | 2.504071 / 55.444624 (-52.940553) | 2.137999 / 6.876477 (-4.738477) | 2.229987 / 2.142072 (0.087914) | 0.641739 / 4.805227 (-4.163488) | 0.142635 / 6.500664 (-6.358029) | 0.065649 / 0.075469 (-0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182710 / 1.841788 (-0.659078) | 15.339777 / 8.074308 (7.265469) | 14.722308 / 10.191392 (4.530916) | 0.145914 / 0.680424 (-0.534510) | 0.017861 / 0.534201 (-0.516340) | 0.393092 / 0.579283 (-0.186191) | 0.431179 / 0.434364 (-0.003185) | 0.485712 / 0.540337 (-0.054625) | 0.602634 / 1.386936 (-0.784302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006792 / 0.011353 (-0.004561) | 0.005118 / 0.011008 (-0.005890) | 0.073440 / 0.038508 (0.034932) | 0.033751 / 0.023109 (0.010642) | 0.389243 / 0.275898 (0.113345) | 0.397083 / 0.323480 (0.073603) | 0.005989 / 0.007986 (-0.001997) | 0.004289 / 0.004328 (-0.000040) | 0.073228 / 0.004250 (0.068977) | 0.053490 / 0.037052 (0.016438) | 0.396070 / 0.258489 (0.137581) | 0.415134 / 0.293841 (0.121293) | 0.028649 / 0.128546 (-0.099897) | 0.009159 / 0.075646 (-0.066487) | 0.080813 / 0.419271 (-0.338458) | 0.048200 / 0.043533 (0.004667) | 0.388009 / 0.255139 (0.132870) | 0.382174 / 0.283200 (0.098975) | 0.107807 / 0.141683 (-0.033876) | 1.467276 / 1.452155 (0.015121) | 1.568091 / 1.492716 (0.075375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328030 / 0.018006 (0.310024) | 0.498058 / 0.000490 (0.497568) | 0.002513 / 0.000200 (0.002313) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029835 / 0.037411 (-0.007576) | 0.113859 / 0.014526 (0.099333) | 0.130813 / 0.176557 (-0.045743) | 0.183646 / 0.737135 (-0.553490) | 0.136561 / 0.296338 (-0.159777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438901 / 0.215209 (0.223692) | 4.376426 / 2.077655 (2.298771) | 2.220932 / 1.504120 (0.716812) | 2.043585 / 1.541195 (0.502390) | 2.161383 / 1.468490 (0.692893) | 0.523224 / 4.584777 (-4.061553) | 3.730589 / 3.745712 (-0.015123) | 1.859602 / 5.269862 (-3.410260) | 1.073415 / 4.565676 (-3.492261) | 0.066363 / 0.424275 (-0.357912) | 0.012491 / 0.007607 (0.004884) | 0.542052 / 0.226044 (0.316008) | 5.426246 / 2.268929 (3.157318) | 2.673884 / 55.444624 (-52.770740) | 2.372611 / 6.876477 (-4.503865) | 2.482216 / 2.142072 (0.340143) | 0.705669 / 4.805227 (-4.099558) | 0.141075 / 6.500664 (-6.359589) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316403 / 1.841788 (-0.525385) | 15.832870 / 8.074308 (7.758562) | 13.307045 / 10.191392 (3.115653) | 0.147258 / 0.680424 (-0.533166) | 0.017966 / 0.534201 (-0.516235) | 0.414396 / 0.579283 (-0.164887) | 0.431801 / 0.434364 (-0.002563) | 0.465483 / 0.540337 (-0.074855) | 0.577850 / 1.386936 (-0.809086) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c795c7e332a7c850c3e725f2034d4894b5e314f7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006368 / 0.011353 (-0.004985) | 0.004274 / 0.011008 (-0.006734) | 0.098799 / 0.038508 (0.060291) | 0.029096 / 0.023109 (0.005986) | 0.308009 / 0.275898 (0.032111) | 0.345701 / 0.323480 (0.022221) | 0.005312 / 0.007986 (-0.002674) | 0.003435 / 0.004328 (-0.000894) | 0.075912 / 0.004250 (0.071662) | 0.041993 / 0.037052 (0.004941) | 0.320075 / 0.258489 (0.061586) | 0.347506 / 0.293841 (0.053665) | 0.025456 / 0.128546 (-0.103091) | 0.008461 / 0.075646 (-0.067185) | 0.322823 / 0.419271 (-0.096448) | 0.044650 / 0.043533 (0.001117) | 0.314118 / 0.255139 (0.058979) | 0.333436 / 0.283200 (0.050237) | 0.093811 / 0.141683 (-0.047871) | 1.464464 / 1.452155 (0.012310) | 1.548098 / 1.492716 (0.055382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015905 / 0.018006 (-0.002101) | 0.427847 / 0.000490 (0.427357) | 0.007600 / 0.000200 (0.007400) | 0.000421 / 0.000054 (0.000366) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012882) | 0.099907 / 0.014526 (0.085381) | 0.107282 / 0.176557 (-0.069275) | 0.168332 / 0.737135 (-0.568804) | 0.109875 / 0.296338 (-0.186464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451064 / 0.215209 (0.235855) | 4.491434 / 2.077655 (2.413779) | 2.253251 / 1.504120 (0.749131) | 2.086740 / 1.541195 (0.545545) | 2.133288 / 1.468490 
(0.664798) | 0.558801 / 4.584777 (-4.025976) | 3.463525 / 3.745712 (-0.282187) | 1.747657 / 5.269862 (-3.522205) | 1.005465 / 4.565676 (-3.560211) | 0.068341 / 0.424275 (-0.355934) | 0.012521 / 0.007607 (0.004914) | 0.567002 / 0.226044 (0.340957) | 5.689529 / 2.268929 (3.420601) | 2.700562 / 55.444624 (-52.744062) | 2.384888 / 6.876477 (-4.491589) | 2.503160 / 2.142072 (0.361088) | 0.667107 / 4.805227 (-4.138120) | 0.137253 / 6.500664 (-6.363412) | 0.068300 / 0.075469 (-0.007170) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202916 / 1.841788 (-0.638872) | 14.163393 / 8.074308 (6.089085) | 14.402463 / 10.191392 (4.211071) | 0.145273 / 0.680424 (-0.535151) | 0.016996 / 0.534201 (-0.517205) | 0.363520 / 0.579283 (-0.215763) | 0.421595 / 0.434364 (-0.012769) | 0.438413 / 0.540337 (-0.101925) | 0.508615 / 1.386936 (-0.878321) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004346 / 0.011008 (-0.006662) | 0.076356 / 0.038508 (0.037848) | 0.029370 / 0.023109 (0.006260) | 0.371046 / 0.275898 (0.095148) | 0.398279 / 0.323480 (0.074799) | 0.005258 / 0.007986 (-0.002728) | 0.003528 / 0.004328 (-0.000800) | 0.076787 / 0.004250 (0.072537) | 0.041575 / 0.037052 (0.004522) | 0.362319 / 0.258489 (0.103830) | 0.402134 / 0.293841 (0.108293) | 0.025633 / 0.128546 (-0.102913) | 0.008826 / 0.075646 (-0.066820) | 0.082380 / 0.419271 (-0.336892) | 0.041655 / 0.043533 (-0.001878) | 0.357583 / 0.255139 (0.102444) | 0.383486 / 0.283200 (0.100287) | 0.093682 / 0.141683 (-0.048001) | 1.488522 / 1.452155 (0.036367) | 1.576090 / 1.492716 (0.083373) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185556 / 0.018006 (0.167550) | 0.431345 / 0.000490 (0.430855) | 0.002290 / 0.000200 (0.002090) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026030 / 0.037411 (-0.011382) | 0.102889 / 0.014526 (0.088364) | 0.109541 / 0.176557 (-0.067015) | 0.161050 / 0.737135 (-0.576085) | 0.113525 / 0.296338 (-0.182814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445301 / 0.215209 (0.230092) | 4.437320 / 2.077655 (2.359666) | 2.174181 / 1.504120 (0.670061) | 1.977440 / 1.541195 (0.436245) | 2.036323 / 1.468490 (0.567832) | 0.554227 / 4.584777 (-4.030550) | 3.462746 / 3.745712 (-0.282966) | 1.765257 / 5.269862 (-3.504604) | 1.014515 / 4.565676 (-3.551161) | 0.068391 / 0.424275 (-0.355884) | 0.013154 / 0.007607 (0.005546) | 0.546696 / 0.226044 (0.320652) | 5.490628 / 2.268929 (3.221699) | 2.611947 / 55.444624 (-52.832677) | 2.282659 / 6.876477 (-4.593818) | 2.333972 / 2.142072 (0.191899) | 0.663140 / 4.805227 (-4.142087) | 0.137996 / 6.500664 (-6.362668) | 0.069063 / 0.075469 (-0.006407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332147 / 1.841788 (-0.509641) | 14.781592 / 8.074308 (6.707284) | 13.399190 / 10.191392 (3.207798) | 0.139370 / 0.680424 (-0.541054) | 0.016742 / 0.534201 (-0.517459) | 0.364138 / 0.579283 (-0.215146) | 0.402479 / 0.434364 (-0.031885) | 0.427591 / 0.540337 (-0.112746) | 0.520864 / 1.386936 (-0.866072) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8279677b58b93f77995c7da67aea2a04b6a7395 \"CML watermark\")\n"
] | 2023-05-15T09:49:37 | 2023-05-17T18:46:46 | 2023-05-17T18:39:35 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5860",
"html_url": "https://github.com/huggingface/datasets/pull/5860",
"diff_url": "https://github.com/huggingface/datasets/pull/5860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5860.patch",
"merged_at": "2023-05-17T18:39:35"
} | Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
On my side, this sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize Python dicts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5860/timeline | null | null | true |
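The body of this PR explains the optimization: when tqdm output is disabled, `map_nested` avoids constructing a progress bar at all. Below is a hypothetical, simplified illustration of the idea, not the actual `map_nested` implementation; the flat `map_nested` function here is an assumed stand-in.

```python
from tqdm.auto import tqdm

def map_nested(fn, items, disable_tqdm=True):
    # Only wrap the input in a tqdm bar when progress reporting is wanted.
    # Skipping the bar object entirely removes per-call overhead in hot,
    # recursive paths such as tensorizing nested Python dicts.
    iterable = items if disable_tqdm else tqdm(items, desc="map_nested")
    return [fn(x) for x in iterable]

print(map_nested(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]
```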
https://api.github.com/repos/huggingface/datasets/issues/5859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5859/comments | https://api.github.com/repos/huggingface/datasets/issues/5859/events | https://github.com/huggingface/datasets/pull/5859 | 1,709,554,829 | PR_kwDODunzps5QfDLC | 5,859 | Raise TypeError when indexing a dataset with bool | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq any idea why this only fails (CI integration fails are unrelated) in \"Build PR Documentation / build / build_pr_documentation\" (which uses Python 3.8), with message:\r\n```\r\nTypeError: Type subscription requires python >= 3.9\r\n```\r\nwhereas the CI is green for unit tests, which use Python 3.7?",
"Hmm I don't know sorry :/",
"@lhoestq I am afraid I have to remove the generics I created for numpy and pandas (no subscriptable until Python 3.9) and just leave:\r\n```python\r\nListLike = Union[List[T], Tuple[T, ...]]\r\n```",
"Ok sounds good - no need to spend more time on this",
"I will merge once the CI is finished. The integration errors are unrelated: `502 Server Error: Bad Gateway`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.004578 / 0.011008 (-0.006430) | 0.097346 / 0.038508 (0.058838) | 0.034171 / 0.023109 (0.011062) | 0.315060 / 0.275898 (0.039162) | 0.354386 / 0.323480 (0.030907) | 0.005778 / 0.007986 (-0.002207) | 0.004123 / 0.004328 (-0.000206) | 0.073839 / 0.004250 (0.069589) | 0.046418 / 0.037052 (0.009366) | 0.325910 / 0.258489 (0.067421) | 0.368909 / 0.293841 (0.075068) | 0.027975 / 0.128546 (-0.100571) | 0.008885 / 0.075646 (-0.066761) | 0.327956 / 0.419271 (-0.091316) | 0.049911 / 0.043533 (0.006378) | 0.309424 / 0.255139 (0.054285) | 0.346543 / 0.283200 (0.063343) | 0.103429 / 0.141683 (-0.038253) | 1.517606 / 1.452155 (0.065451) | 1.536685 / 1.492716 (0.043969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211552 / 0.018006 (0.193546) | 0.449583 / 0.000490 (0.449094) | 0.002949 / 0.000200 (0.002750) | 0.000140 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027603 / 0.037411 (-0.009808) | 0.108873 / 0.014526 (0.094347) | 0.117990 / 0.176557 (-0.058567) | 0.174202 / 0.737135 (-0.562933) | 0.123793 / 0.296338 (-0.172545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418449 / 0.215209 (0.203240) | 4.177753 / 2.077655 (2.100099) | 1.923446 / 1.504120 (0.419326) | 1.720576 / 1.541195 (0.179381) | 1.783723 / 1.468490 
(0.315232) | 0.530068 / 4.584777 (-4.054709) | 3.709410 / 3.745712 (-0.036302) | 1.863924 / 5.269862 (-3.405938) | 1.149906 / 4.565676 (-3.415770) | 0.066595 / 0.424275 (-0.357680) | 0.011733 / 0.007607 (0.004126) | 0.519249 / 0.226044 (0.293205) | 5.179676 / 2.268929 (2.910748) | 2.389488 / 55.444624 (-53.055137) | 2.060006 / 6.876477 (-4.816471) | 2.160668 / 2.142072 (0.018596) | 0.641081 / 4.805227 (-4.164146) | 0.141962 / 6.500664 (-6.358702) | 0.063146 / 0.075469 (-0.012323) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197424 / 1.841788 (-0.644364) | 14.915321 / 8.074308 (6.841013) | 14.792302 / 10.191392 (4.600910) | 0.145436 / 0.680424 (-0.534988) | 0.017669 / 0.534201 (-0.516532) | 0.399060 / 0.579283 (-0.180223) | 0.416282 / 0.434364 (-0.018082) | 0.498392 / 0.540337 (-0.041946) | 0.600242 / 1.386936 (-0.786694) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007246 / 0.011353 (-0.004106) | 0.005353 / 0.011008 (-0.005656) | 0.076357 / 0.038508 (0.037849) | 0.037662 / 0.023109 (0.014553) | 0.387862 / 0.275898 (0.111964) | 0.421610 / 0.323480 (0.098130) | 0.006424 / 0.007986 (-0.001561) | 0.004397 / 0.004328 (0.000069) | 0.074212 / 0.004250 (0.069961) | 0.054147 / 0.037052 (0.017095) | 0.393171 / 0.258489 (0.134682) | 0.424082 / 0.293841 (0.130241) | 0.029001 / 0.128546 (-0.099546) | 0.009381 / 0.075646 (-0.066265) | 0.082562 / 0.419271 (-0.336710) | 0.048004 / 0.043533 (0.004472) | 0.386895 / 0.255139 (0.131756) | 0.386104 / 0.283200 (0.102904) | 0.113714 / 0.141683 (-0.027969) | 1.435601 / 1.452155 (-0.016553) | 1.554940 / 1.492716 (0.062224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179288 / 0.018006 (0.161282) | 0.455301 / 0.000490 (0.454811) | 0.001469 / 0.000200 (0.001269) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030928 / 0.037411 (-0.006484) | 0.117833 / 0.014526 (0.103307) | 0.125088 / 0.176557 (-0.051468) | 0.178906 / 0.737135 (-0.558230) | 0.131264 / 0.296338 (-0.165075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436900 / 0.215209 (0.221691) | 4.366094 / 2.077655 (2.288439) | 2.184398 / 1.504120 (0.680278) | 1.992779 / 1.541195 (0.451584) | 2.055260 / 1.468490 (0.586770) | 0.524136 / 4.584777 (-4.060641) | 3.750535 / 3.745712 (0.004823) | 2.985095 / 5.269862 (-2.284767) | 1.400291 / 4.565676 (-3.165385) | 0.065921 / 0.424275 (-0.358354) | 0.012110 / 0.007607 (0.004502) | 0.538239 / 0.226044 (0.312195) | 5.380613 / 2.268929 (3.111685) | 2.637509 / 55.444624 (-52.807116) | 2.352265 / 6.876477 (-4.524212) | 2.409829 / 2.142072 (0.267756) | 0.640428 / 4.805227 (-4.164799) | 0.142070 / 6.500664 (-6.358594) | 0.068171 / 0.075469 (-0.007298) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280080 / 1.841788 (-0.561707) | 15.588799 / 8.074308 (7.514491) | 14.648596 / 10.191392 (4.457204) | 0.147027 / 0.680424 (-0.533397) | 0.018981 / 0.534201 (-0.515220) | 0.394796 / 0.579283 (-0.184487) | 0.423686 / 0.434364 (-0.010678) | 0.467376 / 0.540337 (-0.072961) | 0.562247 / 1.386936 (-0.824689) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#680162303f4c5dae6ad2edef6b3efadded7d37bd \"CML watermark\")\n"
] | 2023-05-15T08:08:42 | 2023-05-25T16:31:24 | 2023-05-25T16:23:17 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5859",
"html_url": "https://github.com/huggingface/datasets/pull/5859",
"diff_url": "https://github.com/huggingface/datasets/pull/5859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5859.patch",
"merged_at": "2023-05-25T16:23:17"
} | Fix #5858. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5859/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5858/comments | https://api.github.com/repos/huggingface/datasets/issues/5858/events | https://github.com/huggingface/datasets/issues/5858 | 1,709,332,632 | I_kwDODunzps5l4liY | 5,858 | Throw an error when dataset improperly indexed | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note that in `datasets` we do not have vectorized operation like `pandas`. Therefore, your equality comparisons above are `False`:\r\n- For example: `squad['question']` returns a `list`, and this list is not equal to `\"Who was the Norse leader?\"`\r\n\r\nThe `False` value is equivalent to `0` when indexing a dataset, thus the reason why you get the first element (with index 0): \r\n- For example: `squad[False]` is equivalent to `squad[0]`\r\n\r\nMaybe we should an exception instead of assuming that `False` is equivalent to `0` (and `True` is equivalent to `1`) in the context of indexing."
] | 2023-05-15T05:15:53 | 2023-05-25T16:23:19 | 2023-05-25T16:23:19 | NONE | null | null | null | ### Describe the bug
Pandas-style subset indexing on a dataset does not throw an error when it arguably should. Instead, it returns the first instance of the dataset regardless of the index condition.
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. `squad = datasets.load_dataset("squad_v2", split="validation")`
2. `item = squad[squad['question'] == "Who was the Norse leader?"]`
or `it = squad[squad['id'] == '56ddde6b9a695914005b962b']`
3. returns the first item in the dataset, which does not satisfy the above conditions:
`{'id': '56ddde6b9a695914005b9628', 'title': 'Normans', 'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ("Norman" comes from "Norseman") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.', 'question': 'In what country is Normandy located?', 'answers': {'text': ['France', 'France', 'France', 'France'], 'answer_start': [159, 159, 159, 159]}}`
### Expected behavior
It should either throw an error, or return the dataset item that satisfies the condition.
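For reference, the fix that followed (#5859, "Raise TypeError when indexing a dataset with bool") amounts to a type guard along these lines — a minimal sketch with illustrative names, not the actual `datasets` internals:
```python
def check_index_key(key):
    # bool is a subclass of int in Python, so the False produced by
    # `squad['question'] == "..."` would otherwise be silently coerced to
    # row index 0; test for bool before treating the key as an int index.
    if isinstance(key, bool):
        raise TypeError(
            f"dataset index must be int, str, slice or collection of int, not {type(key)}"
        )

check_index_key(squad["question"] == "Who was the Norse leader?")  # raises TypeError
```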
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5858/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5857/comments | https://api.github.com/repos/huggingface/datasets/issues/5857/events | https://github.com/huggingface/datasets/issues/5857 | 1,709,326,622 | I_kwDODunzps5l4kEe | 5,857 | Adding chemistry dataset/models in huggingface | {
"login": "knc6",
"id": 16902896,
"node_id": "MDQ6VXNlcjE2OTAyODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/16902896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knc6",
"html_url": "https://github.com/knc6",
"followers_url": "https://api.github.com/users/knc6/followers",
"following_url": "https://api.github.com/users/knc6/following{/other_user}",
"gists_url": "https://api.github.com/users/knc6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knc6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knc6/subscriptions",
"organizations_url": "https://api.github.com/users/knc6/orgs",
"repos_url": "https://api.github.com/users/knc6/repos",
"events_url": "https://api.github.com/users/knc6/events{/privacy}",
"received_events_url": "https://api.github.com/users/knc6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nThis would be a nice addition to the Hub! You can find the existing chemistry datasets/models on the Hub (using the `chemistry` tag) [here](https://huggingface.co/search/full-text?q=chemistry&type=model&type=dataset).\r\n\r\nFeel free to ping us here on the Hub if you need help adding the datasets.\r\n"
] | 2023-05-15T05:09:49 | 2023-07-21T13:45:40 | 2023-07-21T13:45:40 | NONE | null | null | null | ### Feature request
Hugging Face is a really amazing platform for open science.
In addition to computer vision, video, and NLP, would it be of interest to add chemistry/materials-science datasets/models to Hugging Face? Or, if it's already done, can you provide some pointers?
We have been working on a comprehensive benchmark on this topic: [JARVIS-Leaderboard](https://pages.nist.gov/jarvis_leaderboard/), and I am wondering if we could contribute/integrate this project as a part of Hugging Face.
### Motivation
Similar to the mainstream AI field, there is a need for large-scale benchmarks/models/infrastructure for chemistry/materials data.
### Your contribution
We can start adding datasets, as our [benchmarks](https://github.com/usnistgov/jarvis_leaderboard/tree/main/jarvis_leaderboard/benchmarks) should be easily convertible to the dataset format.
"url": "https://api.github.com/repos/huggingface/datasets/issues/5857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5857/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5856/comments | https://api.github.com/repos/huggingface/datasets/issues/5856/events | https://github.com/huggingface/datasets/issues/5856 | 1,709,218,242 | I_kwDODunzps5l4JnC | 5,856 | Error loading natural_questions | {
"login": "Crownor",
"id": 19185508,
"node_id": "MDQ6VXNlcjE5MTg1NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/19185508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crownor",
"html_url": "https://github.com/Crownor",
"followers_url": "https://api.github.com/users/Crownor/followers",
"following_url": "https://api.github.com/users/Crownor/following{/other_user}",
"gists_url": "https://api.github.com/users/Crownor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crownor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crownor/subscriptions",
"organizations_url": "https://api.github.com/users/Crownor/orgs",
"repos_url": "https://api.github.com/users/Crownor/repos",
"events_url": "https://api.github.com/users/Crownor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crownor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! You can avoid this error by using the preprocessed version:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset('natural_questions')\r\n```\r\n\r\nPS: Once we finish https://github.com/huggingface/datasets/pull/5364, this error will no longer be a problem.",
"> Hi! You can avoid this error by using the preprocessed version:\r\n> \r\n> ```python\r\n> import datasets\r\n> ds = datasets.load_dataset('natural_questions')\r\n> ```\r\n> \r\n> PS: Once we finish #5364, this error will no longer be a problem.\r\n\r\nThanks, wish #5364 finish early"
] | 2023-05-15T02:46:04 | 2023-06-05T09:11:19 | 2023-06-05T09:11:18 | NONE | null | null | null | ### Describe the bug
When trying to load natural_questions with datasets == 2.12.0 and Python == 3.8.9:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
It failed with the following error:
`pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs`
### Steps to reproduce the bug
In python console:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
Then the trace is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 2019, in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 694, in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 737, in parquet_to_arrow
for record_batch in parquet_file.iter_batches():
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
### Expected behavior
Load the natural_questions dataset without errors.
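As suggested in the comments above, the preprocessed version on the Hub can be loaded directly, which avoids the Beam preparation step where the `pyarrow` conversion fails:
```python
import datasets

# No `beam_runner` needed: this downloads the already-processed data.
ds = datasets.load_dataset("natural_questions")
```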
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.9
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5856/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5855/comments | https://api.github.com/repos/huggingface/datasets/issues/5855/events | https://github.com/huggingface/datasets/issues/5855 | 1,708,784,943 | I_kwDODunzps5l2f0v | 5,855 | `to_tf_dataset` consumes too much memory | {
"login": "massquantity",
"id": 28751760,
"node_id": "MDQ6VXNlcjI4NzUxNzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/28751760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/massquantity",
"html_url": "https://github.com/massquantity",
"followers_url": "https://api.github.com/users/massquantity/followers",
"following_url": "https://api.github.com/users/massquantity/following{/other_user}",
"gists_url": "https://api.github.com/users/massquantity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/massquantity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/massquantity/subscriptions",
"organizations_url": "https://api.github.com/users/massquantity/orgs",
"repos_url": "https://api.github.com/users/massquantity/repos",
"events_url": "https://api.github.com/users/massquantity/events{/privacy}",
"received_events_url": "https://api.github.com/users/massquantity/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Cc @amyeroberts @Rocketknight1 \r\n\r\nIndded I think it's because it does something like this under the hood when there's no multiprocessing:\r\n\r\n```python\r\ntf_dataset = tf_dataset.shuffle(len(dataset))\r\n```\r\n\r\nPS: with multiprocessing it appears to be different:\r\n\r\n```python\r\nindices = np.arange(len(dataset))\r\nif shuffle:\r\n np.random.shuffle(indices)\r\n```",
"Hi @massquantity, the dataset being shuffled there is not the full dataset. If you look at [the line above](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/tf_utils.py#L182), the dataset is actually just a single indices array at that point, and that array is the only thing that gets fully loaded into memory and shuffled. We then load samples from the dataset by applying a transform function to the shuffled dataset, which fetches samples based on the indices it receives.\r\n\r\nIf your dataset is **really** gigantic, then this index tensor might be a memory issue, but since it's just an int64 tensor it will only use 1GB of memory per 125 million samples.\r\n\r\nStill, if you're encountering memory issues, there might be another cause here - can you share some code to reproduce the error, or does it depend on some internal/proprietary dataset?",
"Hi @Rocketknight1, you're right and I also noticed that only indices are used in shuffling. My data has shape (50000000, 10), but really the problem doesn't relate to a specific dataset. Simply running the following code costs me 10GB of memory.\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for i in range(50000000):\r\n yield {\"data\": i}\r\n\r\nds = Dataset.from_generator(gen, cache_dir=\"./huggingface\")\r\n\r\ntf_ds = ds.to_tf_dataset(\r\n batch_size=1,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n)\r\ntf_ds = iter(tf_ds)\r\nnext(tf_ds)\r\n# {'data': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>}\r\n```\r\n\r\nI just realized maybe it was an issue from tensorflow (I'm using tf 2.12). So I tried the following code, and it used 10GB of memory too.\r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ndata_size = 50000000\r\ntf_dataset = tf.data.Dataset.from_tensor_slices(np.arange(data_size))\r\ntf_dataset = iter(tf_dataset.shuffle(data_size))\r\nnext(tf_dataset)\r\n# <tf.Tensor: shape=(), dtype=int64, numpy=24774043>\r\n```\r\n\r\nBy the way, as @lhoestq mentioned, multiprocessing uses numpy shuffling, and it uses less than 1 GB of memory:\r\n```python\r\ntf_ds_mp = ds.to_tf_dataset(\r\n batch_size=1,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n num_workers=2,\r\n)\r\n```",
"Thanks for that reproduction script - I've confirmed the same issue is occurring for me. Investigating it now!",
"Update: The memory usage is occurring in creation of the index and shuffle buffer. You can reproduce it very simply with:\r\n\r\n```python\r\nimport tensorflow as tf\r\nindices = tf.range(50_000_000, dtype=tf.int64)\r\ndataset = tf.data.Dataset.from_tensor_slices(indices)\r\ndataset = dataset.shuffle(len(dataset))\r\nprint(next(iter(dataset))\r\n```\r\nWhen I wrote this code I thought `tf.data` had an optimization for shuffling an entire tensor that wouldn't create the entire shuffle buffer, but evidently it's just creating the enormous buffer in memory. I'll see if I can find a more efficient way to do this - we might end up moving everything to the `numpy` multiprocessing path to avoid it.",
"I opened a PR to fix this - will continue the discussion there!"
] | 2023-05-14T01:22:29 | 2023-06-08T16:32:52 | 2023-06-08T16:32:52 | NONE | null | null | null | ### Describe the bug
Hi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`.
After some digging, I believe the reason lies in the shuffle behavior. The [source code](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/tf_utils.py#L185) uses `len(dataset)` as the `buffer_size`, which may load all the data into memory, and the [tf.data doc](https://www.tensorflow.org/guide/data#randomly_shuffling_input_data) also states that "While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill".
### Steps to reproduce the bug
```python
from datasets import Dataset
def gen(): # some large data
for i in range(50000000):
yield {"data": i}
ds = Dataset.from_generator(gen, cache_dir="./huggingface")
tf_ds = ds.to_tf_dataset(
batch_size=64,
shuffle=False, # no shuffle
drop_remainder=False,
prefetch=True,
)
# fast and memory friendly 🤗
for batch in tf_ds:
...
tf_ds_shuffle = ds.to_tf_dataset(
batch_size=64,
shuffle=True,
drop_remainder=False,
prefetch=True,
)
# slow and memory hungry for simple iteration 😱
for batch in tf_ds_shuffle:
...
```
### Expected behavior
Shuffling should not load all the data into memory. Would adding a `buffer_size` parameter to the `to_tf_dataset` API alleviate the problem?
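For comparison, this is what a bounded buffer looks like in plain `tf.data`: memory then scales with `buffer_size` instead of with the dataset length. The 100k value below is an arbitrary assumption, trading shuffle quality for memory:
```python
import tensorflow as tf

data_size = 50_000_000
indices = tf.data.Dataset.from_tensor_slices(tf.range(data_size, dtype=tf.int64))
# Only `buffer_size` elements are held in memory at once, not all 50M.
shuffled = indices.shuffle(buffer_size=100_000)
print(next(iter(shuffled)))
```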
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.17.1-051701-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5855/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5854/comments | https://api.github.com/repos/huggingface/datasets/issues/5854/events | https://github.com/huggingface/datasets/issues/5854 | 1,708,779,300 | I_kwDODunzps5l2eck | 5,854 | Can not load audiofolder dataset on kaggle | {
"login": "ILG2021",
"id": 93691919,
"node_id": "U_kgDOBZWgDw",
"avatar_url": "https://avatars.githubusercontent.com/u/93691919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ILG2021",
"html_url": "https://github.com/ILG2021",
"followers_url": "https://api.github.com/users/ILG2021/followers",
"following_url": "https://api.github.com/users/ILG2021/following{/other_user}",
"gists_url": "https://api.github.com/users/ILG2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ILG2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ILG2021/subscriptions",
"organizations_url": "https://api.github.com/users/ILG2021/orgs",
"repos_url": "https://api.github.com/users/ILG2021/repos",
"events_url": "https://api.github.com/users/ILG2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/ILG2021/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! `audiofolder` requires `datasets>=2.5.0`, so please update the `datasets`' installation (`pip install -U datasets`) in the environment to resolve the issue.",
"> Hi! `audiofolder` requires `datasets>=2.5.0`, so please update the `datasets`' installation (`pip install -U datasets`) in the environment to resolve the issue.\r\n\r\nI don't think it is a problem of the version. It runs ok on colab or local machine. Only on kaggle will has this bug.",
"Based on your dataset info, the installed version is `2.1.0`, which does not include `audiofolder`.\r\n\r\nBy default, Kaggle preinstalls `datasets` into a new env, but the version it installs is outdated and does not contain newer features such as `audiofolder`"
] | 2023-05-14T00:50:47 | 2023-07-21T13:53:45 | 2023-07-21T13:53:45 | NONE | null | null | null | ### Describe the bug
The crash log:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/audiofolder/audiofolder.py or any data file in the same directory. Couldn't find 'audiofolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/audiofolder/audiofolder.py
### Steps to reproduce the bug
![image](https://github.com/huggingface/datasets/assets/93691919/a2829d27-d15c-4acc-86fb-d1987c760468)
common_voice = load_dataset("audiofolder", data_dir="/kaggle/working/data")
### Expected behavior
The dataset should load without error. It works OK on Colab, but on Kaggle this error happens.
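Per the comments above, the root cause is that Kaggle preinstalls `datasets==2.1.0`, which predates `audiofolder` (added in `datasets` 2.5.0). Upgrading first makes the same call work:
```python
# In a Kaggle notebook, upgrade datasets first (e.g. `!pip install -U datasets`),
# then the original call succeeds:
from datasets import load_dataset

common_voice = load_dataset("audiofolder", data_dir="/kaggle/working/data")
```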
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.31
- Python version: 3.10.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5854/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5853/comments | https://api.github.com/repos/huggingface/datasets/issues/5853/events | https://github.com/huggingface/datasets/pull/5853 | 1,708,092,786 | PR_kwDODunzps5QaZLP | 5,853 | [docs] Redirects, migrated from nginx | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 note that it's not exactly the same behavior as in nginx as here it interacts a bit with the `version` and the `language`\r\n\r\nShould be close enough, though.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007212 / 0.011353 (-0.004141) | 0.005125 / 0.011008 (-0.005883) | 0.098460 / 0.038508 (0.059952) | 0.034040 / 0.023109 (0.010931) | 0.320203 / 0.275898 (0.044305) | 0.357787 / 0.323480 (0.034307) | 0.006000 / 0.007986 (-0.001986) | 0.005644 / 0.004328 (0.001316) | 0.072654 / 0.004250 (0.068403) | 0.049393 / 0.037052 (0.012341) | 0.345686 / 0.258489 (0.087196) | 0.362345 / 0.293841 (0.068504) | 0.036597 / 0.128546 (-0.091949) | 0.012303 / 0.075646 (-0.063343) | 0.334374 / 0.419271 (-0.084897) | 0.062010 / 0.043533 (0.018477) | 0.312547 / 0.255139 (0.057408) | 0.336021 / 0.283200 (0.052821) | 0.112304 / 0.141683 (-0.029378) | 1.446706 / 1.452155 (-0.005449) | 1.523256 / 1.492716 (0.030540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217658 / 0.018006 (0.199652) | 0.449208 / 0.000490 (0.448718) | 0.002878 / 0.000200 (0.002679) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.105876 / 0.014526 (0.091350) | 0.114887 / 0.176557 (-0.061669) | 0.170984 / 0.737135 (-0.566152) | 0.121420 / 0.296338 (-0.174918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419670 / 0.215209 (0.204461) | 4.189453 / 2.077655 (2.111798) | 1.938236 / 1.504120 (0.434116) | 1.769747 / 1.541195 (0.228553) | 1.910919 / 1.468490 
(0.442429) | 0.705046 / 4.584777 (-3.879730) | 3.783774 / 3.745712 (0.038062) | 2.096504 / 5.269862 (-3.173358) | 1.339265 / 4.565676 (-3.226412) | 0.086670 / 0.424275 (-0.337605) | 0.012243 / 0.007607 (0.004636) | 0.524701 / 0.226044 (0.298657) | 5.240689 / 2.268929 (2.971760) | 2.473622 / 55.444624 (-52.971003) | 2.170568 / 6.876477 (-4.705909) | 2.289653 / 2.142072 (0.147581) | 0.848913 / 4.805227 (-3.956314) | 0.168332 / 6.500664 (-6.332332) | 0.064926 / 0.075469 (-0.010543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193614 / 1.841788 (-0.648173) | 14.920403 / 8.074308 (6.846095) | 14.475059 / 10.191392 (4.283667) | 0.164458 / 0.680424 (-0.515966) | 0.017613 / 0.534201 (-0.516588) | 0.426311 / 0.579283 (-0.152972) | 0.431478 / 0.434364 (-0.002886) | 0.520280 / 0.540337 (-0.020057) | 0.627738 / 1.386936 (-0.759198) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007458 / 0.011353 (-0.003895) | 0.005363 / 0.011008 (-0.005645) | 0.076713 / 0.038508 (0.038205) | 0.034189 / 0.023109 (0.011079) | 0.359938 / 0.275898 (0.084040) | 0.395532 / 0.323480 (0.072052) | 0.005977 / 0.007986 (-0.002008) | 0.004263 / 0.004328 (-0.000065) | 0.075971 / 0.004250 (0.071721) | 0.051924 / 0.037052 (0.014871) | 0.362818 / 0.258489 (0.104329) | 0.409897 / 0.293841 (0.116056) | 0.035494 / 0.128546 (-0.093053) | 0.012399 / 0.075646 (-0.063247) | 0.088335 / 0.419271 (-0.330937) | 0.047968 / 0.043533 (0.004435) | 0.355744 / 0.255139 (0.100606) | 0.376339 / 0.283200 (0.093139) | 0.104542 / 0.141683 (-0.037141) | 1.464826 / 1.452155 (0.012672) | 1.600665 / 1.492716 (0.107948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220841 / 0.018006 (0.202834) | 0.446444 / 0.000490 (0.445954) | 0.000392 / 0.000200 (0.000192) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029402 / 0.037411 (-0.008009) | 0.116511 / 0.014526 (0.101986) | 0.122959 / 0.176557 (-0.053598) | 0.171674 / 0.737135 (-0.565462) | 0.129871 / 0.296338 (-0.166468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450411 / 0.215209 (0.235202) | 4.471859 / 2.077655 (2.394205) | 2.229439 / 1.504120 (0.725319) | 2.053308 / 1.541195 (0.512114) | 2.142476 / 1.468490 (0.673986) | 0.708299 / 4.584777 (-3.876478) | 3.797830 / 3.745712 (0.052118) | 2.142509 / 5.269862 (-3.127352) | 1.333357 / 4.565676 (-3.232320) | 0.086837 / 0.424275 (-0.337439) | 0.012102 / 0.007607 (0.004495) | 0.548428 / 0.226044 (0.322384) | 5.490611 / 2.268929 (3.221682) | 2.713882 / 55.444624 (-52.730742) | 2.399638 / 6.876477 (-4.476839) | 2.481549 / 2.142072 (0.339477) | 0.839812 / 4.805227 (-3.965415) | 0.168890 / 6.500664 (-6.331774) | 0.065564 / 0.075469 (-0.009906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275507 / 1.841788 (-0.566281) | 14.896343 / 8.074308 (6.822035) | 13.159701 / 10.191392 (2.968309) | 0.172065 / 0.680424 (-0.508359) | 0.017507 / 0.534201 (-0.516694) | 0.420031 / 0.579283 (-0.159252) | 0.438835 / 0.434364 (0.004471) | 0.490597 / 0.540337 (-0.049741) | 0.583952 / 1.386936 (-0.802984) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#48c9755d0ae9abe4c4d6cd8c1ce76eff849f0e5c \"CML watermark\")\n"
] | 2023-05-12T19:19:27 | 2023-05-15T10:37:19 | 2023-05-15T10:30:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5853",
"html_url": "https://github.com/huggingface/datasets/pull/5853",
"diff_url": "https://github.com/huggingface/datasets/pull/5853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5853.patch",
"merged_at": "2023-05-15T10:30:14"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5853/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5852/comments | https://api.github.com/repos/huggingface/datasets/issues/5852/events | https://github.com/huggingface/datasets/pull/5852 | 1,707,927,165 | PR_kwDODunzps5QZ1lj | 5,852 | Iterable torch formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006567 / 0.011353 (-0.004786) | 0.004479 / 0.011008 (-0.006530) | 0.028286 / 0.038508 (-0.010222) | 0.033137 / 0.023109 (0.010028) | 0.305249 / 0.275898 (0.029351) | 0.330306 / 0.323480 (0.006826) | 0.003747 / 0.007986 (-0.004238) | 0.004409 / 0.004328 (0.000081) | 0.004742 / 0.004250 (0.000491) | 0.040780 / 0.037052 (0.003728) | 0.302879 / 0.258489 (0.044390) | 0.346880 / 0.293841 (0.053039) | 0.032908 / 0.128546 (-0.095638) | 0.010617 / 0.075646 (-0.065029) | 0.257996 / 0.419271 (-0.161275) | 0.051044 / 0.043533 (0.007511) | 0.306113 / 0.255139 (0.050974) | 0.324444 / 0.283200 (0.041244) | 0.100820 / 0.141683 (-0.040863) | 1.478402 / 1.452155 (0.026248) | 1.599398 / 1.492716 (0.106682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216540 / 0.018006 (0.198534) | 0.433480 / 0.000490 (0.432991) | 0.004032 / 0.000200 (0.003832) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027807 / 0.037411 (-0.009604) | 0.107225 / 0.014526 (0.092699) | 0.120157 / 0.176557 (-0.056400) | 0.174130 / 0.737135 (-0.563005) | 0.128902 / 0.296338 (-0.167437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395996 / 0.215209 (0.180787) | 3.936254 / 2.077655 (1.858599) | 1.808864 / 1.504120 (0.304744) | 1.608935 / 1.541195 (0.067741) | 1.646427 / 1.468490 
(0.177937) | 0.716026 / 4.584777 (-3.868751) | 3.815045 / 3.745712 (0.069333) | 2.271534 / 5.269862 (-2.998327) | 1.548728 / 4.565676 (-3.016948) | 0.076743 / 0.424275 (-0.347532) | 0.011575 / 0.007607 (0.003968) | 0.499202 / 0.226044 (0.273158) | 4.983754 / 2.268929 (2.714825) | 2.239319 / 55.444624 (-53.205306) | 1.919427 / 6.876477 (-4.957050) | 2.019664 / 2.142072 (-0.122408) | 0.866318 / 4.805227 (-3.938910) | 0.157309 / 6.500664 (-6.343355) | 0.063341 / 0.075469 (-0.012128) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180817 / 1.841788 (-0.660971) | 14.579869 / 8.074308 (6.505561) | 14.277848 / 10.191392 (4.086456) | 0.182560 / 0.680424 (-0.497863) | 0.017402 / 0.534201 (-0.516799) | 0.411549 / 0.579283 (-0.167734) | 0.432938 / 0.434364 (-0.001426) | 0.545067 / 0.540337 (0.004730) | 0.642173 / 1.386936 (-0.744763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004590 / 0.011008 (-0.006418) | 0.006111 / 0.038508 (-0.032397) | 0.032763 / 0.023109 (0.009654) | 0.401001 / 0.275898 (0.125103) | 0.428063 / 0.323480 (0.104583) | 0.003730 / 0.007986 (-0.004255) | 0.004617 / 0.004328 (0.000289) | 0.004770 / 0.004250 (0.000519) | 0.049718 / 0.037052 (0.012666) | 0.399724 / 0.258489 (0.141235) | 0.440292 / 0.293841 (0.146451) | 0.032846 / 0.128546 (-0.095700) | 0.010842 / 0.075646 (-0.064804) | 0.012642 / 0.419271 (-0.406630) | 0.046043 / 0.043533 (0.002510) | 0.390862 / 0.255139 (0.135723) | 0.407027 / 0.283200 (0.123828) | 0.099349 / 0.141683 (-0.042334) | 1.455739 / 1.452155 (0.003584) | 1.572214 / 1.492716 (0.079497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227186 / 0.018006 (0.209180) | 0.447404 / 0.000490 (0.446914) | 0.000400 / 0.000200 (0.000200) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029830 / 0.037411 (-0.007581) | 0.112365 / 0.014526 (0.097839) | 0.125736 / 0.176557 (-0.050821) | 0.174781 / 0.737135 (-0.562354) | 0.129439 / 0.296338 (-0.166900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444438 / 0.215209 (0.229229) | 4.459381 / 2.077655 (2.381726) | 2.264541 / 1.504120 (0.760421) | 2.075257 / 1.541195 (0.534062) | 2.181289 / 1.468490 (0.712799) | 0.725279 / 4.584777 (-3.859498) | 3.863253 / 3.745712 (0.117541) | 2.132498 / 5.269862 (-3.137364) | 1.402003 / 4.565676 (-3.163673) | 0.084268 / 0.424275 (-0.340007) | 0.011762 / 0.007607 (0.004155) | 0.556239 / 0.226044 (0.330194) | 5.617998 / 2.268929 (3.349070) | 2.754789 / 55.444624 (-52.689835) | 2.418418 / 6.876477 (-4.458059) | 2.479696 / 2.142072 (0.337624) | 0.870037 / 4.805227 (-3.935190) | 0.160480 / 6.500664 (-6.340184) | 0.064464 / 0.075469 (-0.011005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290916 / 1.841788 (-0.550872) | 14.783173 / 8.074308 (6.708865) | 13.355883 / 10.191392 (3.164491) | 0.169963 / 0.680424 (-0.510461) | 0.017657 / 0.534201 (-0.516544) | 0.409218 / 0.579283 (-0.170065) | 0.422942 / 0.434364 (-0.011422) | 0.494968 / 0.540337 (-0.045369) | 0.587044 / 1.386936 (-0.799892) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2051e912d9525bc38a1caf295df0620619c488eb \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007183 / 0.011353 (-0.004169) | 0.004586 / 0.011008 (-0.006423) | 0.032668 / 0.038508 (-0.005840) | 0.040896 / 0.023109 (0.017787) | 0.358225 / 0.275898 (0.082327) | 0.395063 / 0.323480 (0.071583) | 0.004540 / 0.007986 (-0.003446) | 0.003849 / 0.004328 (-0.000480) | 0.005521 / 0.004250 (0.001271) | 0.053314 / 0.037052 (0.016262) | 0.362417 / 0.258489 (0.103928) | 0.414337 / 0.293841 (0.120496) | 0.030698 / 0.128546 (-0.097849) | 0.008823 / 0.075646 (-0.066823) | 0.303583 / 0.419271 (-0.115689) | 0.060277 / 0.043533 (0.016744) | 0.365938 / 0.255139 (0.110799) | 0.379554 / 0.283200 (0.096354) | 0.122545 / 0.141683 (-0.019138) | 1.712098 / 1.452155 (0.259943) | 1.802036 / 1.492716 (0.309319) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239508 / 0.018006 (0.221502) | 0.492194 / 0.000490 (0.491704) | 0.003280 / 0.000200 (0.003081) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033301 / 0.037411 (-0.004110) | 0.125851 / 0.014526 (0.111325) | 0.137757 / 0.176557 (-0.038799) | 0.207603 / 0.737135 (-0.529533) | 0.143507 / 0.296338 (-0.152831) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470662 / 0.215209 (0.255453) | 4.736017 / 2.077655 (2.658363) | 2.154152 / 1.504120 (0.650032) | 1.954243 / 1.541195 (0.413048) | 2.080186 / 1.468490 
(0.611696) | 0.622884 / 4.584777 (-3.961893) | 4.385885 / 3.745712 (0.640173) | 2.262085 / 5.269862 (-3.007776) | 1.454215 / 4.565676 (-3.111462) | 0.067342 / 0.424275 (-0.356933) | 0.012913 / 0.007607 (0.005306) | 0.600676 / 0.226044 (0.374631) | 5.915093 / 2.268929 (3.646164) | 2.664915 / 55.444624 (-52.779709) | 2.286986 / 6.876477 (-4.589490) | 2.387776 / 2.142072 (0.245704) | 0.757067 / 4.805227 (-4.048160) | 0.154625 / 6.500664 (-6.346039) | 0.074632 / 0.075469 (-0.000838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.413229 / 1.841788 (-0.428558) | 17.433012 / 8.074308 (9.358704) | 16.980340 / 10.191392 (6.788948) | 0.218943 / 0.680424 (-0.461481) | 0.020525 / 0.534201 (-0.513676) | 0.451847 / 0.579283 (-0.127436) | 0.495587 / 0.434364 (0.061223) | 0.548739 / 0.540337 (0.008402) | 0.662120 / 1.386936 (-0.724816) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006775 / 0.011353 (-0.004577) | 0.004556 / 0.011008 (-0.006452) | 0.006462 / 0.038508 (-0.032046) | 0.039073 / 0.023109 (0.015964) | 0.429249 / 0.275898 (0.153351) | 0.469946 / 0.323480 (0.146467) | 0.004402 / 0.007986 (-0.003584) | 0.003798 / 0.004328 (-0.000530) | 0.005347 / 0.004250 (0.001097) | 0.053743 / 0.037052 (0.016691) | 0.434635 / 0.258489 (0.176146) | 0.475661 / 0.293841 (0.181820) | 0.029891 / 0.128546 (-0.098656) | 0.009058 / 0.075646 (-0.066588) | 0.010987 / 0.419271 (-0.408284) | 0.053877 / 0.043533 (0.010344) | 0.434428 / 0.255139 (0.179289) | 0.449637 / 0.283200 (0.166437) | 0.124331 / 0.141683 (-0.017352) | 1.736083 / 1.452155 (0.283928) | 1.831632 / 1.492716 (0.338916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248428 / 0.018006 (0.230422) | 0.493113 / 0.000490 (0.492623) | 0.000429 / 0.000200 (0.000229) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031337 / 0.037411 (-0.006074) | 0.132360 / 0.014526 (0.117834) | 0.134734 / 0.176557 (-0.041822) | 0.193811 / 0.737135 (-0.543324) | 0.146883 / 0.296338 (-0.149456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510876 / 0.215209 (0.295666) | 5.170198 / 2.077655 (3.092543) | 2.572105 / 1.504120 (1.067985) | 2.316918 / 1.541195 (0.775723) | 2.449316 / 1.468490 (0.980826) | 0.612219 / 4.584777 (-3.972558) | 4.456740 / 3.745712 (0.711028) | 2.099757 / 5.269862 (-3.170105) | 1.293017 / 4.565676 (-3.272660) | 0.067922 / 0.424275 (-0.356353) | 0.013467 / 0.007607 (0.005860) | 0.634240 / 0.226044 (0.408196) | 6.373111 / 2.268929 (4.104182) | 3.171567 / 55.444624 (-52.273057) | 2.763411 / 6.876477 (-4.113066) | 2.845557 / 2.142072 (0.703485) | 0.763431 / 4.805227 (-4.041797) | 0.155949 / 6.500664 (-6.344715) | 0.076264 / 0.075469 (0.000795) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.468075 / 1.841788 (-0.373713) | 17.582354 / 8.074308 (9.508046) | 16.565964 / 10.191392 (6.374572) | 0.163779 / 0.680424 (-0.516644) | 0.020472 / 0.534201 (-0.513728) | 0.444416 / 0.579283 (-0.134867) | 0.488471 / 0.434364 (0.054107) | 0.550661 / 0.540337 (0.010323) | 0.667230 / 1.386936 (-0.719706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3655cbf1c627c945e393641d35298a166f1e4bf5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006160 / 0.011353 (-0.005193) | 0.004093 / 0.011008 (-0.006915) | 0.056485 / 0.038508 (0.017977) | 0.033637 / 0.023109 (0.010528) | 0.296448 / 0.275898 (0.020550) | 0.332532 / 0.323480 (0.009052) | 0.003864 / 0.007986 (-0.004122) | 0.003446 / 0.004328 (-0.000883) | 0.034808 / 0.004250 (0.030558) | 0.048567 / 0.037052 (0.011514) | 0.296090 / 0.258489 (0.037601) | 0.336067 / 0.293841 (0.042226) | 0.026081 / 0.128546 (-0.102465) | 0.007875 / 0.075646 (-0.067771) | 0.286049 / 0.419271 (-0.133222) | 0.050411 / 0.043533 (0.006878) | 0.297016 / 0.255139 (0.041877) | 0.320030 / 0.283200 (0.036830) | 0.110374 / 0.141683 (-0.031308) | 1.432470 / 1.452155 (-0.019684) | 1.492479 / 1.492716 (-0.000238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262352 / 0.018006 (0.244346) | 0.557956 / 0.000490 (0.557467) | 0.010296 / 0.000200 (0.010096) | 0.000315 / 0.000054 (0.000260) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028801 / 0.037411 (-0.008611) | 0.109844 / 0.014526 (0.095318) | 0.122333 / 0.176557 (-0.054224) | 0.180571 / 0.737135 (-0.556564) | 0.125990 / 0.296338 (-0.170348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401643 / 0.215209 (0.186434) | 4.020993 / 2.077655 (1.943338) | 1.815256 / 1.504120 (0.311136) | 1.619579 / 1.541195 (0.078384) | 1.708889 / 1.468490 
(0.240398) | 0.537847 / 4.584777 (-4.046930) | 3.743331 / 3.745712 (-0.002381) | 1.779891 / 5.269862 (-3.489970) | 1.021423 / 4.565676 (-3.544253) | 0.058869 / 0.424275 (-0.365406) | 0.011826 / 0.007607 (0.004218) | 0.499665 / 0.226044 (0.273621) | 4.980928 / 2.268929 (2.712000) | 2.285664 / 55.444624 (-53.158960) | 1.936553 / 6.876477 (-4.939923) | 2.090428 / 2.142072 (-0.051645) | 0.655218 / 4.805227 (-4.150009) | 0.133178 / 6.500664 (-6.367486) | 0.062991 / 0.075469 (-0.012478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168895 / 1.841788 (-0.672892) | 14.656773 / 8.074308 (6.582465) | 13.737921 / 10.191392 (3.546529) | 0.145383 / 0.680424 (-0.535041) | 0.017614 / 0.534201 (-0.516587) | 0.386499 / 0.579283 (-0.192784) | 0.425626 / 0.434364 (-0.008738) | 0.389572 / 0.540337 (-0.150766) | 0.386753 / 1.386936 (-1.000183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005998 / 0.011353 (-0.005355) | 0.004265 / 0.011008 (-0.006743) | 0.034743 / 0.038508 (-0.003766) | 0.033929 / 0.023109 (0.010820) | 0.405535 / 0.275898 (0.129636) | 0.407235 / 0.323480 (0.083755) | 0.003972 / 0.007986 (-0.004013) | 0.003616 / 0.004328 (-0.000712) | 0.035278 / 0.004250 (0.031027) | 0.052990 / 0.037052 (0.015937) | 0.405228 / 0.258489 (0.146739) | 0.415007 / 0.293841 (0.121166) | 0.025951 / 0.128546 (-0.102595) | 0.007990 / 0.075646 (-0.067656) | 0.040492 / 0.419271 (-0.378779) | 0.049123 / 0.043533 (0.005591) | 0.399282 / 0.255139 (0.144143) | 0.384303 / 0.283200 (0.101103) | 0.115234 / 0.141683 (-0.026448) | 1.476904 / 1.452155 (0.024749) | 1.627191 / 1.492716 (0.134475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209211 / 0.018006 (0.191205) | 0.566718 / 0.000490 (0.566228) | 0.002094 / 0.000200 (0.001894) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030885 / 0.037411 (-0.006526) | 0.110777 / 0.014526 (0.096251) | 0.124382 / 0.176557 (-0.052174) | 0.175081 / 0.737135 (-0.562054) | 0.130263 / 0.296338 (-0.166075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448091 / 0.215209 (0.232882) | 4.484404 / 2.077655 (2.406749) | 2.278438 / 1.504120 (0.774318) | 2.087933 / 1.541195 (0.546738) | 2.186709 / 1.468490 (0.718219) | 0.534822 / 4.584777 (-4.049955) | 3.778229 / 3.745712 (0.032517) | 3.312334 / 5.269862 (-1.957528) | 1.557209 / 4.565676 (-3.008467) | 0.058923 / 0.424275 (-0.365352) | 0.011350 / 0.007607 (0.003743) | 0.550470 / 0.226044 (0.324426) | 5.480347 / 2.268929 (3.211419) | 2.781709 / 55.444624 (-52.662915) | 2.478729 / 6.876477 (-4.397748) | 2.492001 / 2.142072 (0.349929) | 0.652649 / 4.805227 (-4.152578) | 0.131334 / 6.500664 (-6.369330) | 0.065619 / 0.075469 (-0.009850) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253998 / 1.841788 (-0.587790) | 15.207433 / 8.074308 (7.133124) | 14.627842 / 10.191392 (4.436450) | 0.146947 / 0.680424 (-0.533477) | 0.017533 / 0.534201 (-0.516668) | 0.391627 / 0.579283 (-0.187656) | 0.431113 / 0.434364 (-0.003251) | 0.413886 / 0.540337 (-0.126451) | 0.414483 / 1.386936 (-0.972453) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f4e98701590a4922050051eb0f4d63e6125723d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007741 / 0.011353 (-0.003612) | 0.004584 / 0.011008 (-0.006424) | 0.067869 / 0.038508 (0.029361) | 0.041612 / 0.023109 (0.018503) | 0.377878 / 0.275898 (0.101980) | 0.421633 / 0.323480 (0.098153) | 0.004614 / 0.007986 (-0.003371) | 0.003824 / 0.004328 (-0.000504) | 0.041479 / 0.004250 (0.037229) | 0.053309 / 0.037052 (0.016256) | 0.390147 / 0.258489 (0.131658) | 0.437706 / 0.293841 (0.143865) | 0.035951 / 0.128546 (-0.092595) | 0.009231 / 0.075646 (-0.066415) | 0.357572 / 0.419271 (-0.061699) | 0.081332 / 0.043533 (0.037799) | 0.370076 / 0.255139 (0.114937) | 0.423653 / 0.283200 (0.140453) | 0.141401 / 0.141683 (-0.000282) | 1.722744 / 1.452155 (0.270589) | 1.914668 / 1.492716 (0.421952) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256568 / 0.018006 (0.238562) | 0.512243 / 0.000490 (0.511753) | 0.019913 / 0.000200 (0.019713) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031742 / 0.037411 (-0.005670) | 0.128537 / 0.014526 (0.114011) | 0.139962 / 0.176557 (-0.036594) | 0.210711 / 0.737135 (-0.526424) | 0.147162 / 0.296338 (-0.149177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509518 / 0.215209 (0.294309) | 5.083788 / 2.077655 (3.006134) | 2.455381 / 1.504120 (0.951262) | 2.208078 / 1.541195 (0.666883) | 2.341807 / 1.468490 
(0.873317) | 0.580014 / 4.584777 (-4.004763) | 4.599492 / 3.745712 (0.853780) | 2.403249 / 5.269862 (-2.866612) | 1.559177 / 4.565676 (-3.006500) | 0.072846 / 0.424275 (-0.351429) | 0.017327 / 0.007607 (0.009720) | 0.627747 / 0.226044 (0.401703) | 6.242586 / 2.268929 (3.973657) | 2.982875 / 55.444624 (-52.461750) | 2.588645 / 6.876477 (-4.287832) | 2.765915 / 2.142072 (0.623843) | 0.720455 / 4.805227 (-4.084772) | 0.157474 / 6.500664 (-6.343190) | 0.074295 / 0.075469 (-0.001174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540799 / 1.841788 (-0.300988) | 18.054632 / 8.074308 (9.980324) | 16.544036 / 10.191392 (6.352644) | 0.201423 / 0.680424 (-0.479001) | 0.020497 / 0.534201 (-0.513704) | 0.496275 / 0.579283 (-0.083008) | 0.547380 / 0.434364 (0.113017) | 0.614605 / 0.540337 (0.074267) | 0.749889 / 1.386936 (-0.637047) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006963 / 0.011353 (-0.004389) | 0.004543 / 0.011008 (-0.006465) | 0.039530 / 0.038508 (0.001022) | 0.038420 / 0.023109 (0.015311) | 0.454885 / 0.275898 (0.178987) | 0.491731 / 0.323480 (0.168251) | 0.004211 / 0.007986 (-0.003775) | 0.003673 / 0.004328 (-0.000655) | 0.038735 / 0.004250 (0.034484) | 0.052085 / 0.037052 (0.015032) | 0.448924 / 0.258489 (0.190435) | 0.499254 / 0.293841 (0.205413) | 0.030069 / 0.128546 (-0.098477) | 0.009082 / 0.075646 (-0.066565) | 0.047181 / 0.419271 (-0.372090) | 0.054758 / 0.043533 (0.011225) | 0.445035 / 0.255139 (0.189896) | 0.475090 / 0.283200 (0.191891) | 0.122641 / 0.141683 (-0.019042) | 1.706514 / 1.452155 (0.254360) | 1.855726 / 1.492716 (0.363010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246028 / 0.018006 (0.228022) | 0.486382 / 0.000490 (0.485892) | 0.003038 / 0.000200 (0.002838) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034298 / 0.037411 (-0.003113) | 0.135364 / 0.014526 (0.120838) | 0.146102 / 0.176557 (-0.030455) | 0.207997 / 0.737135 (-0.529139) | 0.153119 / 0.296338 (-0.143219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528758 / 0.215209 (0.313549) | 5.243303 / 2.077655 (3.165648) | 2.617194 / 1.504120 (1.113074) | 2.400740 / 1.541195 (0.859545) | 2.534692 / 1.468490 (1.066202) | 0.585825 / 4.584777 (-3.998952) | 4.879766 / 3.745712 (1.134054) | 2.377419 / 5.269862 (-2.892443) | 1.460711 / 4.565676 (-3.104966) | 0.075572 / 0.424275 (-0.348703) | 0.013650 / 0.007607 (0.006042) | 0.697103 / 0.226044 (0.471058) | 6.444984 / 2.268929 (4.176055) | 3.227662 / 55.444624 (-52.216963) | 2.875163 / 6.876477 (-4.001314) | 2.860953 / 2.142072 (0.718881) | 0.718908 / 4.805227 (-4.086319) | 0.158005 / 6.500664 (-6.342659) | 0.077581 / 0.075469 (0.002112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.653027 / 1.841788 (-0.188760) | 18.789342 / 8.074308 (10.715034) | 16.762678 / 10.191392 (6.571286) | 0.238920 / 0.680424 (-0.441504) | 0.020698 / 0.534201 (-0.513502) | 0.512634 / 0.579283 (-0.066649) | 0.542235 / 0.434364 (0.107871) | 0.626634 / 0.540337 (0.086297) | 0.753324 / 1.386936 (-0.633612) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f978ad8bec6e5e77868c6ffcc6f514354a03901d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005737 / 0.011353 (-0.005616) | 0.003767 / 0.011008 (-0.007241) | 0.097792 / 0.038508 (0.059284) | 0.028466 / 0.023109 (0.005356) | 0.317703 / 0.275898 (0.041805) | 0.359512 / 0.323480 (0.036032) | 0.003428 / 0.007986 (-0.004558) | 0.002848 / 0.004328 (-0.001481) | 0.075668 / 0.004250 (0.071418) | 0.037165 / 0.037052 (0.000113) | 0.329539 / 0.258489 (0.071050) | 0.361365 / 0.293841 (0.067524) | 0.024777 / 0.128546 (-0.103769) | 0.008324 / 0.075646 (-0.067323) | 0.317346 / 0.419271 (-0.101926) | 0.043296 / 0.043533 (-0.000237) | 0.315318 / 0.255139 (0.060179) | 0.347641 / 0.283200 (0.064441) | 0.089551 / 0.141683 (-0.052132) | 1.506335 / 1.452155 (0.054180) | 1.573931 / 1.492716 (0.081215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208041 / 0.018006 (0.190034) | 0.428198 / 0.000490 (0.427708) | 0.002568 / 0.000200 (0.002369) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023745 / 0.037411 (-0.013667) | 0.096256 / 0.014526 (0.081730) | 0.104917 / 0.176557 (-0.071639) | 0.164341 / 0.737135 (-0.572794) | 0.107972 / 0.296338 (-0.188367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453995 / 0.215209 (0.238786) | 4.546892 / 2.077655 (2.469238) | 2.185498 / 1.504120 (0.681378) | 1.989156 / 1.541195 (0.447962) | 2.053443 / 1.468490 
(0.584953) | 0.559940 / 4.584777 (-4.024837) | 3.420759 / 3.745712 (-0.324954) | 1.771528 / 5.269862 (-3.498333) | 1.139692 / 4.565676 (-3.425984) | 0.067686 / 0.424275 (-0.356589) | 0.011729 / 0.007607 (0.004122) | 0.558001 / 0.226044 (0.331957) | 5.583886 / 2.268929 (3.314957) | 2.678726 / 55.444624 (-52.765899) | 2.324127 / 6.876477 (-4.552350) | 2.472805 / 2.142072 (0.330733) | 0.663163 / 4.805227 (-4.142065) | 0.134892 / 6.500664 (-6.365772) | 0.066722 / 0.075469 (-0.008747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195200 / 1.841788 (-0.646587) | 13.602517 / 8.074308 (5.528209) | 14.036344 / 10.191392 (3.844952) | 0.143759 / 0.680424 (-0.536665) | 0.017215 / 0.534201 (-0.516986) | 0.383749 / 0.579283 (-0.195534) | 0.388229 / 0.434364 (-0.046134) | 0.469366 / 0.540337 (-0.070971) | 0.560408 / 1.386936 (-0.826528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005953 / 0.011353 (-0.005400) | 0.003840 / 0.011008 (-0.007168) | 0.077481 / 0.038508 (0.038973) | 0.028318 / 0.023109 (0.005209) | 0.403991 / 0.275898 (0.128093) | 0.433374 / 0.323480 (0.109894) | 0.003572 / 0.007986 (-0.004414) | 0.003033 / 0.004328 (-0.001295) | 0.075873 / 0.004250 (0.071623) | 0.039321 / 0.037052 (0.002269) | 0.416790 / 0.258489 (0.158301) | 0.459368 / 0.293841 (0.165527) | 0.025270 / 0.128546 (-0.103276) | 0.008574 / 0.075646 (-0.067072) | 0.083376 / 0.419271 (-0.335896) | 0.043206 / 0.043533 (-0.000327) | 0.404831 / 0.255139 (0.149692) | 0.418559 / 0.283200 (0.135360) | 0.099135 / 0.141683 (-0.042548) | 1.501315 / 1.452155 (0.049160) | 1.583912 / 1.492716 (0.091195) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241510 / 0.018006 (0.223504) | 0.410473 / 0.000490 (0.409983) | 0.001857 / 0.000200 (0.001657) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025366 / 0.037411 (-0.012045) | 0.103353 / 0.014526 (0.088828) | 0.107934 / 0.176557 (-0.068622) | 0.162388 / 0.737135 (-0.574747) | 0.113550 / 0.296338 (-0.182789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463529 / 0.215209 (0.248320) | 4.657688 / 2.077655 (2.580034) | 2.455088 / 1.504120 (0.950968) | 2.304833 / 1.541195 (0.763638) | 2.317520 / 1.468490 (0.849029) | 0.563395 / 4.584777 (-4.021382) | 3.408489 / 3.745712 (-0.337223) | 2.636379 / 5.269862 (-2.633482) | 1.425355 / 4.565676 (-3.140322) | 0.068335 / 0.424275 (-0.355940) | 0.011713 / 0.007607 (0.004106) | 0.550230 / 0.226044 (0.324186) | 5.519843 / 2.268929 (3.250915) | 2.864986 / 55.444624 (-52.579639) | 2.604821 / 6.876477 (-4.271655) | 2.701501 / 2.142072 (0.559428) | 0.668193 / 4.805227 (-4.137034) | 0.134739 / 6.500664 (-6.365925) | 0.067110 / 0.075469 (-0.008359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.326358 / 1.841788 (-0.515430) | 14.184172 / 8.074308 (6.109864) | 14.139245 / 10.191392 (3.947853) | 0.151881 / 0.680424 (-0.528542) | 0.016718 / 0.534201 (-0.517483) | 0.367035 / 0.579283 (-0.212248) | 0.393512 / 0.434364 (-0.040852) | 0.441261 / 0.540337 (-0.099076) | 0.533907 / 1.386936 (-0.853029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#54098759d023f0b3e8eccd2dd98d46a1c6d19cce \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006275 / 0.011353 (-0.005078) | 0.003980 / 0.011008 (-0.007028) | 0.097617 / 0.038508 (0.059109) | 0.034089 / 0.023109 (0.010980) | 0.297381 / 0.275898 (0.021483) | 0.330106 / 0.323480 (0.006626) | 0.003838 / 0.007986 (-0.004148) | 0.004042 / 0.004328 (-0.000287) | 0.074305 / 0.004250 (0.070055) | 0.048318 / 0.037052 (0.011265) | 0.295585 / 0.258489 (0.037096) | 0.346924 / 0.293841 (0.053083) | 0.027397 / 0.128546 (-0.101150) | 0.008452 / 0.075646 (-0.067194) | 0.326837 / 0.419271 (-0.092435) | 0.049515 / 0.043533 (0.005982) | 0.303931 / 0.255139 (0.048792) | 0.317647 / 0.283200 (0.034447) | 0.098280 / 0.141683 (-0.043403) | 1.442603 / 1.452155 (-0.009552) | 1.524050 / 1.492716 (0.031334) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215095 / 0.018006 (0.197089) | 0.437662 / 0.000490 (0.437173) | 0.009771 / 0.000200 (0.009571) | 0.000401 / 0.000054 (0.000346) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027169 / 0.037411 (-0.010243) | 0.111383 / 0.014526 (0.096857) | 0.116163 / 0.176557 (-0.060394) | 0.173134 / 0.737135 (-0.564001) | 0.122376 / 0.296338 (-0.173962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398332 / 0.215209 (0.183123) | 3.974166 / 2.077655 (1.896511) | 1.793847 / 1.504120 (0.289727) | 1.615117 / 1.541195 (0.073922) | 1.660288 / 1.468490 
(0.191798) | 0.523833 / 4.584777 (-4.060944) | 3.704273 / 3.745712 (-0.041439) | 1.873308 / 5.269862 (-3.396554) | 1.203546 / 4.565676 (-3.362131) | 0.064949 / 0.424275 (-0.359326) | 0.011830 / 0.007607 (0.004223) | 0.497294 / 0.226044 (0.271250) | 4.948663 / 2.268929 (2.679735) | 2.233391 / 55.444624 (-53.211234) | 1.903208 / 6.876477 (-4.973269) | 2.067908 / 2.142072 (-0.074164) | 0.644256 / 4.805227 (-4.160971) | 0.142798 / 6.500664 (-6.357866) | 0.064734 / 0.075469 (-0.010735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172313 / 1.841788 (-0.669475) | 14.665853 / 8.074308 (6.591545) | 13.147051 / 10.191392 (2.955659) | 0.139338 / 0.680424 (-0.541086) | 0.017452 / 0.534201 (-0.516749) | 0.395660 / 0.579283 (-0.183623) | 0.410138 / 0.434364 (-0.024226) | 0.460357 / 0.540337 (-0.079980) | 0.555670 / 1.386936 (-0.831266) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006247 / 0.011353 (-0.005106) | 0.004098 / 0.011008 (-0.006910) | 0.075050 / 0.038508 (0.036542) | 0.033232 / 0.023109 (0.010122) | 0.384139 / 0.275898 (0.108241) | 0.420865 / 0.323480 (0.097385) | 0.003889 / 0.007986 (-0.004096) | 0.003336 / 0.004328 (-0.000993) | 0.073837 / 0.004250 (0.069587) | 0.048775 / 0.037052 (0.011723) | 0.386373 / 0.258489 (0.127884) | 0.421718 / 0.293841 (0.127878) | 0.027553 / 0.128546 (-0.100993) | 0.008724 / 0.075646 (-0.066922) | 0.080970 / 0.419271 (-0.338302) | 0.045981 / 0.043533 (0.002448) | 0.364381 / 0.255139 (0.109242) | 0.391203 / 0.283200 (0.108004) | 0.101681 / 0.141683 (-0.040002) | 1.469533 / 1.452155 (0.017378) | 1.562016 / 1.492716 (0.069300) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222318 / 0.018006 (0.204312) | 0.441395 / 0.000490 (0.440905) | 0.000408 / 0.000200 (0.000208) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030291 / 0.037411 (-0.007120) | 0.114053 / 0.014526 (0.099527) | 0.123124 / 0.176557 (-0.053433) | 0.173474 / 0.737135 (-0.563661) | 0.129946 / 0.296338 (-0.166393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430342 / 0.215209 (0.215133) | 4.309782 / 2.077655 (2.232128) | 2.110668 / 1.504120 (0.606548) | 1.922881 / 1.541195 (0.381687) | 1.993562 / 1.468490 (0.525072) | 0.523682 / 4.584777 (-4.061095) | 3.774152 / 3.745712 (0.028440) | 3.354783 / 5.269862 (-1.915079) | 1.489793 / 4.565676 (-3.075884) | 0.065169 / 0.424275 (-0.359107) | 0.011626 / 0.007607 (0.004019) | 0.539126 / 0.226044 (0.313081) | 5.372593 / 2.268929 (3.103664) | 2.570652 / 55.444624 (-52.873973) | 2.253353 / 6.876477 (-4.623123) | 2.312876 / 2.142072 (0.170804) | 0.644241 / 4.805227 (-4.160986) | 0.138326 / 6.500664 (-6.362338) | 0.064491 / 0.075469 (-0.010979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344164 / 1.841788 (-0.497624) | 15.124679 / 8.074308 (7.050371) | 14.799310 / 10.191392 (4.607918) | 0.149054 / 0.680424 (-0.531370) | 0.017564 / 0.534201 (-0.516637) | 0.394593 / 0.579283 (-0.184690) | 0.428768 / 0.434364 (-0.005596) | 0.468235 / 0.540337 (-0.072103) | 0.557384 / 1.386936 (-0.829552) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8bfac259e2b5047bb8a0cdcefc8357477ebf93c \"CML watermark\")\n",
"@albertvillanova could you take a look at this one ? It directly follows the arrow formatting PR",
"I added tests for the `__array__` case which lets you go from any tensor format to any other tensor format.\r\n\r\nI also properly deprecated format_type and added a warning message.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.005177 / 0.011008 (-0.005831) | 0.131058 / 0.038508 (0.092550) | 0.035959 / 0.023109 (0.012850) | 0.414071 / 0.275898 (0.138173) | 0.429628 / 0.323480 (0.106148) | 0.005151 / 0.007986 (-0.002834) | 0.003979 / 0.004328 (-0.000349) | 0.103209 / 0.004250 (0.098958) | 0.046200 / 0.037052 (0.009148) | 0.414020 / 0.258489 (0.155531) | 0.475748 / 0.293841 (0.181907) | 0.041031 / 0.128546 (-0.087515) | 0.014462 / 0.075646 (-0.061185) | 0.423706 / 0.419271 (0.004434) | 0.063488 / 0.043533 (0.019955) | 0.404937 / 0.255139 (0.149798) | 0.404973 / 0.283200 (0.121773) | 0.114982 / 0.141683 (-0.026701) | 1.911867 / 1.452155 (0.459713) | 1.925274 / 1.492716 (0.432557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284656 / 0.018006 (0.266650) | 0.588329 / 0.000490 (0.587840) | 0.007092 / 0.000200 (0.006892) | 0.000143 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025136 / 0.037411 (-0.012275) | 0.109514 / 0.014526 (0.094988) | 0.117953 / 0.176557 (-0.058603) | 0.195454 / 0.737135 (-0.541682) | 0.134243 / 0.296338 (-0.162096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584045 / 0.215209 (0.368836) | 6.456922 / 2.077655 (4.379267) | 2.759728 / 1.504120 (1.255608) | 2.260913 / 1.541195 (0.719718) | 2.292535 / 1.468490 
(0.824045) | 0.906873 / 4.584777 (-3.677904) | 5.554455 / 3.745712 (1.808743) | 4.881557 / 5.269862 (-0.388305) | 2.509121 / 4.565676 (-2.056555) | 0.107191 / 0.424275 (-0.317084) | 0.014684 / 0.007607 (0.007077) | 0.761625 / 0.226044 (0.535580) | 7.582708 / 2.268929 (5.313780) | 3.150160 / 55.444624 (-52.294464) | 2.792284 / 6.876477 (-4.084193) | 2.881321 / 2.142072 (0.739248) | 1.108353 / 4.805227 (-3.696874) | 0.220129 / 6.500664 (-6.280535) | 0.075877 / 0.075469 (0.000408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.465743 / 1.841788 (-0.376045) | 17.679219 / 8.074308 (9.604911) | 18.929399 / 10.191392 (8.738007) | 0.219488 / 0.680424 (-0.460935) | 0.028435 / 0.534201 (-0.505766) | 0.512623 / 0.579283 (-0.066660) | 0.619983 / 0.434364 (0.185619) | 0.603430 / 0.540337 (0.063092) | 0.730416 / 1.386936 (-0.656520) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008285 / 0.011353 (-0.003068) | 0.005771 / 0.011008 (-0.005237) | 0.106444 / 0.038508 (0.067936) | 0.035078 / 0.023109 (0.011969) | 0.441198 / 0.275898 (0.165300) | 0.536279 / 0.323480 (0.212800) | 0.004561 / 0.007986 (-0.003424) | 0.006623 / 0.004328 (0.002294) | 0.102392 / 0.004250 (0.098142) | 0.051736 / 0.037052 (0.014684) | 0.479113 / 0.258489 (0.220624) | 0.535088 / 0.293841 (0.241247) | 0.041805 / 0.128546 (-0.086741) | 0.014031 / 0.075646 (-0.061615) | 0.115795 / 0.419271 (-0.303477) | 0.057913 / 0.043533 (0.014380) | 0.435847 / 0.255139 (0.180708) | 0.524831 / 0.283200 (0.241632) | 0.119419 / 0.141683 (-0.022263) | 1.835577 / 1.452155 (0.383423) | 1.936990 / 1.492716 (0.444273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288422 / 0.018006 (0.270416) | 0.569776 / 0.000490 (0.569287) | 0.005652 / 0.000200 (0.005452) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034632 / 0.037411 (-0.002779) | 0.136217 / 0.014526 (0.121691) | 0.139468 / 0.176557 (-0.037089) | 0.206804 / 0.737135 (-0.530331) | 0.148733 / 0.296338 (-0.147606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.667728 / 0.215209 (0.452518) | 6.548972 / 2.077655 (4.471317) | 3.051537 / 1.504120 (1.547417) | 2.581173 / 1.541195 (1.039978) | 2.653443 / 1.468490 (1.184953) | 0.906606 / 4.584777 (-3.678171) | 5.704384 / 3.745712 (1.958672) | 2.848618 / 5.269862 (-2.421244) | 1.821402 / 4.565676 (-2.744274) | 0.118018 / 0.424275 (-0.306257) | 0.014821 / 0.007607 (0.007214) | 0.821967 / 0.226044 (0.595923) | 8.165818 / 2.268929 (5.896889) | 3.744509 / 55.444624 (-51.700116) | 2.901097 / 6.876477 (-3.975380) | 3.018068 / 2.142072 (0.875996) | 1.106155 / 4.805227 (-3.699072) | 0.263118 / 6.500664 (-6.237546) | 0.088508 / 0.075469 (0.013039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.725860 / 1.841788 (-0.115928) | 19.411246 / 8.074308 (11.336938) | 20.807499 / 10.191392 (10.616107) | 0.238417 / 0.680424 (-0.442007) | 0.026550 / 0.534201 (-0.507651) | 0.500715 / 0.579283 (-0.078568) | 0.615547 / 0.434364 (0.181183) | 0.614361 / 0.540337 (0.074023) | 0.720365 / 1.386936 (-0.666571) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ae2e77f8344cdcc1c4c876f67936bec33087b19a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004079 / 0.011008 (-0.006930) | 0.100555 / 0.038508 (0.062046) | 0.037318 / 0.023109 (0.014209) | 0.320050 / 0.275898 (0.044152) | 0.358860 / 0.323480 (0.035380) | 0.003828 / 0.007986 (-0.004158) | 0.003215 / 0.004328 (-0.001113) | 0.076577 / 0.004250 (0.072326) | 0.048080 / 0.037052 (0.011028) | 0.324759 / 0.258489 (0.066270) | 0.361862 / 0.293841 (0.068021) | 0.030759 / 0.128546 (-0.097787) | 0.008998 / 0.075646 (-0.066648) | 0.329105 / 0.419271 (-0.090167) | 0.051407 / 0.043533 (0.007875) | 0.311067 / 0.255139 (0.055928) | 0.334401 / 0.283200 (0.051201) | 0.098307 / 0.141683 (-0.043376) | 1.500931 / 1.452155 (0.048776) | 1.574646 / 1.492716 (0.081930) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219080 / 0.018006 (0.201073) | 0.447117 / 0.000490 (0.446627) | 0.009091 / 0.000200 (0.008891) | 0.000396 / 0.000054 (0.000341) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026048 / 0.037411 (-0.011363) | 0.112714 / 0.014526 (0.098188) | 0.116426 / 0.176557 (-0.060131) | 0.172187 / 0.737135 (-0.564948) | 0.121707 / 0.296338 (-0.174632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.358898 / 0.215209 (0.143689) | 3.589212 / 2.077655 (1.511557) | 1.677927 / 1.504120 (0.173807) | 1.515861 / 1.541195 (-0.025334) | 1.598479 / 1.468490 
(0.129989) | 0.478265 / 4.584777 (-4.106512) | 3.834982 / 3.745712 (0.089270) | 1.933815 / 5.269862 (-3.336047) | 1.122769 / 4.565676 (-3.442908) | 0.066984 / 0.424275 (-0.357291) | 0.011276 / 0.007607 (0.003669) | 0.512530 / 0.226044 (0.286486) | 5.112667 / 2.268929 (2.843739) | 2.266336 / 55.444624 (-53.178288) | 1.929671 / 6.876477 (-4.946806) | 2.127231 / 2.142072 (-0.014842) | 0.671307 / 4.805227 (-4.133920) | 0.143919 / 6.500664 (-6.356745) | 0.066086 / 0.075469 (-0.009383) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208767 / 1.841788 (-0.633021) | 15.008415 / 8.074308 (6.934106) | 14.085442 / 10.191392 (3.894050) | 0.184164 / 0.680424 (-0.496260) | 0.017619 / 0.534201 (-0.516582) | 0.394443 / 0.579283 (-0.184840) | 0.457653 / 0.434364 (0.023289) | 0.473169 / 0.540337 (-0.067169) | 0.571332 / 1.386936 (-0.815604) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007009 / 0.011353 (-0.004344) | 0.004330 / 0.011008 (-0.006678) | 0.077462 / 0.038508 (0.038954) | 0.034780 / 0.023109 (0.011671) | 0.395573 / 0.275898 (0.119675) | 0.425444 / 0.323480 (0.101964) | 0.004119 / 0.007986 (-0.003866) | 0.003597 / 0.004328 (-0.000731) | 0.075209 / 0.004250 (0.070958) | 0.050871 / 0.037052 (0.013819) | 0.402990 / 0.258489 (0.144500) | 0.445334 / 0.293841 (0.151493) | 0.032492 / 0.128546 (-0.096054) | 0.009066 / 0.075646 (-0.066581) | 0.083073 / 0.419271 (-0.336198) | 0.051661 / 0.043533 (0.008128) | 0.395207 / 0.255139 (0.140068) | 0.409556 / 0.283200 (0.126356) | 0.106035 / 0.141683 (-0.035648) | 1.506255 / 1.452155 (0.054101) | 1.598724 / 1.492716 (0.106008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194733 / 0.018006 (0.176727) | 0.444920 / 0.000490 (0.444431) | 0.002402 / 0.000200 (0.002202) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030464 / 0.037411 (-0.006947) | 0.119153 / 0.014526 (0.104627) | 0.126081 / 0.176557 (-0.050476) | 0.179692 / 0.737135 (-0.557444) | 0.131834 / 0.296338 (-0.164504) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440153 / 0.215209 (0.224944) | 4.397504 / 2.077655 (2.319850) | 2.138320 / 1.504120 (0.634200) | 1.950596 / 1.541195 (0.409402) | 2.079792 / 1.468490 (0.611302) | 0.537606 / 4.584777 (-4.047171) | 3.689420 / 3.745712 (-0.056292) | 2.960732 / 5.269862 (-2.309129) | 1.585652 / 4.565676 (-2.980024) | 0.066102 / 0.424275 (-0.358173) | 0.011429 / 0.007607 (0.003821) | 0.537011 / 0.226044 (0.310967) | 5.342171 / 2.268929 (3.073242) | 2.624446 / 55.444624 (-52.820179) | 2.313311 / 6.876477 (-4.563166) | 2.389166 / 2.142072 (0.247094) | 0.657547 / 4.805227 (-4.147681) | 0.141640 / 6.500664 (-6.359025) | 0.066102 / 0.075469 (-0.009367) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.130471 / 1.841788 (-0.711317) | 14.824792 / 8.074308 (6.750484) | 13.436463 / 10.191392 (3.245071) | 0.155688 / 0.680424 (-0.524736) | 0.015811 / 0.534201 (-0.518390) | 0.355623 / 0.579283 (-0.223660) | 0.450604 / 0.434364 (0.016241) | 0.472542 / 0.540337 (-0.067796) | 0.563584 / 1.386936 (-0.823352) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#963ff6de6eae80a6de4aabf0092eb3dfbe43096e \"CML watermark\")\n"
] | 2023-05-12T16:48:49 | 2023-06-13T16:04:05 | 2023-06-13T15:57:05 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5852",
"html_url": "https://github.com/huggingface/datasets/pull/5852",
"diff_url": "https://github.com/huggingface/datasets/pull/5852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5852.patch",
"merged_at": "2023-06-13T15:57:05"
} | Used the TorchFormatter to get torch tensors from an iterable dataset with its format set to "torch".
It uses the data from Arrow if possible, otherwise applies recursive_tensorize.
When set back to format_type=None, cast_to_python_objects is used.
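For illustration, a minimal sketch of what this enables (the CSV file path is just a placeholder):
```python
from datasets import load_dataset

# any streaming dataset works; "data.csv" is a placeholder file
ds = load_dataset("csv", data_files="data.csv", split="train", streaming=True)
ds = ds.with_format("torch")
example = next(iter(ds))  # numeric columns now come back as torch tensors
```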
requires https://github.com/huggingface/datasets/pull/5821
close https://github.com/huggingface/datasets/issues/5793 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5852/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5850/comments | https://api.github.com/repos/huggingface/datasets/issues/5850/events | https://github.com/huggingface/datasets/pull/5850 | 1,707,678,911 | PR_kwDODunzps5QZALv | 5,850 | Make packaged builders skip non-supported file formats | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5850). All of your documentation changes will be reflected on that endpoint.",
"Good idea. @mariosasko!!!\r\n\r\nPlease note that before this PR, the files are not evenly distributed for archives: `_generate_examples` gets a list of iterators, one for each archive (uncompressed to a directory).",
"This change could create silent problems when loading files with extensions that are not listed here. For example\r\n\r\n```python\r\nload_dataset(\"text\", data_files=[\"20230515.log\"])\r\n```\r\n\r\nwouldn't even log anything to say that the file was ignored.\r\n\r\nMaybe it's possible to do this at data files patterns resolution ?\r\n\r\ne.g. in get_data_patterns_in_dataset_repository / get_data_patterns_locally we could return patterns that include the most common extension",
"@lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nThe solution is to add the .log extension (besides the .txt) as supported by text, independently of where we perform the skip (at pattern resolution or in the builder itself).\r\n\r\nAdditionally, at the time we call for pattern resolution, we do not know the builder class yet, so that we cannot pass specific file extensions. First we call data files pattern resolution, and afterwards we call `infer_module_for_data_files` and then know the builder class.",
"> @lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nNo I simply think it's a bad breaking change to not support\r\n\r\n```python\r\nload_dataset(\"<builder_name>\", data_files=[\"path/to/file_with_unknown_or_no_extension\"])\r\n# or\r\nload_dataset(\"<builder_name>\", data_files=[\"https://url.to/file_with_unknown_or_no_extension\"])\r\n```\r\n\r\nIdk if it's the easiest solution, but maybe it's possible to do the change only when inferring the patterns of dataset repositories. This should avoid this breaking change.\r\n\r\nFor example it could do something like that in `get_data_patterns_locally`\r\n\r\n```python\r\n Input:\r\n\r\n my_dataset_repository/\r\n βββ README.md\r\n βββ banner.png\r\n βββ data0.csv\r\n βββ data1.csv\r\n βββ data2.csv\r\n\r\n Output:\r\n\r\n {\"train\": [\"**.csv\"]}\r\n```\r\n\r\ninstead of \r\n\r\n```python\r\n Output:\r\n\r\n {\"train\": [\"**\"]}\r\n```",
"I agree with @lhoestq - it should still be possible to request parsing a file with a specific builder even if the file's extension is \"invalid\" for the builder, and only ignore non-supported file formats when inferring the patterns.",
"Therefore, if I understand correctly, what you suggest is:\r\n- if the user passes a packaged builder to `load_dataset` (e.g. `load_dataset(\"csv\",...`), then the *passed* `data_files` should not be filtered to remove unsupported extensions. No breaking change in this case\r\n- if the user passes a no-script repo/folder to `load_dataset` (e.g. `load_dataset(\"my_dataset_repository\",...`), then the *inferred* data files should be filtered to remove the extensions that are not supported by the inferred module name builder\r\n - if the user passes `data_files` as well, then I guess these should not be filtered, to avoid any breaking change as in the first case above",
"Yes that would be ideal imo !",
"I think this now fulfills all the requirements.",
"I find it a bit confusing to still be able to pass data_files that are going to be silently ignored based on the value of `only_supported_extensions`. My suggestion was to have the right data files pattern, not to filter a posteriori (sorry if my last message was confusing).\r\n\r\nHaving the right data files pattern would also allow users to inspect what's actually being loaded with\r\n```\r\nload_dataset_builder(...).config.data_files\r\n```\r\nand it would list exactly what data files are used."
] | 2023-05-12T13:52:34 | 2023-06-07T12:26:38 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5850",
"html_url": "https://github.com/huggingface/datasets/pull/5850",
"diff_url": "https://github.com/huggingface/datasets/pull/5850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5850.patch",
"merged_at": null
} | This PR makes packaged builders skip non-supported file formats:
- The Csv builder skips non-CSV files
- The other packaged builders analogously skip their unsupported files (see the sketch below)
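A rough sketch of the kind of filtering this implies (the helper name and extension list are hypothetical, not the actual implementation):
```python
CSV_EXTENSIONS = [".csv"]  # hypothetical; the real list lives in the builder

def filter_supported_files(files):
    # keep only the files whose extension the builder supports
    return [f for f in files if any(f.lower().endswith(ext) for ext in CSV_EXTENSIONS)]
```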
Fix #5849. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5850/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5849/comments | https://api.github.com/repos/huggingface/datasets/issues/5849/events | https://github.com/huggingface/datasets/issues/5849 | 1,707,551,511 | I_kwDODunzps5lxysX | 5,849 | CSV datasets should only read the CSV data files in the repo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-12T12:29:53 | 2023-06-22T14:16:27 | 2023-06-22T14:16:27 | MEMBER | null | null | null | When a no-script dataset has many CSV files and a JPG file, the library infers to use the Csv builder, but tries to read as CSV all files in the repo, also the JPG file.
I think the Csv builder should filter out non-CSV files when reading.
An analogue solution should be implemented for other packaged builders.
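A minimal sketch of the failure mode (repository name and layout are illustrative):
```python
from datasets import load_dataset

# suppose the repo contains data0.csv, data1.csv and banner.jpg
ds = load_dataset("username/mixed-files-repo")  # module inference picks the Csv builder
# without filtering, the Csv builder also tries to parse banner.jpg and errors out
```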
Related to:
- https://huggingface.co/datasets/abidlabs/img2text/discussions/1
- https://github.com/gradio-app/gradio/pull/3973#issuecomment-1545409061
CC: @abidlabs @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5849/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5849/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5848/comments | https://api.github.com/repos/huggingface/datasets/issues/5848/events | https://github.com/huggingface/datasets/pull/5848 | 1,707,506,734 | PR_kwDODunzps5QYa1B | 5,848 | Add `accelerate` as metric's test dependency to fix CI error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007565 / 0.011353 (-0.003788) | 0.005361 / 0.011008 (-0.005647) | 0.098963 / 0.038508 (0.060455) | 0.034271 / 0.023109 (0.011162) | 0.323421 / 0.275898 (0.047523) | 0.348495 / 0.323480 (0.025015) | 0.006244 / 0.007986 (-0.001741) | 0.004215 / 0.004328 (-0.000113) | 0.073614 / 0.004250 (0.069364) | 0.049334 / 0.037052 (0.012282) | 0.315277 / 0.258489 (0.056788) | 0.354325 / 0.293841 (0.060484) | 0.035001 / 0.128546 (-0.093545) | 0.012149 / 0.075646 (-0.063497) | 0.335614 / 0.419271 (-0.083657) | 0.050532 / 0.043533 (0.006999) | 0.308500 / 0.255139 (0.053361) | 0.324620 / 0.283200 (0.041421) | 0.110241 / 0.141683 (-0.031442) | 1.443923 / 1.452155 (-0.008232) | 1.559289 / 1.492716 (0.066573) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207629 / 0.018006 (0.189622) | 0.433251 / 0.000490 (0.432762) | 0.003021 / 0.000200 (0.002821) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028312 / 0.037411 (-0.009100) | 0.111829 / 0.014526 (0.097303) | 0.127099 / 0.176557 (-0.049458) | 0.184702 / 0.737135 (-0.552433) | 0.125062 / 0.296338 (-0.171277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399451 / 0.215209 (0.184242) | 3.966528 / 2.077655 (1.888874) | 1.826004 / 1.504120 (0.321884) | 1.669547 / 1.541195 (0.128353) | 1.751584 / 1.468490 
(0.283094) | 0.688308 / 4.584777 (-3.896469) | 3.813275 / 3.745712 (0.067562) | 3.181554 / 5.269862 (-2.088307) | 1.750566 / 4.565676 (-2.815111) | 0.085038 / 0.424275 (-0.339237) | 0.011992 / 0.007607 (0.004385) | 0.502374 / 0.226044 (0.276330) | 4.970614 / 2.268929 (2.701686) | 2.309617 / 55.444624 (-53.135007) | 2.012427 / 6.876477 (-4.864050) | 2.156348 / 2.142072 (0.014276) | 0.834415 / 4.805227 (-3.970812) | 0.167912 / 6.500664 (-6.332752) | 0.065711 / 0.075469 (-0.009758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223132 / 1.841788 (-0.618656) | 15.126753 / 8.074308 (7.052445) | 14.829184 / 10.191392 (4.637792) | 0.142582 / 0.680424 (-0.537842) | 0.017483 / 0.534201 (-0.516718) | 0.429768 / 0.579283 (-0.149516) | 0.422745 / 0.434364 (-0.011619) | 0.508813 / 0.540337 (-0.031525) | 0.618716 / 1.386936 (-0.768220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005433 / 0.011008 (-0.005576) | 0.076223 / 0.038508 (0.037715) | 0.036334 / 0.023109 (0.013225) | 0.375339 / 0.275898 (0.099441) | 0.413674 / 0.323480 (0.090194) | 0.006207 / 0.007986 (-0.001778) | 0.004085 / 0.004328 (-0.000244) | 0.076154 / 0.004250 (0.071904) | 0.050324 / 0.037052 (0.013271) | 0.382919 / 0.258489 (0.124429) | 0.442508 / 0.293841 (0.148667) | 0.035951 / 0.128546 (-0.092595) | 0.012067 / 0.075646 (-0.063580) | 0.087649 / 0.419271 (-0.331623) | 0.048786 / 0.043533 (0.005253) | 0.373541 / 0.255139 (0.118402) | 0.400437 / 0.283200 (0.117237) | 0.102622 / 0.141683 (-0.039061) | 1.472443 / 1.452155 (0.020288) | 1.580178 / 1.492716 (0.087462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222105 / 0.018006 (0.204098) | 0.445465 / 0.000490 (0.444975) | 0.003671 / 0.000200 (0.003471) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030808 / 0.037411 (-0.006603) | 0.116687 / 0.014526 (0.102161) | 0.124972 / 0.176557 (-0.051584) | 0.175621 / 0.737135 (-0.561514) | 0.129029 / 0.296338 (-0.167310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434627 / 0.215209 (0.219418) | 4.330268 / 2.077655 (2.252613) | 2.140266 / 1.504120 (0.636146) | 1.960705 / 1.541195 (0.419510) | 2.035949 / 1.468490 (0.567459) | 0.696830 / 4.584777 (-3.887947) | 3.790468 / 3.745712 (0.044756) | 3.194112 / 5.269862 (-2.075750) | 1.577728 / 4.565676 (-2.987948) | 0.085445 / 0.424275 (-0.338830) | 0.012207 / 0.007607 (0.004600) | 0.555199 / 0.226044 (0.329154) | 5.551539 / 2.268929 (3.282610) | 2.630917 / 55.444624 (-52.813707) | 2.383362 / 6.876477 (-4.493114) | 2.476301 / 2.142072 (0.334229) | 0.845773 / 4.805227 (-3.959455) | 0.169229 / 6.500664 (-6.331435) | 0.066064 / 0.075469 (-0.009405) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277543 / 1.841788 (-0.564245) | 15.775637 / 8.074308 (7.701329) | 13.528588 / 10.191392 (3.337196) | 0.167428 / 0.680424 (-0.512996) | 0.017581 / 0.534201 (-0.516620) | 0.454472 / 0.579283 (-0.124811) | 0.427987 / 0.434364 (-0.006377) | 0.551512 / 0.540337 (0.011175) | 0.650811 / 1.386936 (-0.736125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#96a6f5f526cc90330df597ae0097274742d5b84f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001552) | 0.006443 / 0.011008 (-0.004565) | 0.144137 / 0.038508 (0.105629) | 0.037493 / 0.023109 (0.014383) | 0.482306 / 0.275898 (0.206408) | 0.467625 / 0.323480 (0.144145) | 0.006812 / 0.007986 (-0.001174) | 0.004810 / 0.004328 (0.000481) | 0.109047 / 0.004250 (0.104796) | 0.047169 / 0.037052 (0.010116) | 0.451253 / 0.258489 (0.192764) | 0.511339 / 0.293841 (0.217498) | 0.055583 / 0.128546 (-0.072963) | 0.021810 / 0.075646 (-0.053836) | 0.426522 / 0.419271 (0.007250) | 0.070282 / 0.043533 (0.026749) | 0.469631 / 0.255139 (0.214492) | 0.484951 / 0.283200 (0.201751) | 0.117370 / 0.141683 (-0.024313) | 1.809917 / 1.452155 (0.357763) | 1.882659 / 1.492716 (0.389943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223843 / 0.018006 (0.205837) | 0.549216 / 0.000490 (0.548726) | 0.007120 / 0.000200 (0.006920) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033057 / 0.037411 (-0.004354) | 0.128242 / 0.014526 (0.113716) | 0.140906 / 0.176557 (-0.035650) | 0.213122 / 0.737135 (-0.524013) | 0.148115 / 0.296338 (-0.148224) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638712 / 0.215209 (0.423503) | 6.383684 / 2.077655 (4.306029) | 2.477020 / 1.504120 (0.972900) | 2.129190 / 1.541195 (0.587996) | 2.230503 / 1.468490 
(0.762013) | 1.367167 / 4.584777 (-3.217610) | 5.570586 / 3.745712 (1.824873) | 5.462857 / 5.269862 (0.192996) | 2.990604 / 4.565676 (-1.575073) | 0.146543 / 0.424275 (-0.277732) | 0.016060 / 0.007607 (0.008453) | 0.812691 / 0.226044 (0.586646) | 7.928041 / 2.268929 (5.659112) | 3.329494 / 55.444624 (-52.115130) | 2.523452 / 6.876477 (-4.353025) | 2.672374 / 2.142072 (0.530302) | 1.598554 / 4.805227 (-3.206673) | 0.284727 / 6.500664 (-6.215937) | 0.080359 / 0.075469 (0.004889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501112 / 1.841788 (-0.340675) | 17.553644 / 8.074308 (9.479335) | 22.704062 / 10.191392 (12.512670) | 0.225575 / 0.680424 (-0.454849) | 0.026531 / 0.534201 (-0.507670) | 0.520129 / 0.579283 (-0.059154) | 0.626220 / 0.434364 (0.191856) | 0.631740 / 0.540337 (0.091403) | 0.750611 / 1.386936 (-0.636325) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009866 / 0.011353 (-0.001487) | 0.005733 / 0.011008 (-0.005275) | 0.111529 / 0.038508 (0.073021) | 0.042001 / 0.023109 (0.018891) | 0.458578 / 0.275898 (0.182680) | 0.507796 / 0.323480 (0.184316) | 0.006547 / 0.007986 (-0.001438) | 0.005611 / 0.004328 (0.001282) | 0.115321 / 0.004250 (0.111070) | 0.048741 / 0.037052 (0.011689) | 0.447611 / 0.258489 (0.189122) | 0.531830 / 0.293841 (0.237989) | 0.052176 / 0.128546 (-0.076370) | 0.022431 / 0.075646 (-0.053216) | 0.120709 / 0.419271 (-0.298562) | 0.067301 / 0.043533 (0.023769) | 0.460577 / 0.255139 (0.205438) | 0.497805 / 0.283200 (0.214605) | 0.121830 / 0.141683 (-0.019853) | 1.876436 / 1.452155 (0.424281) | 1.983491 / 1.492716 (0.490775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230982 / 0.018006 (0.212976) | 0.540643 / 0.000490 (0.540153) | 0.004646 / 0.000200 (0.004446) | 0.000131 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034230 / 0.037411 (-0.003181) | 0.136454 / 0.014526 (0.121928) | 0.143370 / 0.176557 (-0.033187) | 0.206752 / 0.737135 (-0.530384) | 0.148722 / 0.296338 (-0.147617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.704667 / 0.215209 (0.489458) | 7.112079 / 2.077655 (5.034424) | 3.083916 / 1.504120 (1.579797) | 2.606388 / 1.541195 (1.065193) | 2.738505 / 1.468490 (1.270015) | 1.314897 / 4.584777 (-3.269880) | 5.764442 / 3.745712 (2.018729) | 3.491890 / 5.269862 (-1.777972) | 2.299983 / 4.565676 (-2.265693) | 0.169655 / 0.424275 (-0.254620) | 0.015251 / 0.007607 (0.007643) | 0.977230 / 0.226044 (0.751186) | 9.697773 / 2.268929 (7.428844) | 3.826928 / 55.444624 (-51.617697) | 3.108238 / 6.876477 (-3.768239) | 3.103242 / 2.142072 (0.961169) | 1.586645 / 4.805227 (-3.218582) | 0.287181 / 6.500664 (-6.213483) | 0.107332 / 0.075469 (0.031863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712710 / 1.841788 (-0.129077) | 19.169403 / 8.074308 (11.095095) | 21.777301 / 10.191392 (11.585909) | 0.216918 / 0.680424 (-0.463506) | 0.026551 / 0.534201 (-0.507650) | 0.570383 / 0.579283 (-0.008900) | 0.643885 / 0.434364 (0.209521) | 0.673906 / 0.540337 (0.133568) | 0.824573 / 1.386936 (-0.562363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ead18b6921c9576a3078d2fb685c38f1e1a4b8a \"CML watermark\")\n"
] | 2023-05-12T12:01:01 | 2023-05-12T13:48:47 | 2023-05-12T13:39:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5848",
"html_url": "https://github.com/huggingface/datasets/pull/5848",
"diff_url": "https://github.com/huggingface/datasets/pull/5848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5848.patch",
"merged_at": "2023-05-12T13:39:06"
The `frugalscore` metric uses Transformers' Trainer, which (as of a recent release) requires `accelerate`.
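Presumably the fix boils down to something like this in the test requirements (the exact variable name in setup.py is assumed):
```python
# setup.py (sketch; variable name assumed)
TESTS_REQUIRE = [
    # ... existing test dependencies ...
    "accelerate",  # Transformers' Trainer now needs accelerate
]
```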
Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5848/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5847/comments | https://api.github.com/repos/huggingface/datasets/issues/5847/events | https://github.com/huggingface/datasets/issues/5847 | 1,706,616,634 | I_kwDODunzps5luOc6 | 5,847 | Streaming IterableDataset not working with translation pipeline | {
"login": "jlquinn",
"id": 826841,
"node_id": "MDQ6VXNlcjgyNjg0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/826841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlquinn",
"html_url": "https://github.com/jlquinn",
"followers_url": "https://api.github.com/users/jlquinn/followers",
"following_url": "https://api.github.com/users/jlquinn/following{/other_user}",
"gists_url": "https://api.github.com/users/jlquinn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlquinn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlquinn/subscriptions",
"organizations_url": "https://api.github.com/users/jlquinn/orgs",
"repos_url": "https://api.github.com/users/jlquinn/repos",
"events_url": "https://api.github.com/users/jlquinn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlquinn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I wasn't sure to file this against transformers or datasets.",
"[`KeyDataset`](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/pipelines/pt_utils.py#L296) doesn't support iterable datasets, so you either need to implement a version that does (and also indexing nested (translation) fields):\r\n\r\n```python\r\nfrom torch.utils.data import Dataset, IterableDataset\r\n\r\ndef build_key_fetcher(key: str):\r\n def _key_fetcher(item):\r\n for sub_key in key.split(\".\"):\r\n item = item[sub_key]\r\n return item\r\n return _key_fetcher\r\n\r\nclass KeyDataset(Dataset):\r\n def __new__(cls, dataset: Dataset, key: str):\r\n cls = _KeyIterableDataset if isinstance(dataset, IterableDataset) else _KeyMapDataset\r\n self = object.__new__(cls)\r\n self.dataset = dataset\r\n self.key = key\r\n self._key_fetcher = build_key_fetcher(key)\r\n return self\r\n\r\nclass _KeyMapDataset(KeyDataset):\r\n def __getitem__(self, i):\r\n return self._key_fetcher(self.dataset[i])\r\n \r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n\r\nclass _KeyIterableDataset(KeyDataset):\r\n def __iter__(self):\r\n for ex in self.dataset:\r\n yield self._key_fetcher(ex)\r\n\r\nks = KeyDataset(ds, \"translation.en\")\r\n```\r\n\r\nor use `IterableDataset`'s `map`:\r\n```python\r\ndef fetch_en_translation(ex):\r\n return {\"en\": ex[\"translation\"][\"en\"]}\r\nks = ds.map(fetch_en_translation, remove_columns=ds.column_names) \r\n```\r\n\r\ncc @sgugger: Perhaps the `KeyDataset` + PyTorch `IterableDataset` case should be supported by Transformers",
"@mariosasko The map snippet didn't quite work, but gave me enough of a clue to get it working. The following snippet does work:\r\n```\r\ndef en_translation(x):\r\n return {\"en\":x['translation']['en']}\r\nks = ds.map(en_translation, remove_columns=['translation'])\r\ntest=[]\r\nfor x in iter(ks):\r\n test.append(x['en'])\r\nxx= mt(test)\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nI tried just returning `x['translation']['en`]` in the helper function instead of the dict, but that didn't give me an iterator over strings that pipeline would work with either.\r\n\r\n\r\nThe snippet as is gives the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/pdb.py\", line 1704, in main\r\n pdb._runscript(mainpyfile)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/pdb.py\", line 1573, in _runscript\r\n self.run(statement)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/bdb.py\", line 580, in run\r\n exec(cmd, globals, locals)\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/jlquinn/models/hf/ende.t5.pipe.py\", line 1, in <module>\r\n from transformers import pipeline\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 335, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 138, in __call__\r\n result = super().__call__(*args, **kwargs)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1027, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1033, in run_single\r\n model_inputs = self.preprocess(inputs, **preprocess_params)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 287, in preprocess\r\n return super()._parse_and_tokenize(*args, truncation=truncation)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 100, in _parse_and_tokenize\r\n raise ValueError(\r\nValueError: `args[0]`: <datasets.iterable_dataset.IterableDataset object at 0x7f5fd38ef1c0> have the wrong format. The should be either of type `str` or type `list`\r\nUncaught exception. Entering post mortem debugging\r\nRunning 'cont' or 'step' will restart the program\r\n```\r\n",
"So perhaps there's no bug exactly, but I would love to see two things: 1) improve the documentation to better understand what's really getting returned. 2) update the example provided of using transformer pipeline with a dataset to include the oddball case that translation appears to be.",
"cc @Narsil ",
"Hi,\r\n\r\nfor the original snippet, the issue is that `streaming` datasets are not countable (they have no len) and therefore `KeyDataset` cannot work with them ( KeyDataset is a dataset and therefore requires a length).\r\n\r\nI modified slightly the original snippet to make it work:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(path=\"wmt14\", name=\"fr-en\", split=\"test\", streaming=True)\r\nbs = 1\r\nmt = pipeline(\r\n \"translation_en_to_fr\", model=\"hf-internal-testing/tiny-random-T5ForConditionalGeneration\", batch_size=bs\r\n)\r\n\r\n\r\ndef ks(ds):\r\n for item in ds:\r\n yield item[\"translation\"][\"en\"]\r\n\r\n\r\n# print(f\"{ks}\")\r\nxx = mt(ks(ds))\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nThis is what the first example in the docs suggests to use (as it's the most flexible): https://huggingface.co/docs/transformers/v4.29.1/en/pipeline_tutorial#using-pipelines-on-a-dataset\r\n\r\n`KeyDataset` really exists only to get a `sized` dataset to work nicer with `tqdm` for instance.\r\n\r\n@sgugger should we update the docs to remove `KeyDataset` entirely ? (We can add a note to pass manually the length of the data to tqdm so that the progress bar option can still be easy to use ?)\r\n",
"Maybe moving `KeyDataset` later on in the guide and specify it's mostly for streaming then? Or is it also necessary for batch_size>1 (which is what the current doc implies)?",
"Hmm\r\n\r\nIterator (`yield`) :\r\n- Not countable\r\n- Super flexible\r\n- Cannot use `num_workers>1` (threading requires indexing at the correct location, iterators require to iterate in order,so each thread would iterate over the full thing being genuinely a bad idea)\r\n- Can batch\r\n- tqdm doesn't show a nice progress bar (it has no total)\r\n\r\nKeyDataset (Or any PyTorch like Dataset returning the correct object for the pipeline):\r\n- Countable\r\n- Less flexible (not applicable to datasets with streaming), can only work on single keys. But should be easy to read and write your own (like @mariosasko did)\r\n- Works with `num_workers > 1` (Every worker can fetch exactly what's needed)\r\n- Can batch \r\n- tqdm shows a nice progress bar\r\n\r\nIn the docs, if we update all the examples to use iterators, and include an example with\r\n\r\n```\r\nfor item in tqdm.tqdm(pipe(iterator(), total=len(dataset))))\r\n```\r\n\r\nWe can save the biggest feature that doesn't work out of the box with iterators which is the tqdm progress bar.\r\n\r\n`num_workers>1` we can mention it, but it tends to be an issues only on CPU intensive loads, like image (and maybe audio)\r\n"
] | 2023-05-11T21:52:38 | 2023-05-16T15:59:55 | null | NONE | null | null | null | ### Describe the bug
I'm trying to use a streaming dataset for translation inference to avoid downloading the training data.
I'm using a pipeline with a dataset, following the guidance in the tutorial.
Instead, I get an exception that `IterableDataset` has no `len()`.
### Steps to reproduce the bug
CODE:
```
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
ds = load_dataset(path="wmt14", name="fr-en", split="test", streaming=True)
bs = 1
mt = pipeline("translation_en_to_fr", model="t5-base", batch_size=bs)
#print(mt("hello")) THIS WORKS
ks = KeyDataset(ds, "translation")
print(f"{ks}")
xx = mt(ks)
for x in xx:
print(x)
```
RUN:
```
(watnlp) [jlquinn@bertdev01 hf]$ python ende.t5.pipe.py
2023-05-11 16:48:08.817572: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-11 16:48:08.821388: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-05-11 16:48:08.821407: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
<transformers.pipelines.pt_utils.KeyDataset object at 0x7f61ed5da9d0>
Traceback (most recent call last):
File "/home/jlquinn/models/hf/ende.t5.pipe.py", line 11, in <module>
for x in xx:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
index = self._next_index() # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
for idx in self.sampler:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 76, in __iter__
return iter(range(len(self.data_source)))
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 13, in __len__
return len(self.dataset)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 289, in __len__
return len(self.dataset)
TypeError: object of type 'IterableDataset' has no len()
```
### Expected behavior
I'm expecting French translations of the English test set to be printed.
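As a point of reference, here is a minimal workaround sketch (my own, not an official fix): feed the pipeline a plain generator instead of `KeyDataset`, so nothing ever calls `len()` on the `IterableDataset`. The `texts` helper is mine, and the `"translation"` / `"en"` key access is an assumption based on the wmt14 schema.
```python
# Workaround sketch: stream examples through the pipeline with a
# generator, since transformers pipelines accept unsized iterables.
def texts():
    for example in ds:  # ds is the streaming dataset loaded above
        yield example["translation"]["en"]

for out in mt(texts(), batch_size=bs):
    print(out)
```
The trade-off, as noted in the comments above, is that tqdm cannot show a total and `num_workers > 1` is unavailable with an iterator.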
### Environment info
Run on CPU with no GPU.
RHEL 8.7 x86_64
python 3.9.0
transformers 4.17.0
datasets 2.0.0
tokenizers 0.12.1
```
(watnlp) [jlquinn@bertdev01 hf]$ datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.0
- PyArrow version: 8.0.0
- Pandas version: 1.4.4
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5847/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5851/comments | https://api.github.com/repos/huggingface/datasets/issues/5851/events | https://github.com/huggingface/datasets/issues/5851 | 1,707,907,048 | I_kwDODunzps5lzJfo | 5,851 | Error message not clear in interleaving datasets | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-11T20:52:13 | 2023-05-23T10:32:59 | 2023-05-23T10:32:59 | NONE | null | null | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to interleave the 'sciq', 'wiki', and 'pile-enron' datasets. I think the error I made was that I loaded the train split of one dataset but not of the others; in any case, the error message is not very helpful:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/home/suryahari/Vornoi/save_model_ops.py](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/Vornoi/save_model_ops.py) in line 3
[41](file:///home/suryahari/Vornoi/save_model_ops.py?line=40) # %%
----> [43](file:///home/suryahari/Vornoi/save_model_ops.py?line=42) dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted")
File [~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124), in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy)
[122](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=121) for dataset in datasets[1:]:
[123](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=122) if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)):
--> [124](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=123) raise ValueError(
[125](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=124) f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects."
[126](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=125) )
[127](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=126) if stopping_strategy not in ["first_exhausted", "all_exhausted"]:
[128](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=127) raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")
ValueError: Unable to interleave a with a . Expected a list of Dataset objects or a list of IterableDataset objects.
```
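For what it's worth, here is a minimal sketch of the kind of mismatch described above; reusing 'sciq' for both entries is purely illustrative:
```python
from datasets import load_dataset, interleave_datasets

parts = [
    load_dataset("sciq", split="train"),  # a Dataset
    load_dataset("sciq"),                 # no split given -> a DatasetDict, not a Dataset
]

# This raises the ValueError above, because the second entry is not a Dataset:
# interleave_datasets(parts, stopping_strategy="all_exhausted")

# Selecting a split for every entry avoids the error:
parts[1] = parts[1]["train"]
mixed = interleave_datasets(parts, stopping_strategy="all_exhausted")
```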
### Expected behavior
The error message should hopefully be clearer. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5851/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5846/comments | https://api.github.com/repos/huggingface/datasets/issues/5846/events | https://github.com/huggingface/datasets/issues/5846 | 1,706,289,290 | I_kwDODunzps5ls-iK | 5,846 | load_dataset('bigcode/the-stack-dedup', streaming=True) very slow! | {
"login": "tbenthompson",
"id": 4241811,
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tbenthompson",
"html_url": "https://github.com/tbenthompson",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This is due to the slow resolution of the data files: https://github.com/huggingface/datasets/issues/5537.\r\n\r\nWe plan to switch to `huggingface_hub`'s `HfFileSystem` soon to make the resolution faster (will be up to 20x faster once we merge https://github.com/huggingface/huggingface_hub/pull/1443)\r\n\r\n",
"You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.",
"> You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n\r\nThat's unrelated to the problem discussed in this issue. ",
"> > You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n> \r\n> That's unrelated to the problem discussed in this issue.\r\n\r\nSorry, I misunderstood it."
] | 2023-05-11T17:58:57 | 2023-05-16T03:23:46 | null | NONE | null | null | null | ### Describe the bug
Running
```
import datasets
ds = datasets.load_dataset('bigcode/the-stack-dedup', streaming=True)
```
takes about 2.5 minutes!
I would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.
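For anyone who wants to reproduce the timing, a small harness (the `time` wrapper is mine, not part of `datasets`):
```python
import time

import datasets

start = time.perf_counter()
ds = datasets.load_dataset("bigcode/the-stack-dedup", streaming=True)
print(f"load_dataset took {time.perf_counter() - start:.0f}s")
# ~150s here, versus ~1-2s for most other datasets loaded the same way
```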
### Environment info
- `datasets` version: 2.11.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5846/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5845/comments | https://api.github.com/repos/huggingface/datasets/issues/5845/events | https://github.com/huggingface/datasets/pull/5845 | 1,706,253,251 | PR_kwDODunzps5QUMjS | 5,845 | Add `date_format` param to the CSV reader | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007592 / 0.011353 (-0.003761) | 0.005223 / 0.011008 (-0.005786) | 0.110218 / 0.038508 (0.071710) | 0.027644 / 0.023109 (0.004534) | 0.335063 / 0.275898 (0.059165) | 0.347102 / 0.323480 (0.023623) | 0.005107 / 0.007986 (-0.002878) | 0.003932 / 0.004328 (-0.000396) | 0.086095 / 0.004250 (0.081845) | 0.034735 / 0.037052 (-0.002317) | 0.329029 / 0.258489 (0.070540) | 0.370282 / 0.293841 (0.076441) | 0.043040 / 0.128546 (-0.085507) | 0.019626 / 0.075646 (-0.056021) | 0.336452 / 0.419271 (-0.082819) | 0.070365 / 0.043533 (0.026832) | 0.326881 / 0.255139 (0.071742) | 0.354984 / 0.283200 (0.071785) | 0.102605 / 0.141683 (-0.039077) | 1.459161 / 1.452155 (0.007007) | 1.453599 / 1.492716 (-0.039117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201021 / 0.018006 (0.183015) | 0.456415 / 0.000490 (0.455926) | 0.012349 / 0.000200 (0.012149) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025199 / 0.037411 (-0.012213) | 0.098536 / 0.014526 (0.084010) | 0.107528 / 0.176557 (-0.069028) | 0.160492 / 0.737135 (-0.576643) | 0.108660 / 0.296338 (-0.187679) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.527020 / 0.215209 (0.311811) | 5.357635 / 2.077655 (3.279980) | 2.062930 / 1.504120 (0.558811) | 1.783009 / 1.541195 (0.241815) | 1.840225 / 1.468490 
(0.371735) | 1.074278 / 4.584777 (-3.510499) | 4.710533 / 3.745712 (0.964821) | 2.611202 / 5.269862 (-2.658660) | 1.885487 / 4.565676 (-2.680189) | 0.123201 / 0.424275 (-0.301074) | 0.013880 / 0.007607 (0.006273) | 0.636511 / 0.226044 (0.410467) | 6.516075 / 2.268929 (4.247146) | 2.710138 / 55.444624 (-52.734486) | 2.046606 / 6.876477 (-4.829871) | 2.085907 / 2.142072 (-0.056166) | 1.199489 / 4.805227 (-3.605738) | 0.211668 / 6.500664 (-6.288996) | 0.075436 / 0.075469 (-0.000033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219771 / 1.841788 (-0.622016) | 14.276215 / 8.074308 (6.201907) | 16.611529 / 10.191392 (6.420137) | 0.221091 / 0.680424 (-0.459333) | 0.024922 / 0.534201 (-0.509279) | 0.431906 / 0.579283 (-0.147377) | 0.518863 / 0.434364 (0.084499) | 0.515366 / 0.540337 (-0.024971) | 0.640411 / 1.386936 (-0.746525) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007955 / 0.011353 (-0.003398) | 0.004813 / 0.011008 (-0.006196) | 0.076508 / 0.038508 (0.038000) | 0.028137 / 0.023109 (0.005028) | 0.349609 / 0.275898 (0.073711) | 0.403588 / 0.323480 (0.080109) | 0.005456 / 0.007986 (-0.002530) | 0.005677 / 0.004328 (0.001349) | 0.076882 / 0.004250 (0.072632) | 0.039832 / 0.037052 (0.002779) | 0.351930 / 0.258489 (0.093440) | 0.390492 / 0.293841 (0.096651) | 0.045199 / 0.128546 (-0.083347) | 0.023945 / 0.075646 (-0.051701) | 0.091140 / 0.419271 (-0.328132) | 0.057728 / 0.043533 (0.014195) | 0.370663 / 0.255139 (0.115524) | 0.380649 / 0.283200 (0.097449) | 0.097017 / 0.141683 (-0.044666) | 1.362248 / 1.452155 (-0.089907) | 1.445699 / 1.492716 (-0.047018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204207 / 0.018006 (0.186201) | 0.474471 / 0.000490 (0.473981) | 0.012187 / 0.000200 (0.011987) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023123 / 0.037411 (-0.014288) | 0.097547 / 0.014526 (0.083021) | 0.113877 / 0.176557 (-0.062679) | 0.158307 / 0.737135 (-0.578828) | 0.113876 / 0.296338 (-0.182462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519920 / 0.215209 (0.304711) | 5.384371 / 2.077655 (3.306716) | 2.263276 / 1.504120 (0.759156) | 1.960604 / 1.541195 (0.419409) | 2.022864 / 1.468490 (0.554374) | 1.015430 / 4.584777 (-3.569347) | 4.774426 / 3.745712 (1.028714) | 4.549598 / 5.269862 (-0.720264) | 2.412638 / 4.565676 (-2.153039) | 0.117983 / 0.424275 (-0.306292) | 0.013340 / 0.007607 (0.005733) | 0.639826 / 0.226044 (0.413782) | 6.491622 / 2.268929 (4.222693) | 2.946892 / 55.444624 (-52.497732) | 2.376393 / 6.876477 (-4.500084) | 2.285592 / 2.142072 (0.143519) | 1.185049 / 4.805227 (-3.620178) | 0.204127 / 6.500664 (-6.296537) | 0.070285 / 0.075469 (-0.005184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.439736 / 1.841788 (-0.402052) | 14.852087 / 8.074308 (6.777779) | 15.675742 / 10.191392 (5.484350) | 0.206577 / 0.680424 (-0.473846) | 0.031688 / 0.534201 (-0.502513) | 0.471003 / 0.579283 (-0.108280) | 0.505449 / 0.434364 (0.071085) | 0.506114 / 0.540337 (-0.034224) | 0.583752 / 1.386936 (-0.803184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d6fcff8a031db39cb31079bc1fa62ded6e35218c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012965 / 0.011353 (0.001612) | 0.006660 / 0.011008 (-0.004348) | 0.126060 / 0.038508 (0.087551) | 0.041154 / 0.023109 (0.018045) | 0.413428 / 0.275898 (0.137530) | 0.429035 / 0.323480 (0.105555) | 0.006680 / 0.007986 (-0.001305) | 0.005063 / 0.004328 (0.000734) | 0.092161 / 0.004250 (0.087911) | 0.056092 / 0.037052 (0.019039) | 0.421460 / 0.258489 (0.162971) | 0.450291 / 0.293841 (0.156450) | 0.050820 / 0.128546 (-0.077726) | 0.021392 / 0.075646 (-0.054255) | 0.426915 / 0.419271 (0.007643) | 0.064908 / 0.043533 (0.021375) | 0.406769 / 0.255139 (0.151630) | 0.434344 / 0.283200 (0.151144) | 0.127967 / 0.141683 (-0.013716) | 1.922414 / 1.452155 (0.470260) | 1.940717 / 1.492716 (0.448000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288024 / 0.018006 (0.270017) | 0.615859 / 0.000490 (0.615369) | 0.007095 / 0.000200 (0.006895) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028182 / 0.037411 (-0.009230) | 0.126277 / 0.014526 (0.111752) | 0.131687 / 0.176557 (-0.044870) | 0.206191 / 0.737135 (-0.530944) | 0.141799 / 0.296338 (-0.154539) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631580 / 0.215209 (0.416371) | 6.141942 / 2.077655 (4.064287) | 2.476721 / 1.504120 (0.972602) | 2.128850 / 1.541195 (0.587655) | 2.236468 / 1.468490 
(0.767978) | 1.188665 / 4.584777 (-3.396112) | 5.481179 / 3.745712 (1.735467) | 3.120333 / 5.269862 (-2.149529) | 2.365889 / 4.565676 (-2.199787) | 0.145081 / 0.424275 (-0.279194) | 0.015866 / 0.007607 (0.008259) | 0.795650 / 0.226044 (0.569605) | 7.595289 / 2.268929 (5.326361) | 3.174418 / 55.444624 (-52.270207) | 2.905207 / 6.876477 (-3.971270) | 2.428263 / 2.142072 (0.286191) | 1.408900 / 4.805227 (-3.396328) | 0.265485 / 6.500664 (-6.235179) | 0.083882 / 0.075469 (0.008413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517025 / 1.841788 (-0.324762) | 18.110288 / 8.074308 (10.035980) | 20.810003 / 10.191392 (10.618611) | 0.210380 / 0.680424 (-0.470044) | 0.030180 / 0.534201 (-0.504021) | 0.523453 / 0.579283 (-0.055830) | 0.603896 / 0.434364 (0.169532) | 0.622554 / 0.540337 (0.082216) | 0.737973 / 1.386936 (-0.648963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009795 / 0.011353 (-0.001558) | 0.006269 / 0.011008 (-0.004739) | 0.099938 / 0.038508 (0.061430) | 0.035162 / 0.023109 (0.012052) | 0.506353 / 0.275898 (0.230455) | 0.527804 / 0.323480 (0.204324) | 0.007211 / 0.007986 (-0.000775) | 0.005498 / 0.004328 (0.001169) | 0.098325 / 0.004250 (0.094075) | 0.054513 / 0.037052 (0.017461) | 0.525764 / 0.258489 (0.267274) | 0.576699 / 0.293841 (0.282858) | 0.052800 / 0.128546 (-0.075747) | 0.021192 / 0.075646 (-0.054454) | 0.117676 / 0.419271 (-0.301596) | 0.055415 / 0.043533 (0.011882) | 0.516746 / 0.255139 (0.261607) | 0.528417 / 0.283200 (0.245217) | 0.116947 / 0.141683 (-0.024735) | 1.757864 / 1.452155 (0.305709) | 2.043632 / 1.492716 (0.550916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284018 / 0.018006 (0.266011) | 0.595086 / 0.000490 (0.594596) | 0.001945 / 0.000200 (0.001745) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032255 / 0.037411 (-0.005157) | 0.128201 / 0.014526 (0.113676) | 0.139189 / 0.176557 (-0.037367) | 0.199750 / 0.737135 (-0.537385) | 0.149406 / 0.296338 (-0.146933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652184 / 0.215209 (0.436975) | 6.453319 / 2.077655 (4.375664) | 2.831566 / 1.504120 (1.327446) | 2.453064 / 1.541195 (0.911869) | 2.622056 / 1.468490 (1.153566) | 1.191279 / 4.584777 (-3.393498) | 5.504720 / 3.745712 (1.759007) | 5.916900 / 5.269862 (0.647038) | 2.974400 / 4.565676 (-1.591277) | 0.142851 / 0.424275 (-0.281424) | 0.015241 / 0.007607 (0.007634) | 0.917537 / 0.226044 (0.691493) | 8.277645 / 2.268929 (6.008717) | 3.700495 / 55.444624 (-51.744130) | 3.047127 / 6.876477 (-3.829350) | 3.093216 / 2.142072 (0.951143) | 1.413529 / 4.805227 (-3.391698) | 0.259395 / 6.500664 (-6.241270) | 0.083144 / 0.075469 (0.007675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632240 / 1.841788 (-0.209548) | 18.687403 / 8.074308 (10.613095) | 20.134091 / 10.191392 (9.942699) | 0.238792 / 0.680424 (-0.441632) | 0.027645 / 0.534201 (-0.506556) | 0.518200 / 0.579283 (-0.061083) | 0.613535 / 0.434364 (0.179171) | 0.631414 / 0.540337 (0.091076) | 0.724658 / 1.386936 (-0.662278) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac7caa5e195ad76c7e8ef98914813383f4f668cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006228 / 0.011353 (-0.005125) | 0.004517 / 0.011008 (-0.006492) | 0.097998 / 0.038508 (0.059490) | 0.027903 / 0.023109 (0.004793) | 0.309789 / 0.275898 (0.033891) | 0.332784 / 0.323480 (0.009304) | 0.004757 / 0.007986 (-0.003228) | 0.003348 / 0.004328 (-0.000981) | 0.075193 / 0.004250 (0.070942) | 0.037382 / 0.037052 (0.000330) | 0.306929 / 0.258489 (0.048440) | 0.347304 / 0.293841 (0.053463) | 0.030235 / 0.128546 (-0.098312) | 0.011516 / 0.075646 (-0.064131) | 0.322249 / 0.419271 (-0.097023) | 0.044125 / 0.043533 (0.000592) | 0.303874 / 0.255139 (0.048735) | 0.326808 / 0.283200 (0.043608) | 0.088137 / 0.141683 (-0.053546) | 1.521426 / 1.452155 (0.069272) | 1.573823 / 1.492716 (0.081107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203204 / 0.018006 (0.185197) | 0.402247 / 0.000490 (0.401757) | 0.003146 / 0.000200 (0.002946) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022955 / 0.037411 (-0.014456) | 0.096059 / 0.014526 (0.081533) | 0.105552 / 0.176557 (-0.071004) | 0.167459 / 0.737135 (-0.569676) | 0.106723 / 0.296338 (-0.189615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454626 / 0.215209 (0.239417) | 4.556346 / 2.077655 (2.478691) | 2.220349 / 1.504120 (0.716229) | 2.011820 / 1.541195 (0.470625) | 2.048149 / 1.468490 
(0.579659) | 0.697583 / 4.584777 (-3.887194) | 3.428394 / 3.745712 (-0.317318) | 1.863872 / 5.269862 (-3.405989) | 1.159691 / 4.565676 (-3.405985) | 0.082598 / 0.424275 (-0.341677) | 0.012202 / 0.007607 (0.004594) | 0.555617 / 0.226044 (0.329572) | 5.545481 / 2.268929 (3.276553) | 2.650850 / 55.444624 (-52.793775) | 2.305864 / 6.876477 (-4.570613) | 2.392252 / 2.142072 (0.250179) | 0.808512 / 4.805227 (-3.996716) | 0.152086 / 6.500664 (-6.348578) | 0.066440 / 0.075469 (-0.009029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211789 / 1.841788 (-0.629999) | 13.515546 / 8.074308 (5.441238) | 13.859870 / 10.191392 (3.668478) | 0.150335 / 0.680424 (-0.530088) | 0.016578 / 0.534201 (-0.517623) | 0.379145 / 0.579283 (-0.200138) | 0.393735 / 0.434364 (-0.040628) | 0.460219 / 0.540337 (-0.080118) | 0.555896 / 1.386936 (-0.831040) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006402 / 0.011353 (-0.004950) | 0.004558 / 0.011008 (-0.006450) | 0.077332 / 0.038508 (0.038824) | 0.027955 / 0.023109 (0.004846) | 0.407877 / 0.275898 (0.131979) | 0.432552 / 0.323480 (0.109072) | 0.004850 / 0.007986 (-0.003135) | 0.003329 / 0.004328 (-0.000999) | 0.075767 / 0.004250 (0.071517) | 0.035940 / 0.037052 (-0.001112) | 0.419544 / 0.258489 (0.161055) | 0.454672 / 0.293841 (0.160831) | 0.030461 / 0.128546 (-0.098085) | 0.011536 / 0.075646 (-0.064111) | 0.085774 / 0.419271 (-0.333498) | 0.039408 / 0.043533 (-0.004125) | 0.389909 / 0.255139 (0.134770) | 0.403287 / 0.283200 (0.120088) | 0.088385 / 0.141683 (-0.053298) | 1.596840 / 1.452155 (0.144686) | 1.659296 / 1.492716 (0.166580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216349 / 0.018006 (0.198342) | 0.394969 / 0.000490 (0.394479) | 0.000408 / 0.000200 (0.000208) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024346 / 0.037411 (-0.013066) | 0.099609 / 0.014526 (0.085084) | 0.106779 / 0.176557 (-0.069778) | 0.156889 / 0.737135 (-0.580247) | 0.110625 / 0.296338 (-0.185714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443809 / 0.215209 (0.228600) | 4.450524 / 2.077655 (2.372870) | 2.151694 / 1.504120 (0.647574) | 1.952521 / 1.541195 (0.411326) | 1.963320 / 1.468490 (0.494830) | 0.709291 / 4.584777 (-3.875486) | 3.415708 / 3.745712 (-0.330005) | 1.850498 / 5.269862 (-3.419363) | 1.164355 / 4.565676 (-3.401321) | 0.084977 / 0.424275 (-0.339298) | 0.013284 / 0.007607 (0.005677) | 0.555103 / 0.226044 (0.329059) | 5.583587 / 2.268929 (3.314658) | 2.608754 / 55.444624 (-52.835870) | 2.264079 / 6.876477 (-4.612398) | 2.272455 / 2.142072 (0.130382) | 0.820849 / 4.805227 (-3.984379) | 0.155063 / 6.500664 (-6.345601) | 0.069709 / 0.075469 (-0.005760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293285 / 1.841788 (-0.548503) | 14.181867 / 8.074308 (6.107559) | 13.021280 / 10.191392 (2.829888) | 0.130101 / 0.680424 (-0.550323) | 0.016461 / 0.534201 (-0.517740) | 0.383651 / 0.579283 (-0.195632) | 0.387353 / 0.434364 (-0.047011) | 0.443351 / 0.540337 (-0.096986) | 0.529448 / 1.386936 (-0.857488) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05145d50b5bb1b7b42b76516cd6492d4868c46ba \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007513 / 0.011353 (-0.003840) | 0.005328 / 0.011008 (-0.005680) | 0.096937 / 0.038508 (0.058429) | 0.036230 / 0.023109 (0.013121) | 0.325808 / 0.275898 (0.049910) | 0.363601 / 0.323480 (0.040121) | 0.006130 / 0.007986 (-0.001855) | 0.004352 / 0.004328 (0.000023) | 0.073543 / 0.004250 (0.069293) | 0.054114 / 0.037052 (0.017062) | 0.328952 / 0.258489 (0.070463) | 0.366943 / 0.293841 (0.073102) | 0.035768 / 0.128546 (-0.092778) | 0.012505 / 0.075646 (-0.063142) | 0.332260 / 0.419271 (-0.087012) | 0.066673 / 0.043533 (0.023140) | 0.323866 / 0.255139 (0.068727) | 0.341311 / 0.283200 (0.058112) | 0.129898 / 0.141683 (-0.011785) | 1.456890 / 1.452155 (0.004735) | 1.546933 / 1.492716 (0.054217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299236 / 0.018006 (0.281229) | 0.496134 / 0.000490 (0.495645) | 0.004233 / 0.000200 (0.004033) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028089 / 0.037411 (-0.009322) | 0.104723 / 0.014526 (0.090197) | 0.121032 / 0.176557 (-0.055525) | 0.179916 / 0.737135 (-0.557220) | 0.126628 / 0.296338 (-0.169711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403497 / 0.215209 (0.188288) | 4.052481 / 2.077655 (1.974827) | 1.804419 / 1.504120 (0.300299) | 1.619833 / 1.541195 (0.078638) | 1.732438 / 1.468490 
(0.263948) | 0.702474 / 4.584777 (-3.882303) | 3.808973 / 3.745712 (0.063261) | 3.682764 / 5.269862 (-1.587098) | 1.919184 / 4.565676 (-2.646493) | 0.086638 / 0.424275 (-0.337637) | 0.012265 / 0.007607 (0.004658) | 0.501273 / 0.226044 (0.275229) | 5.010918 / 2.268929 (2.741989) | 2.278114 / 55.444624 (-53.166510) | 1.942266 / 6.876477 (-4.934211) | 2.101982 / 2.142072 (-0.040091) | 0.847622 / 4.805227 (-3.957606) | 0.172973 / 6.500664 (-6.327691) | 0.066884 / 0.075469 (-0.008586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187609 / 1.841788 (-0.654179) | 15.089485 / 8.074308 (7.015177) | 14.787398 / 10.191392 (4.596006) | 0.168254 / 0.680424 (-0.512170) | 0.018266 / 0.534201 (-0.515935) | 0.423204 / 0.579283 (-0.156079) | 0.435238 / 0.434364 (0.000874) | 0.512473 / 0.540337 (-0.027864) | 0.618091 / 1.386936 (-0.768845) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007249 / 0.011353 (-0.004104) | 0.005297 / 0.011008 (-0.005711) | 0.076428 / 0.038508 (0.037920) | 0.033565 / 0.023109 (0.010456) | 0.373756 / 0.275898 (0.097858) | 0.407405 / 0.323480 (0.083925) | 0.006100 / 0.007986 (-0.001886) | 0.006482 / 0.004328 (0.002153) | 0.075884 / 0.004250 (0.071633) | 0.055338 / 0.037052 (0.018286) | 0.378721 / 0.258489 (0.120232) | 0.427065 / 0.293841 (0.133224) | 0.036285 / 0.128546 (-0.092261) | 0.012460 / 0.075646 (-0.063186) | 0.087641 / 0.419271 (-0.331630) | 0.048199 / 0.043533 (0.004666) | 0.386785 / 0.255139 (0.131646) | 0.386702 / 0.283200 (0.103503) | 0.110087 / 0.141683 (-0.031596) | 1.511204 / 1.452155 (0.059050) | 1.585671 / 1.492716 (0.092954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313558 / 0.018006 (0.295552) | 0.496991 / 0.000490 (0.496501) | 0.001492 / 0.000200 (0.001292) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031814 / 0.037411 (-0.005597) | 0.113486 / 0.014526 (0.098960) | 0.125208 / 0.176557 (-0.051348) | 0.174469 / 0.737135 (-0.562666) | 0.131095 / 0.296338 (-0.165244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439282 / 0.215209 (0.224073) | 4.362286 / 2.077655 (2.284631) | 2.153271 / 1.504120 (0.649151) | 1.990482 / 1.541195 (0.449288) | 2.103322 / 1.468490 (0.634831) | 0.692522 / 4.584777 (-3.892254) | 3.861931 / 3.745712 (0.116219) | 3.686294 / 5.269862 (-1.583567) | 1.734525 / 4.565676 (-2.831152) | 0.085057 / 0.424275 (-0.339218) | 0.012116 / 0.007607 (0.004509) | 0.547996 / 0.226044 (0.321952) | 5.513835 / 2.268929 (3.244906) | 2.723829 / 55.444624 (-52.720795) | 2.404715 / 6.876477 (-4.471761) | 2.514768 / 2.142072 (0.372696) | 0.834972 / 4.805227 (-3.970255) | 0.168261 / 6.500664 (-6.332403) | 0.066464 / 0.075469 (-0.009005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259923 / 1.841788 (-0.581865) | 15.646277 / 8.074308 (7.571969) | 13.097598 / 10.191392 (2.906206) | 0.187991 / 0.680424 (-0.492433) | 0.017358 / 0.534201 (-0.516843) | 0.427979 / 0.579283 (-0.151304) | 0.425747 / 0.434364 (-0.008617) | 0.501907 / 0.540337 (-0.038431) | 0.595106 / 1.386936 (-0.791830) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009378 / 0.011353 (-0.001975) | 0.006434 / 0.011008 (-0.004574) | 0.120603 / 0.038508 (0.082095) | 0.042929 / 0.023109 (0.019820) | 0.366853 / 0.275898 (0.090955) | 0.436795 / 0.323480 (0.113315) | 0.007730 / 0.007986 (-0.000256) | 0.004842 / 0.004328 (0.000513) | 0.091058 / 0.004250 (0.086808) | 0.058256 / 0.037052 (0.021203) | 0.378692 / 0.258489 (0.120203) | 0.467384 / 0.293841 (0.173543) | 0.042948 / 0.128546 (-0.085598) | 0.015172 / 0.075646 (-0.060475) | 0.409225 / 0.419271 (-0.010046) | 0.083672 / 0.043533 (0.040140) | 0.390088 / 0.255139 (0.134949) | 0.406965 / 0.283200 (0.123765) | 0.142132 / 0.141683 (0.000449) | 1.765737 / 1.452155 (0.313582) | 1.895419 / 1.492716 (0.402703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244052 / 0.018006 (0.226046) | 0.553383 / 0.000490 (0.552893) | 0.006798 / 0.000200 (0.006598) | 0.000227 / 0.000054 (0.000173) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032032 / 0.037411 (-0.005380) | 0.129990 / 0.014526 (0.115464) | 0.140338 / 0.176557 (-0.036219) | 0.212155 / 0.737135 (-0.524980) | 0.147395 / 0.296338 (-0.148943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478760 / 0.215209 (0.263551) | 4.751335 / 2.077655 (2.673680) | 2.164755 / 1.504120 (0.660635) | 1.944288 / 1.541195 (0.403094) | 2.077657 / 1.468490 
(0.609167) | 0.818519 / 4.584777 (-3.766258) | 4.689013 / 3.745712 (0.943301) | 2.484079 / 5.269862 (-2.785782) | 1.788632 / 4.565676 (-2.777044) | 0.100484 / 0.424275 (-0.323791) | 0.013838 / 0.007607 (0.006231) | 0.589650 / 0.226044 (0.363605) | 5.859461 / 2.268929 (3.590533) | 2.670025 / 55.444624 (-52.774599) | 2.688709 / 6.876477 (-4.187768) | 2.408060 / 2.142072 (0.265988) | 0.972107 / 4.805227 (-3.833120) | 0.194425 / 6.500664 (-6.306239) | 0.076077 / 0.075469 (0.000608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430150 / 1.841788 (-0.411638) | 17.710507 / 8.074308 (9.636199) | 16.210789 / 10.191392 (6.019397) | 0.163940 / 0.680424 (-0.516484) | 0.020295 / 0.534201 (-0.513906) | 0.472596 / 0.579283 (-0.106687) | 0.483107 / 0.434364 (0.048743) | 0.585269 / 0.540337 (0.044931) | 0.705526 / 1.386936 (-0.681410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008864 / 0.011353 (-0.002489) | 0.006095 / 0.011008 (-0.004913) | 0.088702 / 0.038508 (0.050194) | 0.041596 / 0.023109 (0.018486) | 0.453515 / 0.275898 (0.177617) | 0.476217 / 0.323480 (0.152737) | 0.007574 / 0.007986 (-0.000412) | 0.004727 / 0.004328 (0.000398) | 0.087271 / 0.004250 (0.083021) | 0.059631 / 0.037052 (0.022578) | 0.449379 / 0.258489 (0.190890) | 0.494436 / 0.293841 (0.200595) | 0.043448 / 0.128546 (-0.085098) | 0.014580 / 0.075646 (-0.061067) | 0.103836 / 0.419271 (-0.315435) | 0.057537 / 0.043533 (0.014004) | 0.449359 / 0.255139 (0.194220) | 0.447577 / 0.283200 (0.164377) | 0.123600 / 0.141683 (-0.018083) | 1.748448 / 1.452155 (0.296294) | 1.902116 / 1.492716 (0.409399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237214 / 0.018006 (0.219207) | 0.497648 / 0.000490 (0.497158) | 0.003519 / 0.000200 (0.003319) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034477 / 0.037411 (-0.002934) | 0.132627 / 0.014526 (0.118101) | 0.139721 / 0.176557 (-0.036836) | 0.195705 / 0.737135 (-0.541430) | 0.150762 / 0.296338 (-0.145577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521306 / 0.215209 (0.306097) | 5.184982 / 2.077655 (3.107328) | 2.503979 / 1.504120 (0.999859) | 2.301054 / 1.541195 (0.759860) | 2.352713 / 1.468490 (0.884222) | 0.819804 / 4.584777 (-3.764973) | 4.584011 / 3.745712 (0.838299) | 2.497311 / 5.269862 (-2.772550) | 1.561262 / 4.565676 (-3.004414) | 0.101814 / 0.424275 (-0.322461) | 0.014078 / 0.007607 (0.006471) | 0.666564 / 0.226044 (0.440520) | 6.616379 / 2.268929 (4.347450) | 3.263892 / 55.444624 (-52.180732) | 2.891774 / 6.876477 (-3.984703) | 2.945260 / 2.142072 (0.803188) | 1.014379 / 4.805227 (-3.790848) | 0.201762 / 6.500664 (-6.298902) | 0.078012 / 0.075469 (0.002543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567808 / 1.841788 (-0.273980) | 19.096552 / 8.074308 (11.022244) | 15.522285 / 10.191392 (5.330893) | 0.226568 / 0.680424 (-0.453856) | 0.021078 / 0.534201 (-0.513123) | 0.501686 / 0.579283 (-0.077597) | 0.517575 / 0.434364 (0.083211) | 0.589685 / 0.540337 (0.049348) | 0.705053 / 1.386936 (-0.681883) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n"
] | 2023-05-11T17:29:57 | 2023-05-15T07:39:13 | 2023-05-12T15:14:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5845",
"html_url": "https://github.com/huggingface/datasets/pull/5845",
"diff_url": "https://github.com/huggingface/datasets/pull/5845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5845.patch",
"merged_at": "2023-05-12T15:14:48"
} | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5845/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5844/comments | https://api.github.com/repos/huggingface/datasets/issues/5844/events | https://github.com/huggingface/datasets/issues/5844 | 1,705,907,812 | I_kwDODunzps5lrhZk | 5,844 | TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ... | {
"login": "chen-coding",
"id": 54010030,
"node_id": "MDQ6VXNlcjU0MDEwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/54010030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chen-coding",
"html_url": "https://github.com/chen-coding",
"followers_url": "https://api.github.com/users/chen-coding/followers",
"following_url": "https://api.github.com/users/chen-coding/following{/other_user}",
"gists_url": "https://api.github.com/users/chen-coding/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chen-coding/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chen-coding/subscriptions",
"organizations_url": "https://api.github.com/users/chen-coding/orgs",
"repos_url": "https://api.github.com/users/chen-coding/repos",
"events_url": "https://api.github.com/users/chen-coding/events{/privacy}",
"received_events_url": "https://api.github.com/users/chen-coding/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-05-11T14:15:01 | 2023-05-11T14:15:01 | null | NONE | null | null | null | ### Describe the bug
```
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```
When I use `load_dataset()` I get the error:
```python
from datasets import load_dataset

datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
```
Detailed error information is as follows:

```
Traceback (most recent call last):
File "C:/Users/CHENJIALEI/Desktop/NLPCC2023/NLPCC23_SciMRC-main/test2.py", line 9, in <module>
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 1747, in load_dataset
builder_instance.download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 1521, in _prepare_split
writer.write_table(table)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\arrow_writer.py", line 540, in write_table
pa_table = table_cast(pa_table, self._schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2069, in table_cast
return cast_table_to_schema(table, schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1913, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
```

It is successful when I load the data separately:

```python
raw_data = load_dataset("json", data_files="./data/train.json", cache_dir="./cache")
```
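A likely cause is that the three JSON files carry slightly different nested schemas for the `answer` field, so Arrow cannot infer a single common type when the files are loaded together. Below is a minimal workaround sketch, not a confirmed fix: the shared schema is assumed from the error message, and each split is loaded separately and cast to that schema before being combined:

```python
from datasets import DatasetDict, Sequence, Value, load_dataset

# Assumed shared schema for the nested "answer" struct, reconstructed from the
# error message; adjust field names and types to match the actual data.
answer_feature = {
    "unanswerable": Value("bool"),
    "answerType": Value("string"),
    "free_form_answer": Value("string"),
    "evidence": Sequence(Value("string")),
    "evidenceAnnotate": Sequence(Value("string")),
    "highlighted_evidence": Sequence(Value("string")),
}

datafiles = {"train": "./data/train.json", "validation": "./data/validation.json", "test": "./data/test.json"}
raw_data = DatasetDict(
    {
        split: load_dataset("json", data_files=path, cache_dir="./cache")["train"]
        .cast_column("answer", answer_feature)
        for split, path in datafiles.items()
    }
)
```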
### Steps to reproduce the bug
```python
from datasets import load_dataset

datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
```
### Expected behavior
The dataset should load successfully.
### Environment info
datasets == 2.6.1
pyarrow == 8.0.0
python == 3.8
platform: Windows 11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5844/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Absurdly slow iteration | {
"login": "fecet",
"id": 41792945,
"node_id": "MDQ6VXNlcjQxNzkyOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/41792945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fecet",
"html_url": "https://github.com/fecet",
"followers_url": "https://api.github.com/users/fecet/followers",
"following_url": "https://api.github.com/users/fecet/following{/other_user}",
"gists_url": "https://api.github.com/users/fecet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fecet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fecet/subscriptions",
"organizations_url": "https://api.github.com/users/fecet/orgs",
"repos_url": "https://api.github.com/users/fecet/repos",
"events_url": "https://api.github.com/users/fecet/events{/privacy}",
"received_events_url": "https://api.github.com/users/fecet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array2D) or [Array3D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46",
"Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```",
"I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?",
"Thanks! I convert my dataset feature to Array3D and this speed became awesome!"
] | 2023-05-11T08:04:09 | 2023-05-15T15:38:13 | 2023-05-15T15:38:13 | NONE | null | null | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
import torch
from tqdm import tqdm
from datasets import Dataset

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of a to an image shape, like:
```python
a = torch.randn(3, 224, 224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
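For reference, the fix that resolved this in the comments was to declare a fixed-shape array feature, which lets `datasets` skip the slow per-example conversion path. A minimal sketch (shape, dtype, and row count are placeholders):

```python
import torch
from datasets import Array3D, Dataset, Features

imgs = torch.randn(1000, 3, 224, 224)  # placeholder data
features = Features({"tensor": Array3D(shape=(3, 224, 224), dtype="float32")})
ds = Dataset.from_dict({"tensor": imgs.numpy()}, features=features).with_format("torch")
```

Batched access (`ds.iter(batch_size=...)` or a `DataLoader`) gives a further large speedup, as shown in the comments.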
### Steps to reproduce the bug
```python
import torch
from tqdm import tqdm
from datasets import Dataset

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
### Expected behavior
Iteration should be faster.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5840/comments | https://api.github.com/repos/huggingface/datasets/issues/5840/events | https://github.com/huggingface/datasets/issues/5840 | 1,705,212,085 | I_kwDODunzps5lo3i1 | 5,840 | load model error. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Please report this in the `transformers` repo, as it's not related to `datasets`"
] | 2023-05-11T07:12:38 | 2023-05-12T13:44:07 | 2023-05-12T13:44:06 | NONE | null | null | null | ### Describe the bug
I trained a model with DeepSpeed; when I load the final checkpoint I get the following error:

```
OSError: Can't load tokenizer for '/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/fm001/hzl/Project/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.
```

My load command is: `python chat.py --path /XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor/`
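For context: this error usually means the `actor` directory contains the trained weights but no tokenizer files. A hedged sketch of one possible fix (the base checkpoint name is an assumption, not taken from this report) is to save the matching tokenizer into the same directory before loading:

```python
from transformers import AutoTokenizer

# Assumption: the actor was fine-tuned from a Bloom checkpoint such as this one.
base_model = "bigscience/bloom-1b1"
actor_dir = "/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor"

AutoTokenizer.from_pretrained(base_model).save_pretrained(actor_dir)
```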
### Steps to reproduce the bug

### Expected behavior

### Environment info
 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5840/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5842/comments | https://api.github.com/repos/huggingface/datasets/issues/5842/events | https://github.com/huggingface/datasets/issues/5842 | 1,705,510,602 | I_kwDODunzps5lqAbK | 5,842 | Remove columns in iterable dataset | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Transferring this issue as it's related to the π€ Datasets library ",
"Hi @surya-narayanan! Could you provide some code snippet?",
"This method has been recently added to the `IterableDataset`, so you need to update the `datasets`' installation (`pip install -U datasets`) to use it."
] | 2023-05-11T03:48:46 | 2023-06-21T16:36:42 | 2023-06-21T16:36:41 | NONE | null | null | null | ### Feature request
Right now, `remove_columns()` raises a `NotImplementedError` for iterable-style datasets.
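As the comments on this issue note, `remove_columns()` has since been implemented for `IterableDataset`, so upgrading `datasets` enables it. A minimal sketch (the file and column names are made up):

```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.csv", streaming=True)["train"]
ds = ds.remove_columns(["unused_column"])  # available on IterableDataset in recent releases
```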
### Motivation
It would be great to have the same functionality irrespective of whether one is using an iterable or a map-style dataset.
### Your contribution
hope and courage. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5842/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5843/comments | https://api.github.com/repos/huggingface/datasets/issues/5843/events | https://github.com/huggingface/datasets/issues/5843 | 1,705,514,551 | I_kwDODunzps5lqBY3 | 5,843 | Can't add iterable datasets to a Dataset Dict. | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Transferring as this is relating to the π€ Datasets library",
"You need to use `IterableDatasetDict` instead of `DatasetDict` for iterable datasets."
] | 2023-05-11T02:09:29 | 2023-05-25T04:51:59 | 2023-05-25T04:51:59 | NONE | null | null | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Getting the following error:

```
TypeError: Values in `DatasetDict` should be of type `Dataset` but got type '<class 'datasets.iterable_dataset.IterableDataset'>'
```
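As the answer in the comments indicates, streaming splits belong in an `IterableDatasetDict` rather than a `DatasetDict`. A minimal sketch (the dataset name is illustrative):

```python
from datasets import IterableDatasetDict, load_dataset

splits = IterableDatasetDict(
    {
        "train": load_dataset("c4", "en", split="train", streaming=True),
        "validation": load_dataset("c4", "en", split="validation", streaming=True),
    }
)
```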
### Expected behavior
Should be able to add iterable datasets to a `DatasetDict`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5843/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5839/comments | https://api.github.com/repos/huggingface/datasets/issues/5839/events | https://github.com/huggingface/datasets/issues/5839 | 1,704,554,718 | I_kwDODunzps5lmXDe | 5,839 | Make models/functions optimized with `torch.compile` hashable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-10T20:02:08 | 2023-05-10T20:02:08 | null | CONTRIBUTOR | null | null | null | As reported in https://github.com/huggingface/datasets/issues/5819, hashing functions/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).
The solutions to consider:
1. hashing/pickling the original, uncompiled version of a compiled model/function (attributes `_orig_mod`/`_torchdynamo_orig_callable`); less precise than the 2nd option, as it ignores the other params of `torch.compile` (see the sketch below)
2. wait for https://github.com/pytorch/pytorch/issues/101107 to be resolved
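A rough sketch of option 1 (the unwrapping helper below is hypothetical; the real integration point in `datasets`' fingerprinting code may differ):

```python
import torch
from datasets.fingerprint import Hasher

def unwrap_compiled(obj):
    # torch.compile exposes the original callable on these attributes
    if hasattr(obj, "_orig_mod"):  # compiled nn.Module
        return obj._orig_mod
    if hasattr(obj, "_torchdynamo_orig_callable"):  # compiled function
        return obj._torchdynamo_orig_callable
    return obj

compiled = torch.compile(torch.nn.Linear(2, 2))
print(Hasher.hash(unwrap_compiled(compiled)))  # hashes the uncompiled module instead
```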
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5839/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5838/comments | https://api.github.com/repos/huggingface/datasets/issues/5838/events | https://github.com/huggingface/datasets/issues/5838 | 1,703,210,848 | I_kwDODunzps5lhO9g | 5,838 | Streaming support for `load_from_disk` | {
"login": "Nilabhra",
"id": 5437792,
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nilabhra",
"html_url": "https://github.com/Nilabhra",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ",
"@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?",
"Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it / stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).",
"@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.",
"@lhoestq\r\nI tried the method you mentioned. This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?",
"Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?",
"@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3://<bucket name>/<data folder>/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~/.../datasets/src/datasets/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~/.../datasets/src/datasets/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~/.../datasets/src/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, 
disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:444, in <listcomp>(.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in <listcomp>(.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/core.py:102, in OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 
cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:115, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, 
refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```",
"Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n",
"@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.",
"Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway"
] | 2023-05-10T06:25:22 | 2023-05-12T09:37:45 | 2023-05-12T09:37:45 | NONE | null | null | null | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets stored in object stores are very large, so being able to stream the data from the buckets becomes essential.
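For context, the workaround the comment thread converges on is to export the dataset to parquet files with explicit `.parquet` extensions and stream those with `load_dataset`. A sketch under assumptions (the bucket name, endpoint, and shard count are placeholders):

```python
import s3fs
from datasets import load_dataset, load_from_disk

fs = s3fs.S3FileSystem(client_kwargs={"endpoint_url": "http://localhost:9000"})  # e.g. a local MinIO

# Export an existing on-disk dataset as sharded parquet files and upload them.
ds = load_from_disk("path/to/local_dataset")
num_shards = 4
for i in range(num_shards):
    local_path = f"train-{i:05d}.parquet"
    ds.shard(num_shards=num_shards, index=i).to_parquet(local_path)
    fs.put(local_path, f"my-bucket/data/{local_path}")

# Stream directly from the bucket (note the explicit .parquet extensions).
streamed = load_dataset(
    "parquet",
    data_files=[f"s3://my-bucket/data/train-{i:05d}.parquet" for i in range(num_shards)],
    storage_options=fs.storage_options,
    streaming=True,
)
```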
### Your contribution
I'd be happy to contribute this feature if I could get some guidance on how to do so. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5838/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5837/comments | https://api.github.com/repos/huggingface/datasets/issues/5837/events | https://github.com/huggingface/datasets/issues/5837 | 1,703,019,816 | I_kwDODunzps5lggUo | 5,837 | Use DeepSpeed to load my own ".csv" dataset. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Doing `load_dataset(\"path/to/data.csv\")` is not supported yet, but you can do\r\n\r\n```python\r\nds = load_dataset(\"csv\", data_files=[\"path/to/data.csv\"])\r\n```",
"@lhoestq thank you.",
"The other question: \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1498, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1127, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 708, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 362, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 306, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '/home/fm001/hzl/Data/qa/' at /\r\n>>> mydata = load_dataset(\"/home/fm001/hzl/Data/qa/\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1508, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 115, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/fm001/.cache/huggingface/modules/datasets_modules/datasets/qa/b8b9f481eff9d17b769b4b50f30a51da32b47c94d1af4d2bdffb9fc2c589513a/qa.py\", line 2, in <module>\r\n mydata = load_dataset(\"/home/fm001/hzl/Data/qa/\")\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1524, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\nTypeError: 'NoneType' object is not callable\r\n\r\nAnd I follow the setting with 
https://huggingface.co/docs/datasets/dataset_script"
] | 2023-05-10T02:39:28 | 2023-05-15T03:51:36 | null | NONE | null | null | null | ### Describe the bug
When I use DeepSpeed to train a model with my own "XXX.csv" dataset, I get the following error:

```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1767, in load_dataset
builder_instance = load_dataset_builder(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1498, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/fm001/hzl/Data/qa.csv/qa.csv.py or any data file in the same directory.
```

### Steps to reproduce the bug
My code is:

```python
from datasets import load_dataset

mydata = load_dataset("/home/fm001/hzl/Data/qa.csv")
```
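As the first comment on this issue points out, a local path ending in `.csv` is interpreted as a place to look for a dataset script; the file must instead be passed through `data_files` with the `csv` builder:

```python
from datasets import load_dataset

mydata = load_dataset("csv", data_files=["/home/fm001/hzl/Data/qa.csv"])
```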
### Expected behavior

### Environment info
 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5837/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5836/comments | https://api.github.com/repos/huggingface/datasets/issues/5836/events | https://github.com/huggingface/datasets/pull/5836 | 1,702,773,316 | PR_kwDODunzps5QIgzu | 5,836 | [docs] Custom decoding transforms | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5836). All of your documentation changes will be reflected on that endpoint.",
"The error seems unrelated to the changes, so feel free to merge.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006562 / 0.011353 (-0.004791) | 0.004568 / 0.011008 (-0.006440) | 0.098151 / 0.038508 (0.059643) | 0.028117 / 0.023109 (0.005008) | 0.305442 / 0.275898 (0.029544) | 0.338288 / 0.323480 (0.014808) | 0.005012 / 0.007986 (-0.002973) | 0.003415 / 0.004328 (-0.000913) | 0.075022 / 0.004250 (0.070771) | 0.036869 / 0.037052 (-0.000183) | 0.301427 / 0.258489 (0.042937) | 0.348485 / 0.293841 (0.054644) | 0.030761 / 0.128546 (-0.097785) | 0.011461 / 0.075646 (-0.064185) | 0.321987 / 0.419271 (-0.097285) | 0.042885 / 0.043533 (-0.000648) | 0.300691 / 0.255139 (0.045552) | 0.333208 / 0.283200 (0.050008) | 0.090203 / 0.141683 (-0.051480) | 1.459744 / 1.452155 (0.007590) | 1.522960 / 1.492716 (0.030243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213219 / 0.018006 (0.195213) | 0.408118 / 0.000490 (0.407629) | 0.003716 / 0.000200 (0.003516) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023060 / 0.037411 (-0.014351) | 0.097423 / 0.014526 (0.082897) | 0.103988 / 0.176557 (-0.072568) | 0.162793 / 0.737135 (-0.574343) | 0.108282 / 0.296338 (-0.188056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431628 / 0.215209 (0.216419) | 4.300881 / 2.077655 (2.223226) | 2.058853 / 1.504120 (0.554733) | 1.897910 / 1.541195 (0.356715) | 1.991723 / 1.468490 
(0.523233) | 0.699686 / 4.584777 (-3.885091) | 3.395004 / 3.745712 (-0.350708) | 1.841613 / 5.269862 (-3.428248) | 1.152347 / 4.565676 (-3.413330) | 0.082517 / 0.424275 (-0.341758) | 0.012323 / 0.007607 (0.004715) | 0.535812 / 0.226044 (0.309767) | 5.374103 / 2.268929 (3.105174) | 2.429662 / 55.444624 (-53.014962) | 2.097199 / 6.876477 (-4.779277) | 2.172625 / 2.142072 (0.030552) | 0.810156 / 4.805227 (-3.995071) | 0.151629 / 6.500664 (-6.349035) | 0.066528 / 0.075469 (-0.008941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220667 / 1.841788 (-0.621121) | 13.696976 / 8.074308 (5.622668) | 14.042916 / 10.191392 (3.851524) | 0.129626 / 0.680424 (-0.550798) | 0.016593 / 0.534201 (-0.517607) | 0.383747 / 0.579283 (-0.195536) | 0.386872 / 0.434364 (-0.047492) | 0.456524 / 0.540337 (-0.083813) | 0.545033 / 1.386936 (-0.841903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006361 / 0.011353 (-0.004992) | 0.004516 / 0.011008 (-0.006493) | 0.077155 / 0.038508 (0.038647) | 0.027239 / 0.023109 (0.004130) | 0.359892 / 0.275898 (0.083994) | 0.391994 / 0.323480 (0.068514) | 0.004950 / 0.007986 (-0.003036) | 0.003379 / 0.004328 (-0.000949) | 0.077057 / 0.004250 (0.072806) | 0.039562 / 0.037052 (0.002509) | 0.364244 / 0.258489 (0.105755) | 0.416033 / 0.293841 (0.122192) | 0.031049 / 0.128546 (-0.097497) | 0.011479 / 0.075646 (-0.064167) | 0.086479 / 0.419271 (-0.332793) | 0.039381 / 0.043533 (-0.004151) | 0.372143 / 0.255139 (0.117004) | 0.388569 / 0.283200 (0.105369) | 0.090954 / 0.141683 (-0.050728) | 1.540957 / 1.452155 (0.088802) | 1.596841 / 1.492716 (0.104125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221130 / 0.018006 (0.203123) | 0.403728 / 0.000490 (0.403238) | 0.003172 / 0.000200 (0.002972) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024963 / 0.037411 (-0.012449) | 0.101065 / 0.014526 (0.086539) | 0.110846 / 0.176557 (-0.065710) | 0.158578 / 0.737135 (-0.578557) | 0.112235 / 0.296338 (-0.184104) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457320 / 0.215209 (0.242111) | 4.548094 / 2.077655 (2.470439) | 2.175376 / 1.504120 (0.671256) | 1.964755 / 1.541195 (0.423561) | 2.008128 / 1.468490 (0.539638) | 0.702448 / 4.584777 (-3.882329) | 3.437595 / 3.745712 (-0.308117) | 3.009871 / 5.269862 (-2.259990) | 1.558181 / 4.565676 (-3.007496) | 0.082568 / 0.424275 (-0.341707) | 0.012371 / 0.007607 (0.004764) | 0.550688 / 0.226044 (0.324644) | 5.534210 / 2.268929 (3.265282) | 2.649605 / 55.444624 (-52.795020) | 2.317293 / 6.876477 (-4.559184) | 2.351525 / 2.142072 (0.209453) | 0.808971 / 4.805227 (-3.996256) | 0.152737 / 6.500664 (-6.347927) | 0.068416 / 0.075469 (-0.007053) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340219 / 1.841788 (-0.501569) | 13.903388 / 8.074308 (5.829080) | 13.063477 / 10.191392 (2.872085) | 0.130216 / 0.680424 (-0.550208) | 0.016522 / 0.534201 (-0.517679) | 0.398946 / 0.579283 (-0.180337) | 0.382450 / 0.434364 (-0.051914) | 0.491007 / 0.540337 (-0.049330) | 0.577747 / 1.386936 (-0.809189) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007812 / 0.011353 (-0.003541) | 0.005563 / 0.011008 (-0.005446) | 0.099372 / 0.038508 (0.060864) | 0.035629 / 0.023109 (0.012520) | 0.301457 / 0.275898 (0.025559) | 0.339136 / 0.323480 (0.015656) | 0.006152 / 0.007986 (-0.001834) | 0.005843 / 0.004328 (0.001515) | 0.075280 / 0.004250 (0.071030) | 0.052789 / 0.037052 (0.015736) | 0.301805 / 0.258489 (0.043316) | 0.347918 / 0.293841 (0.054078) | 0.036182 / 0.128546 (-0.092364) | 0.012655 / 0.075646 (-0.062991) | 0.334428 / 0.419271 (-0.084844) | 0.062746 / 0.043533 (0.019213) | 0.296932 / 0.255139 (0.041793) | 0.314115 / 0.283200 (0.030916) | 0.121291 / 0.141683 (-0.020392) | 1.453252 / 1.452155 (0.001097) | 1.564714 / 1.492716 (0.071997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243810 / 0.018006 (0.225804) | 0.547129 / 0.000490 (0.546640) | 0.004666 / 0.000200 (0.004466) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028214 / 0.037411 (-0.009197) | 0.108878 / 0.014526 (0.094352) | 0.122313 / 0.176557 (-0.054243) | 0.182412 / 0.737135 (-0.554723) | 0.127014 / 0.296338 (-0.169324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423946 / 0.215209 (0.208737) | 4.207112 / 2.077655 (2.129457) | 2.048658 / 1.504120 (0.544538) | 1.843593 / 1.541195 (0.302398) | 1.952426 / 1.468490 
(0.483936) | 0.712098 / 4.584777 (-3.872679) | 3.824971 / 3.745712 (0.079258) | 3.507141 / 5.269862 (-1.762721) | 1.868866 / 4.565676 (-2.696810) | 0.087895 / 0.424275 (-0.336380) | 0.012783 / 0.007607 (0.005176) | 0.524087 / 0.226044 (0.298042) | 5.246498 / 2.268929 (2.977570) | 2.495944 / 55.444624 (-52.948680) | 2.126779 / 6.876477 (-4.749698) | 2.315545 / 2.142072 (0.173472) | 0.859546 / 4.805227 (-3.945681) | 0.173457 / 6.500664 (-6.327208) | 0.067483 / 0.075469 (-0.007986) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.173851 / 1.841788 (-0.667937) | 15.091913 / 8.074308 (7.017605) | 14.640035 / 10.191392 (4.448643) | 0.168498 / 0.680424 (-0.511926) | 0.017513 / 0.534201 (-0.516688) | 0.425770 / 0.579283 (-0.153513) | 0.434248 / 0.434364 (-0.000116) | 0.504204 / 0.540337 (-0.036134) | 0.616885 / 1.386936 (-0.770051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007775 / 0.011353 (-0.003578) | 0.005153 / 0.011008 (-0.005855) | 0.075461 / 0.038508 (0.036953) | 0.034994 / 0.023109 (0.011885) | 0.372389 / 0.275898 (0.096491) | 0.397911 / 0.323480 (0.074431) | 0.006572 / 0.007986 (-0.001413) | 0.005549 / 0.004328 (0.001220) | 0.075101 / 0.004250 (0.070851) | 0.054014 / 0.037052 (0.016962) | 0.368964 / 0.258489 (0.110475) | 0.425353 / 0.293841 (0.131512) | 0.035546 / 0.128546 (-0.093001) | 0.012707 / 0.075646 (-0.062939) | 0.087418 / 0.419271 (-0.331853) | 0.046425 / 0.043533 (0.002893) | 0.363982 / 0.255139 (0.108843) | 0.376421 / 0.283200 (0.093221) | 0.105369 / 0.141683 (-0.036314) | 1.494408 / 1.452155 (0.042253) | 1.596783 / 1.492716 (0.104067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.258780 / 0.018006 (0.240773) | 0.533373 / 0.000490 (0.532883) | 0.000432 / 0.000200 (0.000232) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030687 / 0.037411 (-0.006725) | 0.110231 / 0.014526 (0.095705) | 0.123738 / 0.176557 (-0.052819) | 0.171999 / 0.737135 (-0.565137) | 0.127673 / 0.296338 (-0.168665) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448058 / 0.215209 (0.232849) | 4.459381 / 2.077655 (2.381726) | 2.234020 / 1.504120 (0.729900) | 2.038616 / 1.541195 (0.497421) | 2.123795 / 1.468490 (0.655305) | 0.702664 / 4.584777 (-3.882113) | 3.837133 / 3.745712 (0.091420) | 2.138574 / 5.269862 (-3.131287) | 1.375955 / 4.565676 (-3.189722) | 0.086996 / 0.424275 (-0.337280) | 0.012461 / 0.007607 (0.004854) | 0.557978 / 0.226044 (0.331934) | 5.648613 / 2.268929 (3.379685) | 2.777829 / 55.444624 (-52.666796) | 2.392424 / 6.876477 (-4.484052) | 2.482823 / 2.142072 (0.340750) | 0.851891 / 4.805227 (-3.953336) | 0.171335 / 6.500664 (-6.329329) | 0.065041 / 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319697 / 1.841788 (-0.522091) | 15.748688 / 8.074308 (7.674380) | 13.397042 / 10.191392 (3.205650) | 0.166424 / 0.680424 (-0.514000) | 0.017755 / 0.534201 (-0.516446) | 0.424989 / 0.579283 (-0.154294) | 0.424705 / 0.434364 (-0.009659) | 0.494190 / 0.540337 (-0.046147) | 0.588315 / 1.386936 (-0.798622) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n"
] | 2023-05-09T21:21:41 | 2023-05-15T07:36:12 | 2023-05-10T20:23:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5836",
"html_url": "https://github.com/huggingface/datasets/pull/5836",
"diff_url": "https://github.com/huggingface/datasets/pull/5836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5836.patch",
"merged_at": "2023-05-10T20:23:03"
} | Adds a custom decoding transform solution to the docs to fix #5782 (see the sketch after this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5836/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5836/timeline | null | null | true |
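As a companion to the record above (the PR adding a custom decoding transform to the docs), here is a minimal sketch of what such a transform can look like. It is an illustration, not the PR's actual docs example: the dataset name `beans` and the column name `image` are assumptions, and it relies on `Image(decode=False)` to keep the raw `{"path", "bytes"}` payload undecoded.

```python
# Hedged sketch of a custom decoding transform; dataset and column names are
# illustrative assumptions, not taken from the PR itself.
import io

from PIL import Image as PILImage

from datasets import Image, load_dataset

ds = load_dataset("beans", split="train")
ds = ds.cast_column("image", Image(decode=False))  # keep {"path", "bytes"} dicts

def decode_transform(batch):
    # Decode each image ourselves instead of using the built-in Image decoding.
    batch["image"] = [
        PILImage.open(io.BytesIO(img["bytes"])) if img["bytes"] is not None
        else PILImage.open(img["path"])
        for img in batch["image"]
    ]
    return batch

ds.set_transform(decode_transform)
print(ds[0]["image"])  # a PIL image, decoded by the custom transform
```

`set_transform` applies the function on the fly at access time, so decoding only happens for the examples that are actually read.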
https://api.github.com/repos/huggingface/datasets/issues/5835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5835/comments | https://api.github.com/repos/huggingface/datasets/issues/5835/events | https://github.com/huggingface/datasets/pull/5835 | 1,702,522,620 | PR_kwDODunzps5QHquR | 5,835 | Always set nullable fields in the writer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004606 / 0.011008 (-0.006402) | 0.098870 / 0.038508 (0.060362) | 0.028201 / 0.023109 (0.005092) | 0.304396 / 0.275898 (0.028498) | 0.339804 / 0.323480 (0.016324) | 0.005011 / 0.007986 (-0.002974) | 0.003530 / 0.004328 (-0.000799) | 0.075223 / 0.004250 (0.070973) | 0.037922 / 0.037052 (0.000870) | 0.310273 / 0.258489 (0.051784) | 0.348324 / 0.293841 (0.054483) | 0.030181 / 0.128546 (-0.098365) | 0.011584 / 0.075646 (-0.064062) | 0.322637 / 0.419271 (-0.096635) | 0.043119 / 0.043533 (-0.000414) | 0.314514 / 0.255139 (0.059375) | 0.334384 / 0.283200 (0.051185) | 0.092551 / 0.141683 (-0.049132) | 1.496694 / 1.452155 (0.044539) | 1.555426 / 1.492716 (0.062710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205078 / 0.018006 (0.187072) | 0.399200 / 0.000490 (0.398710) | 0.004881 / 0.000200 (0.004681) | 0.000200 / 0.000054 (0.000146) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025042 / 0.037411 (-0.012369) | 0.101501 / 0.014526 (0.086975) | 0.107430 / 0.176557 (-0.069127) | 0.170107 / 0.737135 (-0.567028) | 0.111253 / 0.296338 (-0.185086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460358 / 0.215209 (0.245149) | 4.592037 / 2.077655 (2.514383) | 2.222612 / 1.504120 (0.718493) | 2.022804 / 1.541195 (0.481610) | 2.040824 / 1.468490 
(0.572334) | 0.700485 / 4.584777 (-3.884292) | 3.427847 / 3.745712 (-0.317866) | 2.836916 / 5.269862 (-2.432946) | 1.505055 / 4.565676 (-3.060621) | 0.083206 / 0.424275 (-0.341069) | 0.046492 / 0.007607 (0.038885) | 0.555562 / 0.226044 (0.329518) | 5.563574 / 2.268929 (3.294645) | 2.635273 / 55.444624 (-52.809351) | 2.299377 / 6.876477 (-4.577100) | 2.394512 / 2.142072 (0.252440) | 0.809541 / 4.805227 (-3.995686) | 0.151814 / 6.500664 (-6.348850) | 0.067241 / 0.075469 (-0.008228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188396 / 1.841788 (-0.653392) | 13.714596 / 8.074308 (5.640288) | 14.076906 / 10.191392 (3.885514) | 0.143447 / 0.680424 (-0.536977) | 0.016514 / 0.534201 (-0.517687) | 0.383075 / 0.579283 (-0.196209) | 0.386997 / 0.434364 (-0.047367) | 0.441941 / 0.540337 (-0.098396) | 0.522145 / 1.386936 (-0.864791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006266 / 0.011353 (-0.005086) | 0.004562 / 0.011008 (-0.006446) | 0.077472 / 0.038508 (0.038964) | 0.027596 / 0.023109 (0.004486) | 0.400498 / 0.275898 (0.124600) | 0.406728 / 0.323480 (0.083248) | 0.004745 / 0.007986 (-0.003241) | 0.003375 / 0.004328 (-0.000954) | 0.076645 / 0.004250 (0.072394) | 0.037756 / 0.037052 (0.000703) | 0.415183 / 0.258489 (0.156694) | 0.413758 / 0.293841 (0.119917) | 0.030624 / 0.128546 (-0.097922) | 0.011525 / 0.075646 (-0.064121) | 0.086033 / 0.419271 (-0.333238) | 0.039307 / 0.043533 (-0.004226) | 0.418192 / 0.255139 (0.163053) | 0.403152 / 0.283200 (0.119952) | 0.094141 / 0.141683 (-0.047542) | 1.459012 / 1.452155 (0.006857) | 1.546493 / 1.492716 (0.053777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239494 / 0.018006 (0.221488) | 0.420918 / 0.000490 (0.420428) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024525 / 0.037411 (-0.012886) | 0.099793 / 0.014526 (0.085267) | 0.105888 / 0.176557 (-0.070669) | 0.155912 / 0.737135 (-0.581223) | 0.109937 / 0.296338 (-0.186401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470108 / 0.215209 (0.254899) | 4.696390 / 2.077655 (2.618735) | 2.467841 / 1.504120 (0.963721) | 2.275012 / 1.541195 (0.733818) | 2.430736 / 1.468490 (0.962245) | 0.700442 / 4.584777 (-3.884335) | 3.458451 / 3.745712 (-0.287261) | 1.921120 / 5.269862 (-3.348742) | 1.183292 / 4.565676 (-3.382384) | 0.083985 / 0.424275 (-0.340290) | 0.012510 / 0.007607 (0.004903) | 0.589066 / 0.226044 (0.363022) | 5.896070 / 2.268929 (3.627141) | 2.935379 / 55.444624 (-52.509245) | 2.599524 / 6.876477 (-4.276953) | 2.663426 / 2.142072 (0.521354) | 0.812096 / 4.805227 (-3.993131) | 0.152559 / 6.500664 (-6.348105) | 0.066906 / 0.075469 (-0.008563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333341 / 1.841788 (-0.508446) | 14.441667 / 8.074308 (6.367359) | 14.754069 / 10.191392 (4.562677) | 0.155707 / 0.680424 (-0.524716) | 0.016983 / 0.534201 (-0.517218) | 0.389386 / 0.579283 (-0.189897) | 0.394106 / 0.434364 (-0.040258) | 0.447355 / 0.540337 (-0.092982) | 0.533142 / 1.386936 (-0.853794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#99ee4467ce77f8f718159a535e237dd8790b5bed \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007801 / 0.011353 (-0.003552) | 0.004884 / 0.011008 (-0.006124) | 0.114754 / 0.038508 (0.076245) | 0.040427 / 0.023109 (0.017318) | 0.402064 / 0.275898 (0.126166) | 0.428830 / 0.323480 (0.105350) | 0.006429 / 0.007986 (-0.001556) | 0.004394 / 0.004328 (0.000066) | 0.087681 / 0.004250 (0.083431) | 0.053684 / 0.037052 (0.016632) | 0.399967 / 0.258489 (0.141478) | 0.445298 / 0.293841 (0.151457) | 0.033194 / 0.128546 (-0.095352) | 0.010288 / 0.075646 (-0.065359) | 0.390719 / 0.419271 (-0.028552) | 0.059311 / 0.043533 (0.015778) | 0.393651 / 0.255139 (0.138512) | 0.418395 / 0.283200 (0.135196) | 0.121494 / 0.141683 (-0.020189) | 1.735470 / 1.452155 (0.283315) | 1.820485 / 1.492716 (0.327769) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012887 / 0.018006 (-0.005119) | 0.491652 / 0.000490 (0.491162) | 0.005481 / 0.000200 (0.005281) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030931 / 0.037411 (-0.006480) | 0.125212 / 0.014526 (0.110686) | 0.136004 / 0.176557 (-0.040552) | 0.201686 / 0.737135 (-0.535449) | 0.140181 / 0.296338 (-0.156157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475003 / 0.215209 (0.259794) | 4.743918 / 2.077655 (2.666263) | 2.149422 / 1.504120 (0.645302) | 1.925016 / 1.541195 (0.383821) | 2.061441 / 1.468490 
(0.592951) | 0.619845 / 4.584777 (-3.964932) | 4.534691 / 3.745712 (0.788979) | 2.248198 / 5.269862 (-3.021664) | 1.409868 / 4.565676 (-3.155808) | 0.080265 / 0.424275 (-0.344010) | 0.014455 / 0.007607 (0.006848) | 0.597810 / 0.226044 (0.371765) | 5.845492 / 2.268929 (3.576564) | 2.729139 / 55.444624 (-52.715486) | 2.313879 / 6.876477 (-4.562598) | 2.418763 / 2.142072 (0.276690) | 0.748687 / 4.805227 (-4.056540) | 0.165278 / 6.500664 (-6.335387) | 0.076848 / 0.075469 (0.001379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416349 / 1.841788 (-0.425439) | 17.440903 / 8.074308 (9.366595) | 17.025733 / 10.191392 (6.834341) | 0.167428 / 0.680424 (-0.512995) | 0.020484 / 0.534201 (-0.513717) | 0.470273 / 0.579283 (-0.109010) | 0.494380 / 0.434364 (0.060016) | 0.566131 / 0.540337 (0.025794) | 0.690444 / 1.386936 (-0.696492) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007695 / 0.011353 (-0.003657) | 0.005551 / 0.011008 (-0.005457) | 0.087812 / 0.038508 (0.049304) | 0.039107 / 0.023109 (0.015998) | 0.436461 / 0.275898 (0.160563) | 0.465116 / 0.323480 (0.141636) | 0.006590 / 0.007986 (-0.001396) | 0.004672 / 0.004328 (0.000343) | 0.087109 / 0.004250 (0.082858) | 0.054227 / 0.037052 (0.017175) | 0.442660 / 0.258489 (0.184171) | 0.484296 / 0.293841 (0.190455) | 0.033308 / 0.128546 (-0.095238) | 0.010780 / 0.075646 (-0.064866) | 0.095255 / 0.419271 (-0.324016) | 0.054399 / 0.043533 (0.010866) | 0.431734 / 0.255139 (0.176595) | 0.453583 / 0.283200 (0.170383) | 0.116067 / 0.141683 (-0.025616) | 1.780701 / 1.452155 (0.328546) | 1.851077 / 1.492716 (0.358360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228000 / 0.018006 (0.209994) | 0.485733 / 0.000490 (0.485243) | 0.003955 / 0.000200 (0.003755) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033974 / 0.037411 (-0.003437) | 0.134504 / 0.014526 (0.119978) | 0.144421 / 0.176557 (-0.032135) | 0.202171 / 0.737135 (-0.534964) | 0.152015 / 0.296338 (-0.144323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520462 / 0.215209 (0.305253) | 5.233339 / 2.077655 (3.155684) | 2.575013 / 1.504120 (1.070893) | 2.384119 / 1.541195 (0.842924) | 2.403856 / 1.468490 (0.935366) | 0.618656 / 4.584777 (-3.966121) | 4.663582 / 3.745712 (0.917870) | 3.738594 / 5.269862 (-1.531268) | 1.794903 / 4.565676 (-2.770773) | 0.077903 / 0.424275 (-0.346372) | 0.014681 / 0.007607 (0.007074) | 0.648615 / 0.226044 (0.422570) | 6.503721 / 2.268929 (4.234792) | 3.326239 / 55.444624 (-52.118386) | 2.989791 / 6.876477 (-3.886685) | 2.995479 / 2.142072 (0.853407) | 0.765483 / 4.805227 (-4.039744) | 0.169783 / 6.500664 (-6.330882) | 0.077533 / 0.075469 (0.002064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.518736 / 1.841788 (-0.323051) | 17.989119 / 8.074308 (9.914811) | 15.484365 / 10.191392 (5.292973) | 0.168507 / 0.680424 (-0.511917) | 0.020289 / 0.534201 (-0.513912) | 0.467491 / 0.579283 (-0.111793) | 0.501714 / 0.434364 (0.067350) | 0.553418 / 0.540337 (0.013081) | 0.662199 / 1.386936 (-0.724737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007044 / 0.011353 (-0.004309) | 0.004750 / 0.011008 (-0.006258) | 0.096694 / 0.038508 (0.058186) | 0.035682 / 0.023109 (0.012573) | 0.300613 / 0.275898 (0.024715) | 0.334831 / 0.323480 (0.011351) | 0.006428 / 0.007986 (-0.001558) | 0.004456 / 0.004328 (0.000128) | 0.075060 / 0.004250 (0.070810) | 0.053166 / 0.037052 (0.016114) | 0.299601 / 0.258489 (0.041112) | 0.359521 / 0.293841 (0.065680) | 0.028072 / 0.128546 (-0.100474) | 0.009216 / 0.075646 (-0.066430) | 0.328895 / 0.419271 (-0.090377) | 0.050881 / 0.043533 (0.007349) | 0.298265 / 0.255139 (0.043126) | 0.318095 / 0.283200 (0.034896) | 0.116046 / 0.141683 (-0.025637) | 1.491312 / 1.452155 (0.039157) | 1.556053 / 1.492716 (0.063337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014248 / 0.018006 (-0.003758) | 0.551455 / 0.000490 (0.550965) | 0.006096 / 0.000200 (0.005897) | 0.000145 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030598 / 0.037411 (-0.006813) | 0.109549 / 0.014526 (0.095023) | 0.123207 / 0.176557 (-0.053350) | 0.181940 / 0.737135 (-0.555195) | 0.128965 / 0.296338 (-0.167374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404552 / 0.215209 (0.189343) | 4.030674 / 2.077655 (1.953020) | 1.841819 / 1.504120 (0.337699) | 1.650055 / 1.541195 (0.108860) | 1.763208 / 1.468490 
(0.294718) | 0.532715 / 4.584777 (-4.052062) | 3.774810 / 3.745712 (0.029098) | 3.221927 / 5.269862 (-2.047934) | 1.607974 / 4.565676 (-2.957702) | 0.067160 / 0.424275 (-0.357116) | 0.012479 / 0.007607 (0.004872) | 0.498801 / 0.226044 (0.272757) | 4.980567 / 2.268929 (2.711638) | 2.356017 / 55.444624 (-53.088608) | 2.018975 / 6.876477 (-4.857502) | 2.218343 / 2.142072 (0.076270) | 0.645714 / 4.805227 (-4.159514) | 0.145470 / 6.500664 (-6.355195) | 0.065666 / 0.075469 (-0.009803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205756 / 1.841788 (-0.636031) | 15.682779 / 8.074308 (7.608470) | 14.748987 / 10.191392 (4.557595) | 0.167105 / 0.680424 (-0.513319) | 0.017554 / 0.534201 (-0.516647) | 0.393924 / 0.579283 (-0.185359) | 0.432659 / 0.434364 (-0.001705) | 0.502033 / 0.540337 (-0.038304) | 0.602244 / 1.386936 (-0.784692) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007077 / 0.011353 (-0.004276) | 0.004911 / 0.011008 (-0.006097) | 0.075120 / 0.038508 (0.036612) | 0.035460 / 0.023109 (0.012351) | 0.362569 / 0.275898 (0.086671) | 0.398995 / 0.323480 (0.075515) | 0.006587 / 0.007986 (-0.001398) | 0.004571 / 0.004328 (0.000242) | 0.074647 / 0.004250 (0.070397) | 0.057331 / 0.037052 (0.020279) | 0.365123 / 0.258489 (0.106634) | 0.408617 / 0.293841 (0.114776) | 0.028911 / 0.128546 (-0.099635) | 0.009533 / 0.075646 (-0.066113) | 0.081566 / 0.419271 (-0.337705) | 0.048841 / 0.043533 (0.005308) | 0.367245 / 0.255139 (0.112106) | 0.375975 / 0.283200 (0.092776) | 0.123211 / 0.141683 (-0.018472) | 1.471588 / 1.452155 (0.019433) | 1.569342 / 1.492716 (0.076625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328443 / 0.018006 (0.310436) | 0.541402 / 0.000490 (0.540912) | 0.000440 / 0.000200 (0.000240) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030772 / 0.037411 (-0.006639) | 0.115833 / 0.014526 (0.101307) | 0.127837 / 0.176557 (-0.048719) | 0.180897 / 0.737135 (-0.556238) | 0.132458 / 0.296338 (-0.163881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445979 / 0.215209 (0.230770) | 4.453101 / 2.077655 (2.375447) | 2.276625 / 1.504120 (0.772505) | 2.102167 / 1.541195 (0.560972) | 2.181583 / 1.468490 (0.713093) | 0.525069 / 4.584777 (-4.059708) | 3.803446 / 3.745712 (0.057734) | 1.954173 / 5.269862 (-3.315688) | 1.088734 / 4.565676 (-3.476942) | 0.066020 / 0.424275 (-0.358255) | 0.012158 / 0.007607 (0.004551) | 0.546828 / 0.226044 (0.320783) | 5.454060 / 2.268929 (3.185132) | 2.756154 / 55.444624 (-52.688470) | 2.476501 / 6.876477 (-4.399976) | 2.525875 / 2.142072 (0.383803) | 0.647515 / 4.805227 (-4.157712) | 0.144511 / 6.500664 (-6.356153) | 0.067060 / 0.075469 (-0.008409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306456 / 1.841788 (-0.535332) | 15.822623 / 8.074308 (7.748315) | 14.929114 / 10.191392 (4.737721) | 0.168650 / 0.680424 (-0.511773) | 0.018043 / 0.534201 (-0.516158) | 0.396712 / 0.579283 (-0.182572) | 0.425800 / 0.434364 (-0.008564) | 0.466452 / 0.540337 (-0.073885) | 0.564370 / 1.386936 (-0.822566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n"
] | 2023-05-09T18:16:59 | 2023-05-23T16:10:29 | 2023-05-19T13:04:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5835",
"html_url": "https://github.com/huggingface/datasets/pull/5835",
"diff_url": "https://github.com/huggingface/datasets/pull/5835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5835.patch",
"merged_at": "2023-05-19T13:04:30"
} | This fixes loading of e.g. parquet data with non-nullable fields.
Indeed `datasets.Features` doesn't support non-nullable fields, which can make data non-concatenable due to an Arrow schema mismatch (see the sketch after this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5835/timeline | null | null | true |
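To make the schema-mismatch argument in the record above concrete, here is a minimal sketch using only `pyarrow`; the column name `a` and the values are made up for illustration. Two schemas that differ only in nullability are not considered equal, so tables carrying them cannot be concatenated directly, which is why the writer now always sets fields to nullable.

```python
# Minimal illustration of the nullability mismatch described above.
# Assumption: the column name "a" and the data are invented for this sketch.
import pyarrow as pa

nullable = pa.schema([pa.field("a", pa.int64(), nullable=True)])
non_nullable = pa.schema([pa.field("a", pa.int64(), nullable=False)])
print(nullable.equals(non_nullable))  # False: nullability is part of the schema

t1 = pa.table({"a": [1, 2]}, schema=nullable)
t2 = pa.table({"a": [3, 4]}, schema=non_nullable)
try:
    pa.concat_tables([t1, t2])  # expected to raise: schemas are different
except pa.ArrowInvalid as err:
    print("concat failed:", err)
```

Normalizing every field to nullable at write time sidesteps this mismatch entirely.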
https://api.github.com/repos/huggingface/datasets/issues/5834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5834/comments | https://api.github.com/repos/huggingface/datasets/issues/5834/events | https://github.com/huggingface/datasets/issues/5834 | 1,702,448,892 | I_kwDODunzps5leU78 | 5,834 | Is uint8 supported? | {
"login": "ryokan0123",
"id": 17979572,
"node_id": "MDQ6VXNlcjE3OTc5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17979572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryokan0123",
"html_url": "https://github.com/ryokan0123",
"followers_url": "https://api.github.com/users/ryokan0123/followers",
"following_url": "https://api.github.com/users/ryokan0123/following{/other_user}",
"gists_url": "https://api.github.com/users/ryokan0123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryokan0123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryokan0123/subscriptions",
"organizations_url": "https://api.github.com/users/ryokan0123/orgs",
"repos_url": "https://api.github.com/users/ryokan0123/repos",
"events_url": "https://api.github.com/users/ryokan0123/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryokan0123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! The numpy formatting detaults to int64 and float32 - but you can use uint8 using\r\n```python\r\nds = ds.with_format(\"numpy\", dtype=np.uint8)\r\n```",
"Related to https://github.com/huggingface/datasets/issues/5517.",
"Thank you!\r\nBy setting `ds.with_format(\"numpy\", dtype=np.uint8)`, the dataset returns the data in `uint8`.\r\n\r\nHowever, `with_format` and `set_format` seem to cast the data on-the-fly.\r\nI want to reduce the dataset size by using `uint8` instead of `int64` and I observe no difference between using `int64` and `uint8` for the vector.\r\nIs there any way to actually store the data in `uint8` and save the disk space and the downloading time when loaded from the hub?\r\n",
"If the feature type is `Value(\"uint8\")` then it's written an uint8 on disk using the uint8 Arrow dtype.\r\n\r\ne.g.\r\n```python\r\nds = Dataset.from_dict({\"a\": range(10)}, features=Features({\"a\": Value(\"uint8\")}))\r\nds.data.nbytes\r\n# 10\r\n```",
"Oh, I understand now.\r\nThe data was stored in `uint8` from the beginning (when the dataset returns `int64`).\r\n\r\nThank you for your time!\r\nMy question is fully resolved."
] | 2023-05-09T17:31:13 | 2023-05-13T05:04:21 | 2023-05-13T05:04:21 | NONE | null | null | null | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
### Expected behavior
Expected: `uint8`
Actual: `int64`
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5834/timeline | null | completed | false |
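Putting the two answers from the thread above together, here is a consolidated sketch (only `datasets` and `numpy` are assumed): the feature type controls the dtype stored on disk, while the format controls the dtype returned at access time.

```python
# Sketch of the two knobs discussed above: storage dtype vs. formatting dtype.
import numpy as np

from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict(
    {"vector": [[0, 1, 2]]},
    features=Features({"vector": Sequence(Value("uint8"))}),
)
print(ds.data.nbytes)  # small: the Arrow data is stored as uint8

ds = ds.with_format("numpy")
print(ds[0]["vector"].dtype)  # int64: numpy formatting defaults to int64

ds = ds.with_format("numpy", dtype=np.uint8)
print(ds[0]["vector"].dtype)  # uint8
```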
https://api.github.com/repos/huggingface/datasets/issues/5833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5833/comments | https://api.github.com/repos/huggingface/datasets/issues/5833/events | https://github.com/huggingface/datasets/issues/5833 | 1,702,280,682 | I_kwDODunzps5ldr3q | 5,833 | Unable to push dataset - `create_pr` problem | {
"login": "agombert",
"id": 17645711,
"node_id": "MDQ6VXNlcjE3NjQ1NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/17645711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agombert",
"html_url": "https://github.com/agombert",
"followers_url": "https://api.github.com/users/agombert/followers",
"following_url": "https://api.github.com/users/agombert/following{/other_user}",
"gists_url": "https://api.github.com/users/agombert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agombert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agombert/subscriptions",
"organizations_url": "https://api.github.com/users/agombert/orgs",
"repos_url": "https://api.github.com/users/agombert/repos",
"events_url": "https://api.github.com/users/agombert/events{/privacy}",
"received_events_url": "https://api.github.com/users/agombert/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @agombert.\r\n\r\nIn this case, I think the root issue is authentication: before pushing to Hub, you should authenticate. See our docs: https://huggingface.co/docs/datasets/upload_dataset#upload-with-python\r\n> 2. To upload a dataset on the Hub in Python, you need to log in to your Hugging Face account:\r\n ```\r\n huggingface-cli login\r\n ```",
"Hey @albertvillanova well I actually did :D \r\n\r\n<img width=\"1079\" alt=\"Capture dβeΜcran 2023-05-09 aΜ 18 02 58\" src=\"https://github.com/huggingface/datasets/assets/17645711/e091aa20-06b1-4dd3-bfdb-35e832c66f8d\">\r\n",
"That is weird that you get a Forbidden error if you are properly authenticated...\r\n\r\nToday we had a big outage issue affecting the Hugging Face Hub. Could you please retry to push_to_hub your dataset? Maybe that was the cause...",
"Yes I've just tried again and same error 403 :/",
"Login successful but also got this error \"Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request\"",
"Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.",
"> Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.\r\n\r\nI generate a token with write role. It works! thank you so much.",
"@dmitrijsk amazing thanks so much ! \r\nThe error should be clearer when the token is read-only β I wasted a lot of time there.."
] | 2023-05-09T15:32:55 | 2023-07-20T17:17:00 | null | NONE | null | null | null | ### Describe the bug
I can't upload the dataset I manually created locally (an image dataset) to the Hub. I have a problem when using the `.push_to_hub` method: the request fails asking for a `create_pr` attribute, which is not compatible.
### Steps to reproduce the bug
Here is what I have:
```python
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
```
Output:
```python
Pushing split train to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/3 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 100%|████████████████████████████████████████| 3/3 [00:00<00:00, 12.70ba/s]
Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:01<?, ?it/s]
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:259, in hf_raise_for_status(response, endpoint_name)
258 try:
--> 259 response.raise_for_status()
260 except HTTPError as e:
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
Cell In[7], line 1
----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/dataset_dict.py:1583, in DatasetDict.push_to_hub(self, repo_id, private, token, branch, max_shard_size, num_shards, embed_external_files)
1581 logger.warning(f"Pushing split {split} to the Hub.")
1582 # The split=key needs to be removed before merging
-> 1583 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(
1584 repo_id,
1585 split=split,
1586 private=private,
1587 token=token,
1588 branch=branch,
1589 max_shard_size=max_shard_size,
1590 num_shards=num_shards.get(split),
1591 embed_external_files=embed_external_files,
1592 )
1593 total_uploaded_size += uploaded_size
1594 total_dataset_nbytes += dataset_nbytes
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/arrow_dataset.py:5275, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, num_shards, embed_external_files)
5273 shard.to_parquet(buffer)
5274 uploaded_size += buffer.tell()
-> 5275 _retry(
5276 api.upload_file,
5277 func_kwargs={
5278 "path_or_fileobj": buffer.getvalue(),
5279 "path_in_repo": shard_path_in_repo,
5280 "repo_id": repo_id,
5281 "token": token,
5282 "repo_type": "dataset",
5283 "revision": branch,
5284 },
5285 exceptions=HTTPError,
5286 status_codes=[504],
5287 base_wait_time=2.0,
5288 max_retries=5,
5289 max_wait_time=20.0,
5290 )
5291 shards_path_in_repo.append(shard_path_in_repo)
5293 # Cleanup to remove unused files
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:285, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
283 except exceptions as err:
284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
--> 285 raise err
286 else:
287 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:282, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
280 while True:
281 try:
--> 282 return func(*func_args, **func_kwargs)
283 except exceptions as err:
284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2998, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, commit_message, commit_description, create_pr, parent_commit)
2990 commit_message = (
2991 commit_message if commit_message is not None else f"Upload {path_in_repo} with huggingface_hub"
2992 )
2993 operation = CommitOperationAdd(
2994 path_or_fileobj=path_or_fileobj,
2995 path_in_repo=path_in_repo,
2996 )
-> 2998 commit_info = self.create_commit(
2999 repo_id=repo_id,
3000 repo_type=repo_type,
3001 operations=[operation],
3002 commit_message=commit_message,
3003 commit_description=commit_description,
3004 token=token,
3005 revision=revision,
3006 create_pr=create_pr,
3007 parent_commit=parent_commit,
3008 )
3010 if commit_info.pr_url is not None:
3011 revision = quote(_parse_revision_from_pr_url(commit_info.pr_url), safe="")
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2548, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit)
2546 try:
2547 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)
-> 2548 hf_raise_for_status(commit_resp, endpoint_name="commit")
2549 except RepositoryNotFoundError as e:
2550 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
297 raise BadRequestError(message, response=response) from e
299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main (Request ID: Root=1-645a66bf-255ad91602a6404e6cb70fba)
Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request
```
And then when I do
```python
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1)
```
I get
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[8], line 1
----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1)
TypeError: push_to_hub() got an unexpected keyword argument 'create_pr'
```
### Expected behavior
I would like to have the dataset uploaded [here](https://huggingface.co/datasets/agomberto/FrenchCensus-handwritten-texts).
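For reference, a minimal sketch of the fix that later surfaced in the comments (the local path and the `imagefolder` loader are my assumptions about the setup): the 403 disappears once the session is authenticated with a token that has the `write` role rather than `read`.

```python
from huggingface_hub import login
from datasets import load_dataset

login(token="hf_...")  # must be a token created with the "write" role, not "read"

# Hypothetical reconstruction of the locally built image dataset; adjust to your setup.
dataset = load_dataset("imagefolder", data_dir="path/to/local/images")
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
```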
### Environment info
```bash
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.5.3
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5833/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5832/comments | https://api.github.com/repos/huggingface/datasets/issues/5832/events | https://github.com/huggingface/datasets/issues/5832 | 1,702,135,336 | I_kwDODunzps5ldIYo | 5,832 | 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased | {
"login": "varungupta31",
"id": 51288316,
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varungupta31",
"html_url": "https://github.com/varungupta31",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"moved to https://github.com/huggingface/transformers/issues/23233"
] | 2023-05-09T14:14:59 | 2023-05-09T14:25:59 | 2023-05-09T14:25:59 | NONE | null | null | null | ### Describe the bug
Running the [Bert-Large-Cased](https://huggingface.co/bert-large-cased) model causes an `HTTPError` with the following traceback:
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
1647 fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
1649 )
1650 additional_files_names = {
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
3406 """
3407 # Inspect all files from the repo/folder.
-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
3409 tokenizer_files_map = {}
3410 for file_name in all_files:
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
1685 token = None
1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687 path_or_repo, revision=revision, token=token
1688 )
1689 return [f.rfilename for f in model_info.siblings]
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
246 )
247 r = requests.get(path, headers=headers)
--> 248 r.raise_for_status()
249 d = r.json()
250 return ModelInfo(**d)
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
951
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
955 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```
I have also tried running in offline mode, as [discussed here](https://huggingface.co/docs/transformers/installation#offline-mode)
```
HF_DATASETS_OFFLINE=1
TRANSFORMERS_OFFLINE=1
```
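A minimal sketch of applying these flags from Python (assumption: they must be set before `transformers` is imported, and the model files must already be in the local cache for offline mode to succeed):

```python
import os

# Offline flags are read at import time, so set them first.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import BertTokenizer

# Only works offline if 'bert-large-cased' was downloaded previously.
tokenizer = BertTokenizer.from_pretrained("bert-large-cased")
```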
### Steps to reproduce the bug
1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`
### Expected behavior
The tokenizer should load without the HTTP error.
### Environment info
| # Name | Version | Build | Channel | |
|--------------------|------------|-----------------------------|---------|---|
| _libgcc_mutex | 0.1 | main | | |
| _openmp_mutex | 4.5 | 1_gnu | | |
| _pytorch_select | 0.1 | cpu_0 | | |
| appdirs | 1.4.4 | pypi_0 | pypi | |
| backcall | 0.2.0 | pypi_0 | pypi | |
| blas | 1.0 | mkl | | |
| bzip2 | 1.0.8 | h7b6447c_0 | | |
| ca-certificates | 2021.7.5 | h06a4308_1 | | |
| certifi | 2021.5.30 | py37h06a4308_0 | | |
| cffi | 1.14.6 | py37h400218f_0 | | |
| charset-normalizer | 2.0.3 | pypi_0 | pypi | |
| click | 8.0.1 | pypi_0 | pypi | |
| colorama | 0.4.4 | pypi_0 | pypi | |
| cudatoolkit | 11.1.74 | h6bb024c_0 | nvidia | |
| cycler | 0.11.0 | pypi_0 | pypi | |
| decorator | 5.0.9 | pypi_0 | pypi | |
| docker-pycreds | 0.4.0 | pypi_0 | pypi | |
| docopt | 0.6.2 | pypi_0 | pypi | |
| dominate | 2.6.0 | pypi_0 | pypi | |
| ffmpeg | 4.3 | hf484d3e_0 | pytorch | |
| filelock | 3.0.12 | pypi_0 | pypi | |
| fonttools | 4.38.0 | pypi_0 | pypi | |
| freetype | 2.10.4 | h5ab3b9f_0 | | |
| gitdb | 4.0.7 | pypi_0 | pypi | |
| gitpython | 3.1.18 | pypi_0 | pypi | |
| gmp | 6.2.1 | h2531618_2 | | |
| gnutls | 3.6.15 | he1e5248_0 | | |
| huggingface-hub | 0.0.12 | pypi_0 | pypi | |
| humanize | 3.10.0 | pypi_0 | pypi | |
| idna | 3.2 | pypi_0 | pypi | |
| importlib-metadata | 4.6.1 | pypi_0 | pypi | |
| intel-openmp | 2019.4 | 243 | | |
| ipdb | 0.13.9 | pypi_0 | pypi | |
| ipython | 7.25.0 | pypi_0 | pypi | |
| ipython-genutils | 0.2.0 | pypi_0 | pypi | |
| jedi | 0.18.0 | pypi_0 | pypi | |
| joblib | 1.0.1 | pypi_0 | pypi | |
| jpeg | 9b | h024ee3a_2 | | |
| jsonpickle | 1.5.2 | pypi_0 | pypi | |
| kiwisolver | 1.4.4 | pypi_0 | pypi | |
| lame | 3.100 | h7b6447c_0 | | |
| lcms2 | 2.12 | h3be6417_0 | | |
| ld_impl_linux-64 | 2.35.1 | h7274673_9 | | |
| libffi | 3.3 | he6710b0_2 | | |
| libgcc-ng | 9.3.0 | h5101ec6_17 | | |
| libgomp | 9.3.0 | h5101ec6_17 | | |
| libiconv | 1.15 | h63c8f33_5 | | |
| libidn2 | 2.3.2 | h7f8727e_0 | | |
| libmklml | 2019.0.5 | 0 | | |
| libpng | 1.6.37 | hbc83047_0 | | |
| libstdcxx-ng | 9.3.0 | hd4cf53a_17 | | |
| libtasn1 | 4.16.0 | h27cfd23_0 | | |
| libtiff | 4.2.0 | h85742a9_0 | | |
| libunistring | 0.9.10 | h27cfd23_0 | | |
| libuv | 1.40.0 | h7b6447c_0 | | |
| libwebp-base | 1.2.0 | h27cfd23_0 | | |
| lz4-c | 1.9.3 | h2531618_0 | | |
| matplotlib | 3.5.3 | pypi_0 | pypi | |
| matplotlib-inline | 0.1.2 | pypi_0 | pypi | |
| mergedeep | 1.3.4 | pypi_0 | pypi | |
| mkl | 2020.2 | 256 | | |
| mkl-service | 2.3.0 | py37he8ac12f_0 | | |
| mkl_fft | 1.3.0 | py37h54f3939_0 | | |
| mkl_random | 1.1.1 | py37h0573a6f_0 | | |
| msgpack | 1.0.2 | pypi_0 | pypi | |
| munch | 2.5.0 | pypi_0 | pypi | |
| ncurses | 6.2 | he6710b0_1 | | |
| nettle | 3.7.3 | hbbd107a_1 | | |
| ninja | 1.10.2 | hff7bd54_1 | | |
| nltk | 3.8.1 | pypi_0 | pypi | |
| numpy | 1.19.2 | py37h54aff64_0 | | |
| numpy-base | 1.19.2 | py37hfa32c7d_0 | | |
| olefile | 0.46 | py37_0 | | |
| openh264 | 2.1.0 | hd408876_0 | | |
| openjpeg | 2.3.0 | h05c96fa_1 | | |
| openssl | 1.1.1k | h27cfd23_0 | | |
| packaging | 21.0 | pypi_0 | pypi | |
| pandas | 1.3.1 | pypi_0 | pypi | |
| parso | 0.8.2 | pypi_0 | pypi | |
| pathtools | 0.1.2 | pypi_0 | pypi | |
| pexpect | 4.8.0 | pypi_0 | pypi | |
| pickleshare | 0.7.5 | pypi_0 | pypi | |
| pillow | 8.3.1 | py37h2c7a002_0 | | |
| pip | 21.1.3 | py37h06a4308_0 | | |
| prompt-toolkit | 3.0.19 | pypi_0 | pypi | |
| protobuf | 4.21.12 | pypi_0 | pypi | |
| psutil | 5.8.0 | pypi_0 | pypi | |
| ptyprocess | 0.7.0 | pypi_0 | pypi | |
| py-cpuinfo | 8.0.0 | pypi_0 | pypi | |
| pycparser | 2.20 | py_2 | | |
| pygments | 2.9.0 | pypi_0 | pypi | |
| pyparsing | 2.4.7 | pypi_0 | pypi | |
| python | 3.7.10 | h12debd9_4 | | |
| python-dateutil | 2.8.2 | pypi_0 | pypi | |
| pytorch | 1.9.0 | py3.7_cuda11.1_cudnn8.0.5_0 | pytorch | |
| pytz | 2021.1 | pypi_0 | pypi | |
| pyyaml | 5.4.1 | pypi_0 | pypi | |
| readline | 8.1 | h27cfd23_0 | | |
| regex | 2022.10.31 | pypi_0 | pypi | |
| requests | 2.26.0 | pypi_0 | pypi | |
| sacred | 0.8.2 | pypi_0 | pypi | |
| sacremoses | 0.0.45 | pypi_0 | pypi | |
| scikit-learn | 0.24.2 | pypi_0 | pypi | |
| scipy | 1.7.0 | pypi_0 | pypi | |
| sentry-sdk | 1.15.0 | pypi_0 | pypi | |
| setproctitle | 1.3.2 | pypi_0 | pypi | |
| setuptools | 52.0.0 | py37h06a4308_0 | | |
| six | 1.16.0 | pyhd3eb1b0_0 | | |
| smmap | 4.0.0 | pypi_0 | pypi | |
| sqlite | 3.36.0 | hc218d9a_0 | | |
| threadpoolctl | 2.2.0 | pypi_0 | pypi | |
| tk | 8.6.10 | hbc83047_0 | | |
| tokenizers | 0.10.3 | pypi_0 | pypi | |
| toml | 0.10.2 | pypi_0 | pypi | |
| torchaudio | 0.9.0 | py37 | pytorch | |
| torchvision | 0.10.0 | py37_cu111 | pytorch | |
| tqdm | 4.61.2 | pypi_0 | pypi | |
| traitlets | 5.0.5 | pypi_0 | pypi | |
| transformers | 4.9.1 | pypi_0 | pypi | |
| typing-extensions | 3.10.0.0 | hd3eb1b0_0 | | |
| typing_extensions | 3.10.0.0 | pyh06a4308_0 | | |
| urllib3 | 1.26.14 | pypi_0 | pypi | |
| wandb | 0.13.10 | pypi_0 | pypi | |
| wcwidth | 0.2.5 | pypi_0 | pypi | |
| wheel | 0.36.2 | pyhd3eb1b0_0 | | |
| wrapt | 1.12.1 | pypi_0 | pypi | |
| xz | 5.2.5 | h7b6447c_0 | | |
| zipp | 3.5.0 | pypi_0 | pypi | |
| zlib | 1.2.11 | h7b6447c_3 | | |
| zstd | 1.4.9 | haebb681_0 | | | | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5832/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5831/comments | https://api.github.com/repos/huggingface/datasets/issues/5831/events | https://github.com/huggingface/datasets/issues/5831 | 1,701,813,835 | I_kwDODunzps5lb55L | 5,831 | [Bug]504 Server Error when loading dataset which was already cached | {
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I am experiencing the same problem with the following environment:\r\n\r\n* `datasets` version: 2.11.0\r\n* Platform: `Linux 5.19.0-41-generic x86_64 GNU/Linux`\r\n* Python version: `3.8.5`\r\n* Huggingface_hub version: 0.13.3\r\n* PyArrow version: `11.0.0`\r\n* Pandas version: `1.5.3`\r\n\r\nTrying to get some diagnostics, I got the following: \r\n\r\n```python\r\n>>> from huggingface_hub import scan_cache_dir\r\n>>> sd = scan_cache_dir()\r\n>>> sd\r\nHFCacheInfo(size_on_disk=0, repos=frozenset(), warnings=[CorruptedCacheException('Repo path is not a directory: /home/myname/.cache/huggingface/hub/version_diffusers_cache.txt')])\r\n\r\n```\r\nHowever, that might also be because I had tried to manually specify the `cache_dir` and that resulted in trying to download the dataset again ... but into a folder one level higher up than it should have.\r\n\r\nNote that my issue is with the `huggan/wikiart` dataset, so it is not a dataset-specific issue.",
"same problem with a private dataset repo, seems the huggingface hub server got some connection problem?",
"Yes, dataset server seems down for now",
"@SingL3 You can avoid this error by setting the [`HF_DATASETS_OFFLINE`](https://huggingface.co/docs/datasets/v2.12.0/en/loading#offline) env variable to 1. By default, if an internet connection is available, we check whether the cache of a cached dataset is up-to-date.\r\n\r\n@lucidBrot `datasets`' cache is still not aligned with `huggigface_hub`'s. We plan to align it eventually.",
"Today we had a big issue affecting the Hugging Face Hub, thus all the `504 Server Error: Gateway Time-out` errors.\r\n\r\nIt is fixed now and loading your datasets should work as expected.",
"Hi, @albertvillanova.\r\nIf there is a locally cached version of datasets or something cache using huggingface_hub, when a network problem(either client or server) occurs, is it a better way to fallback to use the current cached version rather than raise a exception and exit?"
] | 2023-05-09T10:31:07 | 2023-05-10T01:48:20 | null | NONE | null | null | null | ### Describe the bug
I have already cached the dataset using:
```
dataset = load_dataset("databricks/databricks-dolly-15k",
cache_dir="/mnt/data/llm/datasets/databricks-dolly-15k")
```
After that, I tried to load it again on the same machine and got this error:
```
Traceback (most recent call last):
File "/mnt/home/llm/pythia/train.py", line 16, in <module>
dataset = load_dataset("databricks/databricks-dolly-15k",
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1773, in load_dataset
builder_instance = load_dataset_builder(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1502, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1186, in dataset_module_factory
raise e
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1160, in dataset_module_factory
dataset_info = hf_api.dataset_info(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 1667, in dataset_info
hf_raise_for_status(r)
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 301, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/databricks/databricks-dolly-15k
```
### Steps to reproduce the bug
1. Cache the databricks-dolly-15k dataset using `load_dataset`, setting a `cache_dir`.
2. Use `load_dataset` again with the same `cache_dir`.
### Expected behavior
The dataset loads successfully.
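A minimal sketch of the workaround suggested in the comments, assuming the dataset is already fully cached: with `HF_DATASETS_OFFLINE=1`, `datasets` skips the online freshness check, so a Hub outage no longer breaks loading from cache.

```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

dataset = load_dataset(
    "databricks/databricks-dolly-15k",
    cache_dir="/mnt/data/llm/datasets/databricks-dolly-15k",
)
```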
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.18.0-372.16.1.el8_6.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5831/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5831/timeline | null | reopened | false |
https://api.github.com/repos/huggingface/datasets/issues/5830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5830/comments | https://api.github.com/repos/huggingface/datasets/issues/5830/events | https://github.com/huggingface/datasets/pull/5830 | 1,701,451,399 | PR_kwDODunzps5QEFEi | 5,830 | Debug windows #2 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-09T06:40:34 | 2023-05-09T06:40:47 | 2023-05-09T06:40:47 | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5830/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5829/comments | https://api.github.com/repos/huggingface/datasets/issues/5829/events | https://github.com/huggingface/datasets/issues/5829 | 1,699,958,189 | I_kwDODunzps5lU02t | 5,829 | (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) | {
"login": "elcolie",
"id": 18206728,
"node_id": "MDQ6VXNlcjE4MjA2NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18206728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elcolie",
"html_url": "https://github.com/elcolie",
"followers_url": "https://api.github.com/users/elcolie/followers",
"following_url": "https://api.github.com/users/elcolie/following{/other_user}",
"gists_url": "https://api.github.com/users/elcolie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elcolie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elcolie/subscriptions",
"organizations_url": "https://api.github.com/users/elcolie/orgs",
"repos_url": "https://api.github.com/users/elcolie/repos",
"events_url": "https://api.github.com/users/elcolie/events{/privacy}",
"received_events_url": "https://api.github.com/users/elcolie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Can you paste the error stack trace?",
"That is weird. I can't reproduce it again after reboot.\r\n```python\r\nIn [2]: import platform\r\n\r\nIn [3]: platform.platform()\r\nOut[3]: 'macOS-13.2-arm64-arm-64bit'\r\n\r\nIn [4]: from datasets import load_dataset\r\n ...:\r\n ...: jazzy = load_dataset(\"nomic-ai/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\nFound cached dataset parquet (/Users/sarit/.cache/huggingface/datasets/nomic-ai___parquet/nomic-ai--gpt4all-j-prompt-generations-a3b62015e2e52043/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 63.25it/s]\r\n```"
] | 2023-05-08T10:07:14 | 2023-06-30T11:39:14 | 2023-05-09T00:46:42 | NONE | null | null | null | ### Describe the bug
An M2 MacBook Pro can't run:
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Steps to reproduce the bug
1. Use M2 MBP
2. Python 3.10.10 from pyenv
3. Run
```
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Expected behavior
The snippet should run normally.
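A small diagnostic sketch (mine, not from the original report) to check whether the interpreter runs natively on arm64 or under Rosetta (x86_64), which is the usual cause of this mach-o architecture mismatch:

```python
import platform

print(platform.machine())   # expect 'arm64' on Apple Silicon running natively
print(platform.platform())  # an 'x86_64' here means the Python build runs under Rosetta
```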
### Environment info
```
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
OSX: 13.2
CPU: M2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5829/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5828/comments | https://api.github.com/repos/huggingface/datasets/issues/5828/events | https://github.com/huggingface/datasets/issues/5828 | 1,699,235,739 | I_kwDODunzps5lSEeb | 5,828 | Stream data concatenation issue | {
"login": "krishnapriya-18",
"id": 48817796,
"node_id": "MDQ6VXNlcjQ4ODE3Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/48817796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishnapriya-18",
"html_url": "https://github.com/krishnapriya-18",
"followers_url": "https://api.github.com/users/krishnapriya-18/followers",
"following_url": "https://api.github.com/users/krishnapriya-18/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnapriya-18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishnapriya-18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnapriya-18/subscriptions",
"organizations_url": "https://api.github.com/users/krishnapriya-18/orgs",
"repos_url": "https://api.github.com/users/krishnapriya-18/repos",
"events_url": "https://api.github.com/users/krishnapriya-18/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishnapriya-18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can call `map` as follows to avoid the error:\r\n```python\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset, features=dataset_cln['train'].features)\r\n```",
"Thanks it is solved",
"Hi! \r\nI have run into the same problem with you. Could you please let me know how you solve it? Thanks!"
] | 2023-05-07T21:02:54 | 2023-06-29T20:07:56 | 2023-05-10T05:05:47 | NONE | null | null | null | ### Describe the bug
I am not able to interleave the augmented version of a streaming dataset with the original one. I am using the latest version of `datasets`. The error is:
```
ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string', id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null").
```
### Steps to reproduce the bug
dataset = load_dataset("tobiolatunji/afrispeech-200", "all", streaming=True).shuffle(seed=42)
dataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])
dataset_cln = dataset_cln.cast_column("audio", Audio(sampling_rate=16000))
from audiomentations import AddGaussianNoise,Compose,Gain,OneOf,PitchShift,PolarityInversion,TimeStretch
augmentation = Compose([
AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)
])
def augment_dataset(batch):
audio = batch["audio"]
audio["array"] = augmentation(audio["array"], sample_rate=audio["sampling_rate"])
return batch
augmented_dataset_cln = dataset_cln['train'].map(augment_dataset)
dataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])
dataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)
### Expected behavior
I should be able to interleave the two datasets, since the sampling rate is the same.
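A minimal sketch of the fix from the comments: passing the original `features` to `map` keeps the `Audio` feature type intact, so the two streams can be interleaved.

```python
augmented_dataset_cln = dataset_cln["train"].map(
    augment_dataset,
    features=dataset_cln["train"].features,  # preserve the Audio feature type
)
dataset_cln["train"] = interleave_datasets([dataset_cln["train"], augmented_dataset_cln])
```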
### Environment info
```python
import datasets
import transformers
import torch
import evaluate
import accelerate

print(datasets.__version__)
print(transformers.__version__)
print(torch.__version__)
print(evaluate.__version__)
print(accelerate.__version__)
```

Output:

```
2.12.0
4.28.1
2.0.0
0.4.0
0.18.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5828/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5827/comments | https://api.github.com/repos/huggingface/datasets/issues/5827/events | https://github.com/huggingface/datasets/issues/5827 | 1,698,891,246 | I_kwDODunzps5lQwXu | 5,827 | load json dataset interrupt when dtype cast problem occured | {
"login": "1014661165",
"id": 46060451,
"node_id": "MDQ6VXNlcjQ2MDYwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1014661165",
"html_url": "https://github.com/1014661165",
"followers_url": "https://api.github.com/users/1014661165/followers",
"following_url": "https://api.github.com/users/1014661165/following{/other_user}",
"gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1014661165/subscriptions",
"organizations_url": "https://api.github.com/users/1014661165/orgs",
"repos_url": "https://api.github.com/users/1014661165/repos",
"events_url": "https://api.github.com/users/1014661165/events{/privacy}",
"received_events_url": "https://api.github.com/users/1014661165/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Indeed the JSON dataset builder raises an error when it encounters an unexpected type.\r\n\r\nThere's an old PR open to add away to ignore such elements though, if it can help: https://github.com/huggingface/datasets/pull/2838"
] | 2023-05-07T04:52:09 | 2023-05-10T12:32:28 | null | NONE | null | null | null | ### Describe the bug
I have a JSON file like this:
```json
[
    {"id": 1, "name": 1},
    {"id": 2, "name": "Nan"},
    {"id": 3, "name": 3},
    ....
]
```
which has several problematic rows, like row 2. When I load it with `datasets.load_dataset('json', data_files=['xx.json'], split='train')`, it reports:
```
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file 'C:\Users\gawinjunwu\Downloads\test\data\a.json' with error <class 'pyarrow.lib.ArrowInvalid'>: Could not convert '2' with type str: tried to convert to int64
Traceback (most recent call last):
  File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1858, in _prepare_split_single
    for _, table in generator:
  File "D:\Python3.9\lib\site-packages\datasets\packaged_modules\json\json.py", line 146, in _generate_tables
    raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at C:\Users\gawinjunwu\Downloads\test\data\a.json.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\gawinjunwu\Downloads\test\scripts\a.py", line 4, in <module>
    ds = load_dataset('json', data_dir='data', split='train')
  File "D:\Python3.9\lib\site-packages\datasets\load.py", line 1797, in load_dataset
    builder_instance.download_and_prepare(
  File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 890, in download_and_prepare
    self._download_and_prepare(
  File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 985, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1746, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1891, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset.
```
Could `datasets` skip those problematic rows?
### Steps to reproduce the bug
Prepare a JSON file like this:
```json
[
    {"id": 1, "name": 1},
    {"id": 2, "name": "Nan"},
    {"id": 3, "name": 3}
]
```
Then use `datasets.load_dataset('json', data_files=['xxx.json'])` to load the JSON file.
### Expected behavior
Skip the problematic row and load rows 1 and 3.
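Until something like the linked PR lands, a workaround sketch (my own, not part of `datasets`) is to pre-filter mistyped rows before loading:

```python
import json
from datasets import load_dataset

with open("xx.json") as f:
    rows = json.load(f)

# Drop rows whose "name" field is not an int, e.g. {"id": 2, "name": "Nan"}.
clean_rows = [row for row in rows if isinstance(row["name"], int)]

with open("xx_clean.json", "w") as f:
    json.dump(clean_rows, f)

ds = load_dataset("json", data_files=["xx_clean.json"], split="train")
```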
### Environment info
python3.9 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5827/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5826/comments | https://api.github.com/repos/huggingface/datasets/issues/5826/events | https://github.com/huggingface/datasets/pull/5826 | 1,698,155,751 | PR_kwDODunzps5P5FYZ | 5,826 | Support working_dir in from_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added env var",
"@lhoestq would you or another maintainer be able to review please? :)",
"I removed the env var",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005771 / 0.011353 (-0.005582) | 0.004086 / 0.011008 (-0.006922) | 0.097170 / 0.038508 (0.058661) | 0.027464 / 0.023109 (0.004355) | 0.305425 / 0.275898 (0.029527) | 0.343869 / 0.323480 (0.020389) | 0.004899 / 0.007986 (-0.003087) | 0.003294 / 0.004328 (-0.001034) | 0.074710 / 0.004250 (0.070459) | 0.034982 / 0.037052 (-0.002070) | 0.306063 / 0.258489 (0.047574) | 0.343115 / 0.293841 (0.049274) | 0.025155 / 0.128546 (-0.103392) | 0.008429 / 0.075646 (-0.067217) | 0.318680 / 0.419271 (-0.100591) | 0.043304 / 0.043533 (-0.000229) | 0.306703 / 0.255139 (0.051564) | 0.335535 / 0.283200 (0.052335) | 0.087428 / 0.141683 (-0.054255) | 1.483769 / 1.452155 (0.031614) | 1.538753 / 1.492716 (0.046037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203313 / 0.018006 (0.185307) | 0.413864 / 0.000490 (0.413375) | 0.003186 / 0.000200 (0.002986) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022862 / 0.037411 (-0.014550) | 0.097306 / 0.014526 (0.082780) | 0.102823 / 0.176557 (-0.073733) | 0.162803 / 0.737135 (-0.574333) | 0.106311 / 0.296338 (-0.190028) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451710 / 0.215209 (0.236501) | 4.508520 / 2.077655 (2.430865) | 2.181118 / 1.504120 (0.676998) | 1.977607 / 1.541195 (0.436412) | 2.008366 / 1.468490 
(0.539876) | 0.565388 / 4.584777 (-4.019389) | 3.439318 / 3.745712 (-0.306394) | 1.747512 / 5.269862 (-3.522349) | 1.102124 / 4.565676 (-3.463553) | 0.069212 / 0.424275 (-0.355063) | 0.011926 / 0.007607 (0.004318) | 0.553414 / 0.226044 (0.327370) | 5.548959 / 2.268929 (3.280031) | 2.628769 / 55.444624 (-52.815856) | 2.301003 / 6.876477 (-4.575473) | 2.341744 / 2.142072 (0.199672) | 0.673092 / 4.805227 (-4.132135) | 0.137722 / 6.500664 (-6.362942) | 0.066909 / 0.075469 (-0.008560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196854 / 1.841788 (-0.644934) | 13.421776 / 8.074308 (5.347468) | 13.839760 / 10.191392 (3.648368) | 0.140557 / 0.680424 (-0.539867) | 0.016619 / 0.534201 (-0.517582) | 0.357985 / 0.579283 (-0.221298) | 0.387018 / 0.434364 (-0.047346) | 0.452798 / 0.540337 (-0.087540) | 0.542085 / 1.386936 (-0.844851) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005868 / 0.011353 (-0.005484) | 0.004103 / 0.011008 (-0.006905) | 0.076126 / 0.038508 (0.037618) | 0.027744 / 0.023109 (0.004635) | 0.357257 / 0.275898 (0.081359) | 0.387981 / 0.323480 (0.064501) | 0.004807 / 0.007986 (-0.003178) | 0.003337 / 0.004328 (-0.000991) | 0.075486 / 0.004250 (0.071236) | 0.035121 / 0.037052 (-0.001931) | 0.361385 / 0.258489 (0.102896) | 0.399346 / 0.293841 (0.105505) | 0.025263 / 0.128546 (-0.103284) | 0.008571 / 0.075646 (-0.067075) | 0.081815 / 0.419271 (-0.337457) | 0.041114 / 0.043533 (-0.002418) | 0.362840 / 0.255139 (0.107701) | 0.380926 / 0.283200 (0.097727) | 0.092728 / 0.141683 (-0.048955) | 1.517647 / 1.452155 (0.065492) | 1.534914 / 1.492716 (0.042198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199669 / 0.018006 (0.181663) | 0.399070 / 0.000490 (0.398580) | 0.002014 / 0.000200 (0.001814) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024541 / 0.037411 (-0.012870) | 0.099676 / 0.014526 (0.085151) | 0.106503 / 0.176557 (-0.070054) | 0.153755 / 0.737135 (-0.583380) | 0.108564 / 0.296338 (-0.187775) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443842 / 0.215209 (0.228633) | 4.441158 / 2.077655 (2.363503) | 2.159496 / 1.504120 (0.655376) | 1.955358 / 1.541195 (0.414163) | 1.973864 / 1.468490 (0.505374) | 0.550467 / 4.584777 (-4.034310) | 3.381831 / 3.745712 (-0.363881) | 2.561192 / 5.269862 (-2.708670) | 1.361684 / 4.565676 (-3.203992) | 0.068140 / 0.424275 (-0.356135) | 0.012005 / 0.007607 (0.004398) | 0.551921 / 0.226044 (0.325877) | 5.503591 / 2.268929 (3.234662) | 2.591609 / 55.444624 (-52.853015) | 2.246681 / 6.876477 (-4.629796) | 2.290941 / 2.142072 (0.148868) | 0.655212 / 4.805227 (-4.150015) | 0.136013 / 6.500664 (-6.364651) | 0.066995 / 0.075469 (-0.008474) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300438 / 1.841788 (-0.541350) | 13.866224 / 8.074308 (5.791916) | 13.932624 / 10.191392 (3.741232) | 0.144345 / 0.680424 (-0.536079) | 0.016623 / 0.534201 (-0.517578) | 0.357629 / 0.579283 (-0.221654) | 0.389759 / 0.434364 (-0.044605) | 0.417704 / 0.540337 (-0.122633) | 0.501358 / 1.386936 (-0.885578) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#89f775226321ba94e5bf4670a323c0fb44f5f65c \"CML watermark\")\n",
"Thank you!"
] | 2023-05-05T20:22:40 | 2023-05-25T17:45:54 | 2023-05-25T08:46:15 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5826",
"html_url": "https://github.com/huggingface/datasets/pull/5826",
"diff_url": "https://github.com/huggingface/datasets/pull/5826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5826.patch",
"merged_at": "2023-05-25T08:46:15"
} | Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize to will improve write performance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5826/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5825/comments | https://api.github.com/repos/huggingface/datasets/issues/5825/events | https://github.com/huggingface/datasets/issues/5825 | 1,697,327,483 | I_kwDODunzps5lKyl7 | 5,825 | FileNotFound even though exists | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! \r\n\r\nThis would only work if `bigscience/xP3` was a no-code dataset, but it isn't (it has a Python builder script).\r\n\r\nBut this should work: \r\n```python\r\nload_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl\")\r\n```\r\n\r\n",
"I see, it's not compatible w/ regex right?\r\ne.g.\r\n`load_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/*\")`",
"> I see, it's not compatible w/ regex right? e.g. `load_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/*\")`\r\n\r\nIt should work for patterns that \"reference\" the local filesystem, but to make this work with the Hub, we must implement https://github.com/huggingface/datasets/issues/5281 first.\r\n\r\nIn the meantime, you can fetch these glob files with `HfFileSystem` and pass them as a list to `load_dataset`:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom huggingface_hub import HfFileSystem, hf_hub_url # `HfFileSystem` requires the latest version of `huggingface_hub`\r\n\r\nfs = HfFileSystem()\r\nglob_files = fs.glob(\"datasets/bigscience/xP3/ur/*\")\r\n# convert fsspec URLs to HTTP URLs\r\nresolved_paths = [fs.resolve_path(file) for file in glob_files]\r\ndata_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]\r\n\r\nds = load_dataset(\"json\", data_files=data_files)\r\n```"
] | 2023-05-05T09:49:55 | 2023-05-07T17:43:46 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl, which works fine in my web browser but somehow not with `datasets`. Am I doing something wrong?
```
Downloading builder script: 100%
2.82k/2.82k [00:00<00:00, 64.2kB/s]
Downloading readme: 100%
12.6k/12.6k [00:00<00:00, 585kB/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-2-4b45446a91d5>](https://localhost:8080/#) in <cell line: 4>()
2 lang = "ur"
3 fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
----> 4 dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
6 frames
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions)
291 if allowed_extensions is not None:
292 error_msg += f" with any supported extension {list(allowed_extensions)}"
--> 293 raise FileNotFoundError(error_msg)
294 return sorted(out)
295
FileNotFoundError: Unable to find 'https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl' at /content/https:/huggingface.co/datasets/bigscience/xP3/resolve/main
```
### Steps to reproduce the bug
```
!pip install -q datasets
from datasets import load_dataset
lang = "ur"
fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
```
### Expected behavior
The file should download correctly.
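For completeness, the working form from the comments: since `bigscience/xP3` has a Python builder script, the raw JSONL has to be loaded with the plain `json` builder instead.

```python
from datasets import load_dataset

url = "https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
ds = load_dataset("json", data_files=url)
```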
### Environment info
latest versions | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5825/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5824/comments | https://api.github.com/repos/huggingface/datasets/issues/5824/events | https://github.com/huggingface/datasets/pull/5824 | 1,697,152,148 | PR_kwDODunzps5P1rIZ | 5,824 | Fix incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003695) | 0.005497 / 0.011008 (-0.005511) | 0.097142 / 0.038508 (0.058633) | 0.034602 / 0.023109 (0.011493) | 0.304191 / 0.275898 (0.028293) | 0.329103 / 0.323480 (0.005624) | 0.005936 / 0.007986 (-0.002049) | 0.004324 / 0.004328 (-0.000004) | 0.073387 / 0.004250 (0.069137) | 0.049657 / 0.037052 (0.012604) | 0.301352 / 0.258489 (0.042863) | 0.343095 / 0.293841 (0.049254) | 0.036767 / 0.128546 (-0.091779) | 0.012438 / 0.075646 (-0.063208) | 0.333804 / 0.419271 (-0.085468) | 0.064557 / 0.043533 (0.021024) | 0.302397 / 0.255139 (0.047258) | 0.319739 / 0.283200 (0.036540) | 0.119264 / 0.141683 (-0.022418) | 1.465309 / 1.452155 (0.013155) | 1.578194 / 1.492716 (0.085478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256552 / 0.018006 (0.238545) | 0.555344 / 0.000490 (0.554854) | 0.004845 / 0.000200 (0.004645) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027215 / 0.037411 (-0.010197) | 0.107071 / 0.014526 (0.092545) | 0.116343 / 0.176557 (-0.060213) | 0.172646 / 0.737135 (-0.564490) | 0.123366 / 0.296338 (-0.172973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411421 / 0.215209 (0.196212) | 4.126028 / 2.077655 (2.048373) | 1.975826 / 1.504120 (0.471706) | 1.784404 / 1.541195 (0.243210) | 1.848697 / 1.468490 
(0.380207) | 0.686400 / 4.584777 (-3.898377) | 3.677649 / 3.745712 (-0.068063) | 2.077787 / 5.269862 (-3.192075) | 1.310912 / 4.565676 (-3.254764) | 0.083980 / 0.424275 (-0.340295) | 0.012183 / 0.007607 (0.004575) | 0.506969 / 0.226044 (0.280924) | 5.094730 / 2.268929 (2.825802) | 2.419790 / 55.444624 (-53.024834) | 2.106592 / 6.876477 (-4.769884) | 2.244309 / 2.142072 (0.102237) | 0.814312 / 4.805227 (-3.990915) | 0.167872 / 6.500664 (-6.332792) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193314 / 1.841788 (-0.648474) | 14.980621 / 8.074308 (6.906313) | 14.352452 / 10.191392 (4.161060) | 0.164531 / 0.680424 (-0.515893) | 0.017432 / 0.534201 (-0.516769) | 0.422193 / 0.579283 (-0.157090) | 0.410047 / 0.434364 (-0.024317) | 0.497011 / 0.540337 (-0.043326) | 0.581395 / 1.386936 (-0.805541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.005449 / 0.011008 (-0.005559) | 0.074320 / 0.038508 (0.035812) | 0.034261 / 0.023109 (0.011152) | 0.378265 / 0.275898 (0.102367) | 0.414419 / 0.323480 (0.090939) | 0.005804 / 0.007986 (-0.002182) | 0.004205 / 0.004328 (-0.000124) | 0.073266 / 0.004250 (0.069015) | 0.050444 / 0.037052 (0.013392) | 0.372999 / 0.258489 (0.114510) | 0.436032 / 0.293841 (0.142191) | 0.035432 / 0.128546 (-0.093114) | 0.012581 / 0.075646 (-0.063065) | 0.085777 / 0.419271 (-0.333495) | 0.046902 / 0.043533 (0.003369) | 0.378732 / 0.255139 (0.123593) | 0.401746 / 0.283200 (0.118547) | 0.113398 / 0.141683 (-0.028285) | 1.463851 / 1.452155 (0.011696) | 1.566387 / 1.492716 (0.073670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261246 / 0.018006 (0.243240) | 0.546730 / 0.000490 (0.546241) | 0.005245 / 0.000200 (0.005045) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029441 / 0.037411 (-0.007970) | 0.111834 / 0.014526 (0.097308) | 0.122411 / 0.176557 (-0.054145) | 0.171288 / 0.737135 (-0.565847) | 0.130338 / 0.296338 (-0.166001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433405 / 0.215209 (0.218196) | 4.315790 / 2.077655 (2.238135) | 2.121934 / 1.504120 (0.617814) | 1.924123 / 1.541195 (0.382928) | 2.029077 / 1.468490 (0.560587) | 0.710245 / 4.584777 (-3.874532) | 3.844393 / 3.745712 (0.098681) | 3.576580 / 5.269862 (-1.693281) | 1.930985 / 4.565676 (-2.634691) | 0.092186 / 0.424275 (-0.332090) | 0.012307 / 0.007607 (0.004700) | 0.533722 / 0.226044 (0.307677) | 5.324447 / 2.268929 (3.055519) | 2.615451 / 55.444624 (-52.829174) | 2.282310 / 6.876477 (-4.594167) | 2.319847 / 2.142072 (0.177774) | 0.849364 / 4.805227 (-3.955864) | 0.172722 / 6.500664 (-6.327942) | 0.064721 / 0.075469 (-0.010748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289942 / 1.841788 (-0.551846) | 15.875062 / 8.074308 (7.800754) | 14.784682 / 10.191392 (4.593290) | 0.144432 / 0.680424 (-0.535991) | 0.017703 / 0.534201 (-0.516498) | 0.424357 / 0.579283 (-0.154926) | 0.419078 / 0.434364 (-0.015286) | 0.489331 / 0.540337 (-0.051006) | 0.585284 / 1.386936 (-0.801652) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3f4f124a1b118a5bfff5bae76b25a68aedbebbc \"CML watermark\")\n"
] | 2023-05-05T07:34:28 | 2023-05-05T12:39:14 | 2023-05-05T12:31:54 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5824",
"html_url": "https://github.com/huggingface/datasets/pull/5824",
"diff_url": "https://github.com/huggingface/datasets/pull/5824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5824.patch",
"merged_at": "2023-05-05T12:31:54"
} | Fixes #5820
Also fixed a couple of typos I spotted | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5824/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5823/comments | https://api.github.com/repos/huggingface/datasets/issues/5823/events | https://github.com/huggingface/datasets/issues/5823 | 1,697,024,789 | I_kwDODunzps5lJosV | 5,823 | [2.12.0] DatasetDict.save_to_disk not saving to S3 | {
"login": "thejamesmarq",
"id": 5233185,
"node_id": "MDQ6VXNlcjUyMzMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thejamesmarq",
"html_url": "https://github.com/thejamesmarq",
"followers_url": "https://api.github.com/users/thejamesmarq/followers",
"following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}",
"gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions",
"organizations_url": "https://api.github.com/users/thejamesmarq/orgs",
"repos_url": "https://api.github.com/users/thejamesmarq/repos",
"events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}",
"received_events_url": "https://api.github.com/users/thejamesmarq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```",
"Ugh, yeah that was it. Thank you!"
] | 2023-05-05T05:22:59 | 2023-05-05T15:01:18 | 2023-05-05T15:01:17 | NONE | null | null | null | ### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are saved locally instead of to the S3 bucket.
I have tried using the deprecated `fs` argument as well as the `storage_options` argument, and I get the same result.
### Steps to reproduce the bug
1. Create a `DatasetDict` named `dataset_dict`
2. Create an `S3FileSystem` object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)`
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the local path f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that the files have been saved there instead (a corrected call is sketched below)
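Based on the fix suggested in the issue comments, the root cause is the missing `s3://` scheme in the target path; without it, the path is resolved against the local filesystem. A minimal sketch of the corrected call, reusing the variable names from the steps above:
```python
# Minimal sketch based on the fix from the issue comments: the target path must
# carry the explicit "s3://" scheme, otherwise it is treated as a local path.
# aws_access_key_id, aws_secret_access_key, s3_bucket, s3_dir, dataset_name and
# dataset_dict are assumed to be defined as in the steps above.
import datasets

s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
dataset_dict.save_to_disk(
    f"s3://{s3_bucket}/{s3_dir}/{dataset_name}",  # note the "s3://" prefix
    storage_options=s3.storage_options,
)
```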
### Expected behavior
Artifacts are uploaded to the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5823/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5822/comments | https://api.github.com/repos/huggingface/datasets/issues/5822/events | https://github.com/huggingface/datasets/issues/5822 | 1,696,627,308 | I_kwDODunzps5lIHps | 5,822 | Audio Dataset with_format torch problem | {
"login": "paulbauriegel",
"id": 20282916,
"node_id": "MDQ6VXNlcjIwMjgyOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/20282916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulbauriegel",
"html_url": "https://github.com/paulbauriegel",
"followers_url": "https://api.github.com/users/paulbauriegel/followers",
"following_url": "https://api.github.com/users/paulbauriegel/following{/other_user}",
"gists_url": "https://api.github.com/users/paulbauriegel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulbauriegel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulbauriegel/subscriptions",
"organizations_url": "https://api.github.com/users/paulbauriegel/orgs",
"repos_url": "https://api.github.com/users/paulbauriegel/repos",
"events_url": "https://api.github.com/users/paulbauriegel/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulbauriegel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try with a more recent version of `datasets` ?",
"Ok, yes it worked with the most recent version. Thanks"
] | 2023-05-04T20:07:51 | 2023-05-11T20:45:53 | 2023-05-11T20:45:53 | NONE | null | null | null | ### Describe the bug
Using the Common Voice v10 Delta (German) dataset from https://commonvoice.mozilla.org/de/datasets, the following code
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('numpy'))
audio_dataset[0]["audio"]
```
works, but
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('torch'))
audio_dataset[0]["audio"]
```
does not; instead I get:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[54], line 1
----> 1 audio_dataset[0]["audio"]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:58, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
57 row = self.numpy_arrow_extractor().extract_row(pa_table)
---> 58 return self.recursive_tensorize(row)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:54, in TorchFormatter.recursive_tensorize(self, data_struct)
53 def recursive_tensorize(self, data_struct: dict):
---> 54 return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:356, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
--> 356 mapped = [
357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:357, in <listcomp>(.0)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
356 mapped = [
--> 357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in _single_map_nested(args)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in <dictcomp>(.0)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:293, in _single_map_nested(args)
291 # Singleton first to spare some computation
292 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 293 return function(data_struct)
295 # Reduce logging to keep things readable in multiprocessing with tqdm
296 if rank is not None and logging.get_verbosity() < logging.WARNING:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:51, in TorchFormatter._recursive_tensorize(self, data_struct)
49 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
50 return [self.recursive_tensorize(substruct) for substruct in data_struct]
---> 51 return self._tensorize(data_struct)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:38, in TorchFormatter._tensorize(self, value)
35 import torch
37 default_dtype = {}
---> 38 if np.issubdtype(value.dtype, np.integer):
39 default_dtype = {"dtype": torch.int64}
40 elif np.issubdtype(value.dtype, np.floating):
AttributeError: 'NoneType' object has no attribute 'dtype'
```
### Steps to reproduce the bug
1. Download an audio dataset; in this case I used the Common Voice v10 Delta (German) dataset from https://commonvoice.mozilla.org/de/datasets
2. Run the code from above
### Expected behavior
The `torch` format should return the decoded audio just like the `numpy` format does.
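The comments indicate that upgrading `datasets` resolves this. As a stopgap on 2.3.2, a hedged workaround sketch (my own assumption, not taken from the thread): keep the working `numpy` format and convert the decoded waveform to a torch tensor manually.
```python
# Workaround sketch (assumption, not from the thread): the 'numpy' format
# decodes correctly on datasets 2.3.2, so keep it and convert to torch manually.
import torch

audio_dataset = audio_dataset.with_format("numpy")
sample = audio_dataset[0]["audio"]            # dict with "array" and "sampling_rate"
waveform = torch.from_numpy(sample["array"])  # 1-D float tensor of the 16 kHz audio
```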
### Environment info
pytorch: 2.0.0
datasets: 2.3.2
numpy: 1.21.6
Python: 3.8
Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5822/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5821/comments | https://api.github.com/repos/huggingface/datasets/issues/5821/events | https://github.com/huggingface/datasets/pull/5821 | 1,696,400,343 | PR_kwDODunzps5PzHLU | 5,821 | IterableDataset Arrow formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007593 / 0.011353 (-0.003760) | 0.005554 / 0.011008 (-0.005454) | 0.097663 / 0.038508 (0.059155) | 0.034915 / 0.023109 (0.011806) | 0.303116 / 0.275898 (0.027218) | 0.342376 / 0.323480 (0.018897) | 0.006044 / 0.007986 (-0.001942) | 0.004239 / 0.004328 (-0.000090) | 0.074561 / 0.004250 (0.070310) | 0.049109 / 0.037052 (0.012057) | 0.311302 / 0.258489 (0.052813) | 0.360717 / 0.293841 (0.066876) | 0.035119 / 0.128546 (-0.093428) | 0.012465 / 0.075646 (-0.063181) | 0.333648 / 0.419271 (-0.085624) | 0.051294 / 0.043533 (0.007762) | 0.297298 / 0.255139 (0.042159) | 0.321957 / 0.283200 (0.038757) | 0.108206 / 0.141683 (-0.033477) | 1.425023 / 1.452155 (-0.027132) | 1.526395 / 1.492716 (0.033678) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300694 / 0.018006 (0.282688) | 0.515141 / 0.000490 (0.514651) | 0.003965 / 0.000200 (0.003765) | 0.000260 / 0.000054 (0.000206) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029428 / 0.037411 (-0.007983) | 0.107634 / 0.014526 (0.093108) | 0.123662 / 0.176557 (-0.052895) | 0.182886 / 0.737135 (-0.554249) | 0.128361 / 0.296338 (-0.167977) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398809 / 0.215209 (0.183600) | 3.984428 / 2.077655 (1.906773) | 1.795337 / 1.504120 (0.291217) | 1.609235 / 1.541195 (0.068040) | 1.724825 / 1.468490 
(0.256335) | 0.698413 / 4.584777 (-3.886364) | 3.857479 / 3.745712 (0.111767) | 2.135203 / 5.269862 (-3.134659) | 1.348458 / 4.565676 (-3.217218) | 0.086445 / 0.424275 (-0.337830) | 0.012717 / 0.007607 (0.005110) | 0.498713 / 0.226044 (0.272668) | 4.988685 / 2.268929 (2.719757) | 2.284764 / 55.444624 (-53.159860) | 1.961162 / 6.876477 (-4.915315) | 2.147514 / 2.142072 (0.005441) | 0.850334 / 4.805227 (-3.954894) | 0.171664 / 6.500664 (-6.329000) | 0.065526 / 0.075469 (-0.009943) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204398 / 1.841788 (-0.637390) | 15.625790 / 8.074308 (7.551482) | 14.614980 / 10.191392 (4.423588) | 0.167135 / 0.680424 (-0.513289) | 0.017631 / 0.534201 (-0.516570) | 0.427337 / 0.579283 (-0.151946) | 0.439203 / 0.434364 (0.004839) | 0.499670 / 0.540337 (-0.040668) | 0.587577 / 1.386936 (-0.799359) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007866 / 0.011353 (-0.003486) | 0.005798 / 0.011008 (-0.005210) | 0.075803 / 0.038508 (0.037295) | 0.035773 / 0.023109 (0.012664) | 0.361965 / 0.275898 (0.086067) | 0.402780 / 0.323480 (0.079300) | 0.006521 / 0.007986 (-0.001465) | 0.004613 / 0.004328 (0.000284) | 0.075196 / 0.004250 (0.070946) | 0.055324 / 0.037052 (0.018272) | 0.363468 / 0.258489 (0.104979) | 0.410344 / 0.293841 (0.116503) | 0.036324 / 0.128546 (-0.092222) | 0.012891 / 0.075646 (-0.062755) | 0.086991 / 0.419271 (-0.332280) | 0.048082 / 0.043533 (0.004549) | 0.357238 / 0.255139 (0.102099) | 0.377065 / 0.283200 (0.093865) | 0.118586 / 0.141683 (-0.023097) | 1.463161 / 1.452155 (0.011007) | 1.582686 / 1.492716 (0.089969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267916 / 0.018006 (0.249909) | 0.540862 / 0.000490 (0.540373) | 0.003148 / 0.000200 (0.002948) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032290 / 0.037411 (-0.005122) | 0.115468 / 0.014526 (0.100943) | 0.125743 / 0.176557 (-0.050814) | 0.177469 / 0.737135 (-0.559667) | 0.133579 / 0.296338 (-0.162759) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446727 / 0.215209 (0.231518) | 4.467938 / 2.077655 (2.390284) | 2.330171 / 1.504120 (0.826052) | 2.165624 / 1.541195 (0.624429) | 2.298063 / 1.468490 (0.829573) | 0.702241 / 4.584777 (-3.882536) | 3.845302 / 3.745712 (0.099590) | 2.169278 / 5.269862 (-3.100584) | 1.401392 / 4.565676 (-3.164285) | 0.086672 / 0.424275 (-0.337603) | 0.012355 / 0.007607 (0.004748) | 0.543639 / 0.226044 (0.317595) | 5.425876 / 2.268929 (3.156947) | 2.781794 / 55.444624 (-52.662831) | 2.503724 / 6.876477 (-4.372752) | 2.622580 / 2.142072 (0.480507) | 0.847143 / 4.805227 (-3.958084) | 0.171721 / 6.500664 (-6.328943) | 0.067894 / 0.075469 (-0.007575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292194 / 1.841788 (-0.549594) | 15.497311 / 8.074308 (7.423003) | 15.002463 / 10.191392 (4.811071) | 0.152244 / 0.680424 (-0.528180) | 0.018085 / 0.534201 (-0.516116) | 0.445787 / 0.579283 (-0.133496) | 0.448960 / 0.434364 (0.014596) | 0.515319 / 0.540337 (-0.025019) | 0.623840 / 1.386936 (-0.763096) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8417a41547ce0c939bd342398be621f5ce3e340 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006938 / 0.011353 (-0.004415) | 0.005100 / 0.011008 (-0.005909) | 0.096525 / 0.038508 (0.058017) | 0.033764 / 0.023109 (0.010655) | 0.301107 / 0.275898 (0.025209) | 0.333140 / 0.323480 (0.009660) | 0.005719 / 0.007986 (-0.002266) | 0.005192 / 0.004328 (0.000864) | 0.073685 / 0.004250 (0.069434) | 0.048149 / 0.037052 (0.011096) | 0.299244 / 0.258489 (0.040754) | 0.347518 / 0.293841 (0.053677) | 0.034810 / 0.128546 (-0.093736) | 0.012284 / 0.075646 (-0.063363) | 0.333600 / 0.419271 (-0.085672) | 0.050750 / 0.043533 (0.007217) | 0.299782 / 0.255139 (0.044643) | 0.322712 / 0.283200 (0.039512) | 0.105659 / 0.141683 (-0.036024) | 1.457536 / 1.452155 (0.005381) | 1.571604 / 1.492716 (0.078887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207190 / 0.018006 (0.189184) | 0.439230 / 0.000490 (0.438740) | 0.006403 / 0.000200 (0.006203) | 0.000282 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027424 / 0.037411 (-0.009987) | 0.107180 / 0.014526 (0.092655) | 0.118356 / 0.176557 (-0.058201) | 0.175557 / 0.737135 (-0.561579) | 0.125671 / 0.296338 (-0.170668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411249 / 0.215209 (0.196039) | 4.094494 / 2.077655 (2.016839) | 1.946843 / 1.504120 (0.442723) | 1.766503 / 1.541195 (0.225308) | 1.831406 / 1.468490 
(0.362916) | 0.704637 / 4.584777 (-3.880140) | 3.819204 / 3.745712 (0.073492) | 3.412598 / 5.269862 (-1.857263) | 1.796385 / 4.565676 (-2.769291) | 0.084591 / 0.424275 (-0.339684) | 0.012568 / 0.007607 (0.004961) | 0.506372 / 0.226044 (0.280327) | 5.049461 / 2.268929 (2.780532) | 2.409860 / 55.444624 (-53.034765) | 2.064514 / 6.876477 (-4.811963) | 2.192808 / 2.142072 (0.050735) | 0.833773 / 4.805227 (-3.971455) | 0.167948 / 6.500664 (-6.332716) | 0.064617 / 0.075469 (-0.010852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.174739 / 1.841788 (-0.667048) | 14.605634 / 8.074308 (6.531326) | 14.321043 / 10.191392 (4.129651) | 0.145892 / 0.680424 (-0.534532) | 0.017413 / 0.534201 (-0.516788) | 0.444940 / 0.579283 (-0.134343) | 0.430792 / 0.434364 (-0.003572) | 0.539699 / 0.540337 (-0.000638) | 0.640279 / 1.386936 (-0.746657) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.005313 / 0.011008 (-0.005695) | 0.073630 / 0.038508 (0.035122) | 0.033459 / 0.023109 (0.010350) | 0.356959 / 0.275898 (0.081061) | 0.385918 / 0.323480 (0.062438) | 0.005714 / 0.007986 (-0.002272) | 0.004074 / 0.004328 (-0.000254) | 0.073278 / 0.004250 (0.069028) | 0.047193 / 0.037052 (0.010140) | 0.360300 / 0.258489 (0.101811) | 0.398052 / 0.293841 (0.104212) | 0.035670 / 0.128546 (-0.092876) | 0.012499 / 0.075646 (-0.063147) | 0.086677 / 0.419271 (-0.332595) | 0.046534 / 0.043533 (0.003001) | 0.370029 / 0.255139 (0.114890) | 0.376040 / 0.283200 (0.092841) | 0.105184 / 0.141683 (-0.036499) | 1.419779 / 1.452155 (-0.032375) | 1.538925 / 1.492716 (0.046209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220465 / 0.018006 (0.202459) | 0.438836 / 0.000490 (0.438346) | 0.000428 / 0.000200 (0.000228) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029114 / 0.037411 (-0.008298) | 0.111871 / 0.014526 (0.097345) | 0.124367 / 0.176557 (-0.052189) | 0.173737 / 0.737135 (-0.563398) | 0.128435 / 0.296338 (-0.167904) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440706 / 0.215209 (0.225497) | 4.414826 / 2.077655 (2.337171) | 2.128899 / 1.504120 (0.624780) | 1.929551 / 1.541195 (0.388357) | 2.013130 / 1.468490 (0.544640) | 0.708566 / 4.584777 (-3.876211) | 3.846459 / 3.745712 (0.100747) | 2.158829 / 5.269862 (-3.111032) | 1.339454 / 4.565676 (-3.226223) | 0.086345 / 0.424275 (-0.337930) | 0.012085 / 0.007607 (0.004478) | 0.546360 / 0.226044 (0.320316) | 5.461612 / 2.268929 (3.192683) | 2.657388 / 55.444624 (-52.787237) | 2.298403 / 6.876477 (-4.578074) | 2.344572 / 2.142072 (0.202499) | 0.844276 / 4.805227 (-3.960951) | 0.170225 / 6.500664 (-6.330439) | 0.064684 / 0.075469 (-0.010785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265114 / 1.841788 (-0.576674) | 15.058156 / 8.074308 (6.983848) | 14.485182 / 10.191392 (4.293790) | 0.165960 / 0.680424 (-0.514464) | 0.017481 / 0.534201 (-0.516719) | 0.425141 / 0.579283 (-0.154142) | 0.434883 / 0.434364 (0.000519) | 0.506701 / 0.540337 (-0.033637) | 0.613240 / 1.386936 (-0.773697) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f019dffffb214b44b30dd9ac56fdea12259e148 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007651 / 0.011353 (-0.003702) | 0.005503 / 0.011008 (-0.005505) | 0.098751 / 0.038508 (0.060243) | 0.036822 / 0.023109 (0.013713) | 0.340754 / 0.275898 (0.064856) | 0.387247 / 0.323480 (0.063767) | 0.006513 / 0.007986 (-0.001473) | 0.006135 / 0.004328 (0.001807) | 0.073656 / 0.004250 (0.069406) | 0.055508 / 0.037052 (0.018456) | 0.352493 / 0.258489 (0.094004) | 0.408003 / 0.293841 (0.114162) | 0.036346 / 0.128546 (-0.092201) | 0.012562 / 0.075646 (-0.063085) | 0.335111 / 0.419271 (-0.084160) | 0.051928 / 0.043533 (0.008395) | 0.339405 / 0.255139 (0.084266) | 0.366840 / 0.283200 (0.083640) | 0.114353 / 0.141683 (-0.027330) | 1.449062 / 1.452155 (-0.003092) | 1.567310 / 1.492716 (0.074594) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262975 / 0.018006 (0.244968) | 0.570302 / 0.000490 (0.569813) | 0.003419 / 0.000200 (0.003219) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027363 / 0.037411 (-0.010049) | 0.109033 / 0.014526 (0.094507) | 0.119048 / 0.176557 (-0.057509) | 0.175891 / 0.737135 (-0.561244) | 0.124577 / 0.296338 (-0.171762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397988 / 0.215209 (0.182779) | 3.993210 / 2.077655 (1.915555) | 1.809275 / 1.504120 (0.305155) | 1.614664 / 1.541195 (0.073469) | 1.723650 / 1.468490 
(0.255159) | 0.698484 / 4.584777 (-3.886293) | 3.914135 / 3.745712 (0.168423) | 2.142622 / 5.269862 (-3.127239) | 1.360215 / 4.565676 (-3.205461) | 0.086340 / 0.424275 (-0.337935) | 0.012836 / 0.007607 (0.005229) | 0.500728 / 0.226044 (0.274684) | 5.006744 / 2.268929 (2.737815) | 2.350668 / 55.444624 (-53.093956) | 1.979816 / 6.876477 (-4.896660) | 2.190159 / 2.142072 (0.048087) | 0.854063 / 4.805227 (-3.951164) | 0.170203 / 6.500664 (-6.330461) | 0.066903 / 0.075469 (-0.008566) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184012 / 1.841788 (-0.657775) | 15.407350 / 8.074308 (7.333042) | 14.758180 / 10.191392 (4.566788) | 0.169280 / 0.680424 (-0.511144) | 0.017419 / 0.534201 (-0.516781) | 0.434359 / 0.579283 (-0.144925) | 0.442515 / 0.434364 (0.008151) | 0.503132 / 0.540337 (-0.037205) | 0.602589 / 1.386936 (-0.784347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008022 / 0.011353 (-0.003331) | 0.005473 / 0.011008 (-0.005535) | 0.076106 / 0.038508 (0.037598) | 0.037065 / 0.023109 (0.013956) | 0.380039 / 0.275898 (0.104141) | 0.394205 / 0.323480 (0.070725) | 0.006447 / 0.007986 (-0.001539) | 0.006011 / 0.004328 (0.001682) | 0.075236 / 0.004250 (0.070985) | 0.054425 / 0.037052 (0.017372) | 0.381707 / 0.258489 (0.123218) | 0.411237 / 0.293841 (0.117396) | 0.037222 / 0.128546 (-0.091324) | 0.012627 / 0.075646 (-0.063020) | 0.086733 / 0.419271 (-0.332538) | 0.053857 / 0.043533 (0.010324) | 0.373374 / 0.255139 (0.118235) | 0.381680 / 0.283200 (0.098480) | 0.121962 / 0.141683 (-0.019721) | 1.430804 / 1.452155 (-0.021351) | 1.562517 / 1.492716 (0.069801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262034 / 0.018006 (0.244028) | 0.563497 / 0.000490 (0.563007) | 0.002726 / 0.000200 (0.002526) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031071 / 0.037411 (-0.006341) | 0.111983 / 0.014526 (0.097457) | 0.126634 / 0.176557 (-0.049923) | 0.177511 / 0.737135 (-0.559625) | 0.132599 / 0.296338 (-0.163739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436148 / 0.215209 (0.220939) | 4.344850 / 2.077655 (2.267195) | 2.105877 / 1.504120 (0.601757) | 1.920934 / 1.541195 (0.379739) | 2.072930 / 1.468490 (0.604440) | 0.701793 / 4.584777 (-3.882984) | 3.841621 / 3.745712 (0.095909) | 3.602550 / 5.269862 (-1.667311) | 1.775999 / 4.565676 (-2.789677) | 0.086024 / 0.424275 (-0.338251) | 0.012275 / 0.007607 (0.004668) | 0.532815 / 0.226044 (0.306770) | 5.336273 / 2.268929 (3.067344) | 2.638842 / 55.444624 (-52.805782) | 2.301842 / 6.876477 (-4.574635) | 2.407448 / 2.142072 (0.265376) | 0.855836 / 4.805227 (-3.949392) | 0.170348 / 6.500664 (-6.330317) | 0.066926 / 0.075469 (-0.008543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291515 / 1.841788 (-0.550272) | 15.869825 / 8.074308 (7.795517) | 15.068227 / 10.191392 (4.876835) | 0.156953 / 0.680424 (-0.523471) | 0.017761 / 0.534201 (-0.516440) | 0.429515 / 0.579283 (-0.149768) | 0.432758 / 0.434364 (-0.001605) | 0.500080 / 0.540337 (-0.040258) | 0.601451 / 1.386936 (-0.785485) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#00b148b09da2074fcaba0538a23c7f46d28d387c \"CML watermark\")\n",
"Will need to take https://github.com/huggingface/datasets/pull/5810 into account if it gets merged before this one",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006914 / 0.011353 (-0.004439) | 0.004727 / 0.011008 (-0.006281) | 0.098880 / 0.038508 (0.060372) | 0.036663 / 0.023109 (0.013554) | 0.317575 / 0.275898 (0.041677) | 0.360301 / 0.323480 (0.036821) | 0.006084 / 0.007986 (-0.001901) | 0.004118 / 0.004328 (-0.000210) | 0.074330 / 0.004250 (0.070079) | 0.042422 / 0.037052 (0.005369) | 0.335625 / 0.258489 (0.077136) | 0.366616 / 0.293841 (0.072775) | 0.028523 / 0.128546 (-0.100023) | 0.008883 / 0.075646 (-0.066763) | 0.332475 / 0.419271 (-0.086797) | 0.051746 / 0.043533 (0.008214) | 0.324952 / 0.255139 (0.069813) | 0.339660 / 0.283200 (0.056460) | 0.103714 / 0.141683 (-0.037969) | 1.472130 / 1.452155 (0.019976) | 1.516548 / 1.492716 (0.023831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229538 / 0.018006 (0.211532) | 0.449077 / 0.000490 (0.448588) | 0.003707 / 0.000200 (0.003507) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027897 / 0.037411 (-0.009514) | 0.115452 / 0.014526 (0.100926) | 0.118830 / 0.176557 (-0.057726) | 0.176228 / 0.737135 (-0.560907) | 0.125966 / 0.296338 (-0.170372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436947 / 0.215209 (0.221738) | 4.355687 / 2.077655 (2.278033) | 2.195857 / 1.504120 (0.691737) | 2.028133 / 1.541195 (0.486938) | 2.119872 / 1.468490 
(0.651382) | 0.524256 / 4.584777 (-4.060521) | 3.864064 / 3.745712 (0.118352) | 3.446181 / 5.269862 (-1.823680) | 1.610307 / 4.565676 (-2.955370) | 0.065981 / 0.424275 (-0.358294) | 0.012172 / 0.007607 (0.004565) | 0.545341 / 0.226044 (0.319297) | 5.451728 / 2.268929 (3.182800) | 2.690734 / 55.444624 (-52.753890) | 2.368203 / 6.876477 (-4.508274) | 2.549533 / 2.142072 (0.407460) | 0.651296 / 4.805227 (-4.153931) | 0.143697 / 6.500664 (-6.356968) | 0.065170 / 0.075469 (-0.010299) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198898 / 1.841788 (-0.642890) | 15.349348 / 8.074308 (7.275040) | 15.314467 / 10.191392 (5.123075) | 0.177219 / 0.680424 (-0.503205) | 0.018223 / 0.534201 (-0.515978) | 0.396209 / 0.579283 (-0.183074) | 0.427810 / 0.434364 (-0.006554) | 0.475107 / 0.540337 (-0.065230) | 0.561224 / 1.386936 (-0.825712) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007024 / 0.011353 (-0.004329) | 0.004851 / 0.011008 (-0.006157) | 0.075031 / 0.038508 (0.036523) | 0.036411 / 0.023109 (0.013302) | 0.375999 / 0.275898 (0.100101) | 0.433033 / 0.323480 (0.109553) | 0.006089 / 0.007986 (-0.001897) | 0.005638 / 0.004328 (0.001309) | 0.072599 / 0.004250 (0.068348) | 0.048489 / 0.037052 (0.011436) | 0.381807 / 0.258489 (0.123318) | 0.441531 / 0.293841 (0.147691) | 0.029044 / 0.128546 (-0.099503) | 0.009052 / 0.075646 (-0.066595) | 0.080086 / 0.419271 (-0.339186) | 0.046919 / 0.043533 (0.003386) | 0.360399 / 0.255139 (0.105260) | 0.405445 / 0.283200 (0.122245) | 0.108815 / 0.141683 (-0.032868) | 1.415168 / 1.452155 (-0.036987) | 1.511756 / 1.492716 (0.019040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210287 / 0.018006 (0.192281) | 0.445139 / 0.000490 (0.444650) | 0.000386 / 0.000200 (0.000186) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030457 / 0.037411 (-0.006954) | 0.117225 / 0.014526 (0.102699) | 0.122833 / 0.176557 (-0.053724) | 0.170441 / 0.737135 (-0.566694) | 0.131589 / 0.296338 (-0.164750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446541 / 0.215209 (0.231332) | 4.471214 / 2.077655 (2.393560) | 2.145894 / 1.504120 (0.641774) | 1.958113 / 1.541195 (0.416919) | 2.069623 / 1.468490 (0.601132) | 0.527562 / 4.584777 (-4.057215) | 3.838285 / 3.745712 (0.092573) | 1.884780 / 5.269862 (-3.385081) | 1.088124 / 4.565676 (-3.477553) | 0.066099 / 0.424275 (-0.358176) | 0.011973 / 0.007607 (0.004366) | 0.540369 / 0.226044 (0.314325) | 5.403554 / 2.268929 (3.134626) | 2.749920 / 55.444624 (-52.694704) | 2.543169 / 6.876477 (-4.333308) | 2.403116 / 2.142072 (0.261043) | 0.638723 / 4.805227 (-4.166505) | 0.142232 / 6.500664 (-6.358432) | 0.065551 / 0.075469 (-0.009918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298307 / 1.841788 (-0.543481) | 15.986177 / 8.074308 (7.911869) | 15.530453 / 10.191392 (5.339061) | 0.160138 / 0.680424 (-0.520286) | 0.017988 / 0.534201 (-0.516213) | 0.397857 / 0.579283 (-0.181427) | 0.435071 / 0.434364 (0.000707) | 0.480096 / 0.540337 (-0.060241) | 0.589139 / 1.386936 (-0.797797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5bd9c974e08e059ce36dc0843256747016e843c5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006976 / 0.011353 (-0.004377) | 0.005068 / 0.011008 (-0.005940) | 0.098178 / 0.038508 (0.059670) | 0.035167 / 0.023109 (0.012057) | 0.324093 / 0.275898 (0.048195) | 0.350749 / 0.323480 (0.027269) | 0.006128 / 0.007986 (-0.001858) | 0.004361 / 0.004328 (0.000033) | 0.075412 / 0.004250 (0.071161) | 0.052083 / 0.037052 (0.015031) | 0.326726 / 0.258489 (0.068237) | 0.371450 / 0.293841 (0.077609) | 0.028522 / 0.128546 (-0.100025) | 0.009210 / 0.075646 (-0.066436) | 0.329296 / 0.419271 (-0.089976) | 0.051182 / 0.043533 (0.007649) | 0.319863 / 0.255139 (0.064724) | 0.329140 / 0.283200 (0.045941) | 0.111653 / 0.141683 (-0.030030) | 1.464205 / 1.452155 (0.012050) | 1.555779 / 1.492716 (0.063062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282372 / 0.018006 (0.264366) | 0.569227 / 0.000490 (0.568737) | 0.005289 / 0.000200 (0.005089) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029875 / 0.037411 (-0.007537) | 0.111889 / 0.014526 (0.097364) | 0.125678 / 0.176557 (-0.050878) | 0.184695 / 0.737135 (-0.552441) | 0.129737 / 0.296338 (-0.166602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417404 / 0.215209 (0.202195) | 4.172367 / 2.077655 (2.094712) | 2.008088 / 1.504120 (0.503968) | 1.813182 / 1.541195 (0.271988) | 1.882727 / 1.468490 
(0.414237) | 0.525764 / 4.584777 (-4.059013) | 3.815202 / 3.745712 (0.069490) | 1.884197 / 5.269862 (-3.385664) | 1.073779 / 4.565676 (-3.491897) | 0.066125 / 0.424275 (-0.358150) | 0.012473 / 0.007607 (0.004866) | 0.522197 / 0.226044 (0.296153) | 5.218486 / 2.268929 (2.949557) | 2.413846 / 55.444624 (-53.030779) | 2.093298 / 6.876477 (-4.783179) | 2.320583 / 2.142072 (0.178511) | 0.648832 / 4.805227 (-4.156395) | 0.146168 / 6.500664 (-6.354496) | 0.065869 / 0.075469 (-0.009600) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.181859 / 1.841788 (-0.659929) | 15.369517 / 8.074308 (7.295209) | 14.896270 / 10.191392 (4.704878) | 0.146793 / 0.680424 (-0.533630) | 0.017960 / 0.534201 (-0.516241) | 0.421801 / 0.579283 (-0.157482) | 0.438357 / 0.434364 (0.003993) | 0.524554 / 0.540337 (-0.015783) | 0.621041 / 1.386936 (-0.765895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007104 / 0.011353 (-0.004249) | 0.004895 / 0.011008 (-0.006113) | 0.075641 / 0.038508 (0.037133) | 0.034821 / 0.023109 (0.011712) | 0.363875 / 0.275898 (0.087977) | 0.403042 / 0.323480 (0.079562) | 0.006747 / 0.007986 (-0.001238) | 0.005793 / 0.004328 (0.001465) | 0.074709 / 0.004250 (0.070458) | 0.058801 / 0.037052 (0.021749) | 0.366900 / 0.258489 (0.108411) | 0.414442 / 0.293841 (0.120601) | 0.029099 / 0.128546 (-0.099448) | 0.009394 / 0.075646 (-0.066253) | 0.082612 / 0.419271 (-0.336659) | 0.049076 / 0.043533 (0.005543) | 0.358828 / 0.255139 (0.103689) | 0.378261 / 0.283200 (0.095061) | 0.122147 / 0.141683 (-0.019535) | 1.454155 / 1.452155 (0.002000) | 1.572437 / 1.492716 (0.079720) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.293133 / 0.018006 (0.275127) | 0.536785 / 0.000490 (0.536295) | 0.000457 / 0.000200 (0.000257) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031046 / 0.037411 (-0.006366) | 0.113929 / 0.014526 (0.099403) | 0.126222 / 0.176557 (-0.050335) | 0.173992 / 0.737135 (-0.563143) | 0.129635 / 0.296338 (-0.166704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441984 / 0.215209 (0.226775) | 4.406002 / 2.077655 (2.328348) | 2.173912 / 1.504120 (0.669792) | 2.000507 / 1.541195 (0.459312) | 2.172766 / 1.468490 (0.704276) | 0.524530 / 4.584777 (-4.060247) | 3.758827 / 3.745712 (0.013115) | 1.886701 / 5.269862 (-3.383160) | 1.073601 / 4.565676 (-3.492075) | 0.066137 / 0.424275 (-0.358139) | 0.011926 / 0.007607 (0.004319) | 0.541103 / 0.226044 (0.315059) | 5.404162 / 2.268929 (3.135233) | 2.634271 / 55.444624 (-52.810354) | 2.366156 / 6.876477 (-4.510321) | 2.566877 / 2.142072 (0.424804) | 0.639088 / 4.805227 (-4.166139) | 0.141810 / 6.500664 (-6.358854) | 0.065446 / 0.075469 (-0.010023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.288173 / 1.841788 (-0.553614) | 15.897051 / 8.074308 (7.822743) | 15.243404 / 10.191392 (5.052012) | 0.162380 / 0.680424 (-0.518043) | 0.017716 / 0.534201 (-0.516485) | 0.396400 / 0.579283 (-0.182883) | 0.420479 / 0.434364 (-0.013885) | 0.476238 / 0.540337 (-0.064099) | 0.583039 / 1.386936 (-0.803897) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bd373f69f12e926f4e2a489c14df36c38ce07bcc \"CML watermark\")\n",
"I fixed the docstring and type hint",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006310 / 0.011353 (-0.005043) | 0.004297 / 0.011008 (-0.006711) | 0.098288 / 0.038508 (0.059780) | 0.029295 / 0.023109 (0.006185) | 0.386804 / 0.275898 (0.110906) | 0.425717 / 0.323480 (0.102237) | 0.005516 / 0.007986 (-0.002470) | 0.005058 / 0.004328 (0.000730) | 0.074318 / 0.004250 (0.070068) | 0.040609 / 0.037052 (0.003557) | 0.388159 / 0.258489 (0.129670) | 0.428683 / 0.293841 (0.134842) | 0.026207 / 0.128546 (-0.102340) | 0.008655 / 0.075646 (-0.066991) | 0.321601 / 0.419271 (-0.097671) | 0.055329 / 0.043533 (0.011796) | 0.390452 / 0.255139 (0.135313) | 0.409084 / 0.283200 (0.125884) | 0.099555 / 0.141683 (-0.042128) | 1.484289 / 1.452155 (0.032134) | 1.549892 / 1.492716 (0.057176) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219466 / 0.018006 (0.201460) | 0.437288 / 0.000490 (0.436798) | 0.003556 / 0.000200 (0.003356) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023876 / 0.037411 (-0.013535) | 0.100205 / 0.014526 (0.085679) | 0.106365 / 0.176557 (-0.070191) | 0.164353 / 0.737135 (-0.572782) | 0.109987 / 0.296338 (-0.186352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418819 / 0.215209 (0.203610) | 4.168558 / 2.077655 (2.090903) | 1.862883 / 1.504120 (0.358764) | 1.673308 / 1.541195 (0.132114) | 1.742338 / 1.468490 
(0.273848) | 0.550113 / 4.584777 (-4.034664) | 3.492085 / 3.745712 (-0.253627) | 1.734579 / 5.269862 (-3.535283) | 1.006876 / 4.565676 (-3.558801) | 0.068014 / 0.424275 (-0.356261) | 0.012242 / 0.007607 (0.004634) | 0.520633 / 0.226044 (0.294588) | 5.214095 / 2.268929 (2.945167) | 2.319282 / 55.444624 (-53.125343) | 1.979521 / 6.876477 (-4.896956) | 2.099595 / 2.142072 (-0.042477) | 0.659306 / 4.805227 (-4.145921) | 0.135282 / 6.500664 (-6.365382) | 0.067417 / 0.075469 (-0.008052) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232099 / 1.841788 (-0.609689) | 13.967219 / 8.074308 (5.892910) | 14.347105 / 10.191392 (4.155713) | 0.146360 / 0.680424 (-0.534063) | 0.017021 / 0.534201 (-0.517180) | 0.363254 / 0.579283 (-0.216030) | 0.404391 / 0.434364 (-0.029973) | 0.428670 / 0.540337 (-0.111668) | 0.514942 / 1.386936 (-0.871994) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006360 / 0.011353 (-0.004993) | 0.004160 / 0.011008 (-0.006848) | 0.074856 / 0.038508 (0.036347) | 0.028624 / 0.023109 (0.005515) | 0.355624 / 0.275898 (0.079726) | 0.403678 / 0.323480 (0.080198) | 0.005253 / 0.007986 (-0.002732) | 0.004808 / 0.004328 (0.000480) | 0.074215 / 0.004250 (0.069964) | 0.040641 / 0.037052 (0.003588) | 0.358473 / 0.258489 (0.099984) | 0.414442 / 0.293841 (0.120601) | 0.025595 / 0.128546 (-0.102951) | 0.008506 / 0.075646 (-0.067140) | 0.081547 / 0.419271 (-0.337725) | 0.039719 / 0.043533 (-0.003814) | 0.355420 / 0.255139 (0.100281) | 0.380953 / 0.283200 (0.097753) | 0.100064 / 0.141683 (-0.041618) | 1.459639 / 1.452155 (0.007484) | 1.557288 / 1.492716 (0.064572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232837 / 0.018006 (0.214831) | 0.424788 / 0.000490 (0.424298) | 0.000397 / 0.000200 (0.000197) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026156 / 0.037411 (-0.011256) | 0.103633 / 0.014526 (0.089107) | 0.109633 / 0.176557 (-0.066923) | 0.159407 / 0.737135 (-0.577728) | 0.113874 / 0.296338 (-0.182465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471980 / 0.215209 (0.256771) | 4.724424 / 2.077655 (2.646769) | 2.459950 / 1.504120 (0.955830) | 2.280926 / 1.541195 (0.739731) | 2.368478 / 1.468490 (0.899987) | 0.552809 / 4.584777 (-4.031968) | 3.461985 / 3.745712 (-0.283728) | 1.757060 / 5.269862 (-3.512802) | 1.009599 / 4.565676 (-3.556077) | 0.068407 / 0.424275 (-0.355868) | 0.012341 / 0.007607 (0.004734) | 0.576287 / 0.226044 (0.350242) | 5.767331 / 2.268929 (3.498402) | 2.965743 / 55.444624 (-52.478882) | 2.644935 / 6.876477 (-4.231542) | 2.699663 / 2.142072 (0.557591) | 0.656005 / 4.805227 (-4.149222) | 0.136315 / 6.500664 (-6.364349) | 0.068355 / 0.075469 (-0.007114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308301 / 1.841788 (-0.533486) | 14.587268 / 8.074308 (6.512960) | 14.385670 / 10.191392 (4.194278) | 0.148154 / 0.680424 (-0.532270) | 0.016798 / 0.534201 (-0.517402) | 0.360761 / 0.579283 (-0.218523) | 0.392566 / 0.434364 (-0.041798) | 0.431604 / 0.540337 (-0.108734) | 0.528463 / 1.386936 (-0.858473) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2778e1ab255545cb2171379fd2276c85768a2ad \"CML watermark\")\n",
"let me know if it sounds good for you now @albertvillanova :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008414 / 0.011353 (-0.002939) | 0.005320 / 0.011008 (-0.005688) | 0.115585 / 0.038508 (0.077077) | 0.040815 / 0.023109 (0.017706) | 0.363453 / 0.275898 (0.087555) | 0.385954 / 0.323480 (0.062474) | 0.006463 / 0.007986 (-0.001523) | 0.005571 / 0.004328 (0.001242) | 0.084831 / 0.004250 (0.080581) | 0.050294 / 0.037052 (0.013242) | 0.375684 / 0.258489 (0.117195) | 0.394672 / 0.293841 (0.100831) | 0.033618 / 0.128546 (-0.094928) | 0.010451 / 0.075646 (-0.065195) | 0.388937 / 0.419271 (-0.030334) | 0.059974 / 0.043533 (0.016441) | 0.360437 / 0.255139 (0.105298) | 0.375149 / 0.283200 (0.091950) | 0.118397 / 0.141683 (-0.023286) | 1.726759 / 1.452155 (0.274604) | 1.811928 / 1.492716 (0.319212) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239186 / 0.018006 (0.221180) | 0.483728 / 0.000490 (0.483238) | 0.003285 / 0.000200 (0.003085) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030514 / 0.037411 (-0.006898) | 0.127111 / 0.014526 (0.112585) | 0.136185 / 0.176557 (-0.040371) | 0.204541 / 0.737135 (-0.532594) | 0.143228 / 0.296338 (-0.153111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465840 / 0.215209 (0.250631) | 4.611160 / 2.077655 (2.533506) | 2.119307 / 1.504120 (0.615187) | 1.882463 / 1.541195 (0.341268) | 1.946067 / 1.468490 
(0.477577) | 0.602352 / 4.584777 (-3.982425) | 4.576313 / 3.745712 (0.830601) | 2.112860 / 5.269862 (-3.157001) | 1.224388 / 4.565676 (-3.341289) | 0.073808 / 0.424275 (-0.350467) | 0.013157 / 0.007607 (0.005550) | 0.592208 / 0.226044 (0.366163) | 5.948971 / 2.268929 (3.680042) | 2.690144 / 55.444624 (-52.754480) | 2.236489 / 6.876477 (-4.639987) | 2.423617 / 2.142072 (0.281545) | 0.752053 / 4.805227 (-4.053175) | 0.168185 / 6.500664 (-6.332480) | 0.075454 / 0.075469 (-0.000015) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.407432 / 1.841788 (-0.434356) | 17.054545 / 8.074308 (8.980236) | 15.661362 / 10.191392 (5.469970) | 0.175027 / 0.680424 (-0.505397) | 0.020262 / 0.534201 (-0.513939) | 0.479052 / 0.579283 (-0.100231) | 0.509829 / 0.434364 (0.075465) | 0.601935 / 0.540337 (0.061598) | 0.726754 / 1.386936 (-0.660182) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007698 / 0.011353 (-0.003655) | 0.005267 / 0.011008 (-0.005741) | 0.085832 / 0.038508 (0.047324) | 0.041974 / 0.023109 (0.018865) | 0.418966 / 0.275898 (0.143068) | 0.466314 / 0.323480 (0.142834) | 0.006580 / 0.007986 (-0.001406) | 0.007063 / 0.004328 (0.002735) | 0.087120 / 0.004250 (0.082870) | 0.054908 / 0.037052 (0.017856) | 0.423813 / 0.258489 (0.165323) | 0.489878 / 0.293841 (0.196037) | 0.032823 / 0.128546 (-0.095723) | 0.010471 / 0.075646 (-0.065175) | 0.095839 / 0.419271 (-0.323432) | 0.056421 / 0.043533 (0.012888) | 0.420526 / 0.255139 (0.165387) | 0.447975 / 0.283200 (0.164775) | 0.126604 / 0.141683 (-0.015079) | 1.723097 / 1.452155 (0.270942) | 1.819539 / 1.492716 (0.326822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279604 / 0.018006 (0.261598) | 0.496129 / 0.000490 (0.495639) | 0.005419 / 0.000200 (0.005219) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035069 / 0.037411 (-0.002343) | 0.133064 / 0.014526 (0.118538) | 0.145404 / 0.176557 (-0.031152) | 0.205237 / 0.737135 (-0.531898) | 0.150684 / 0.296338 (-0.145654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513596 / 0.215209 (0.298387) | 5.104861 / 2.077655 (3.027206) | 2.487908 / 1.504120 (0.983788) | 2.271383 / 1.541195 (0.730188) | 2.421043 / 1.468490 (0.952553) | 0.625204 / 4.584777 (-3.959573) | 4.555389 / 3.745712 (0.809677) | 4.181518 / 5.269862 (-1.088344) | 1.676059 / 4.565676 (-2.889617) | 0.078786 / 0.424275 (-0.345489) | 0.014186 / 0.007607 (0.006579) | 0.638360 / 0.226044 (0.412315) | 6.367915 / 2.268929 (4.098986) | 3.095175 / 55.444624 (-52.349449) | 2.706707 / 6.876477 (-4.169769) | 2.735907 / 2.142072 (0.593835) | 0.756323 / 4.805227 (-4.048905) | 0.164783 / 6.500664 (-6.335881) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.667058 / 1.841788 (-0.174730) | 18.687459 / 8.074308 (10.613151) | 17.111596 / 10.191392 (6.920204) | 0.167218 / 0.680424 (-0.513206) | 0.020995 / 0.534201 (-0.513206) | 0.463985 / 0.579283 (-0.115298) | 0.502705 / 0.434364 (0.068341) | 0.562877 / 0.540337 (0.022540) | 0.682249 / 1.386936 (-0.704687) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#028822a5d657f6c1251f61b56a701c4d7d2ab0a7 \"CML watermark\")\n",
"> Maybe we should fix all the tests in test_iterable_dataset.py that contain .with_format(\"torch\")?\r\n\r\nthey're updated in https://github.com/huggingface/datasets/pull/5852",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005931 / 0.011353 (-0.005421) | 0.004004 / 0.011008 (-0.007004) | 0.098632 / 0.038508 (0.060124) | 0.027820 / 0.023109 (0.004711) | 0.302944 / 0.275898 (0.027046) | 0.332684 / 0.323480 (0.009204) | 0.005529 / 0.007986 (-0.002457) | 0.004814 / 0.004328 (0.000485) | 0.074477 / 0.004250 (0.070227) | 0.034875 / 0.037052 (-0.002178) | 0.304542 / 0.258489 (0.046053) | 0.342853 / 0.293841 (0.049012) | 0.025263 / 0.128546 (-0.103283) | 0.008558 / 0.075646 (-0.067089) | 0.322522 / 0.419271 (-0.096750) | 0.043980 / 0.043533 (0.000447) | 0.306618 / 0.255139 (0.051479) | 0.331692 / 0.283200 (0.048492) | 0.087434 / 0.141683 (-0.054248) | 1.464686 / 1.452155 (0.012531) | 1.575038 / 1.492716 (0.082322) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221920 / 0.018006 (0.203914) | 0.417108 / 0.000490 (0.416619) | 0.004625 / 0.000200 (0.004425) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023493 / 0.037411 (-0.013918) | 0.096684 / 0.014526 (0.082158) | 0.102035 / 0.176557 (-0.074522) | 0.166609 / 0.737135 (-0.570526) | 0.107456 / 0.296338 (-0.188883) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418713 / 0.215209 (0.203504) | 4.156913 / 2.077655 (2.079258) | 1.869064 / 1.504120 (0.364944) | 1.666219 / 1.541195 (0.125024) | 1.676491 / 1.468490 
(0.208001) | 0.553843 / 4.584777 (-4.030934) | 3.380471 / 3.745712 (-0.365241) | 2.970370 / 5.269862 (-2.299491) | 1.421597 / 4.565676 (-3.144080) | 0.068019 / 0.424275 (-0.356256) | 0.012995 / 0.007607 (0.005387) | 0.519410 / 0.226044 (0.293365) | 5.198251 / 2.268929 (2.929323) | 2.352969 / 55.444624 (-53.091655) | 2.008981 / 6.876477 (-4.867496) | 2.066519 / 2.142072 (-0.075553) | 0.658982 / 4.805227 (-4.146245) | 0.134341 / 6.500664 (-6.366323) | 0.065893 / 0.075469 (-0.009576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.207509 / 1.841788 (-0.634279) | 13.863838 / 8.074308 (5.789530) | 13.363359 / 10.191392 (3.171967) | 0.129076 / 0.680424 (-0.551348) | 0.016818 / 0.534201 (-0.517383) | 0.357956 / 0.579283 (-0.221327) | 0.386174 / 0.434364 (-0.048189) | 0.418663 / 0.540337 (-0.121674) | 0.498708 / 1.386936 (-0.888228) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006132 / 0.011353 (-0.005220) | 0.004335 / 0.011008 (-0.006673) | 0.078517 / 0.038508 (0.040009) | 0.027685 / 0.023109 (0.004576) | 0.357956 / 0.275898 (0.082058) | 0.392397 / 0.323480 (0.068918) | 0.005364 / 0.007986 (-0.002622) | 0.004922 / 0.004328 (0.000593) | 0.078061 / 0.004250 (0.073810) | 0.038889 / 0.037052 (0.001837) | 0.360952 / 0.258489 (0.102463) | 0.402790 / 0.293841 (0.108949) | 0.025542 / 0.128546 (-0.103004) | 0.008718 / 0.075646 (-0.066929) | 0.085799 / 0.419271 (-0.333472) | 0.044256 / 0.043533 (0.000723) | 0.358366 / 0.255139 (0.103227) | 0.393500 / 0.283200 (0.110300) | 0.096382 / 0.141683 (-0.045301) | 1.530889 / 1.452155 (0.078735) | 1.621007 / 1.492716 (0.128291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180572 / 0.018006 (0.162566) | 0.429478 / 0.000490 (0.428988) | 0.002966 / 0.000200 (0.002766) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012881) | 0.101401 / 0.014526 (0.086875) | 0.108208 / 0.176557 (-0.068349) | 0.159582 / 0.737135 (-0.577554) | 0.111170 / 0.296338 (-0.185168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465768 / 0.215209 (0.250559) | 4.706311 / 2.077655 (2.628656) | 2.437756 / 1.504120 (0.933636) | 2.245694 / 1.541195 (0.704499) | 2.282637 / 1.468490 (0.814147) | 0.552752 / 4.584777 (-4.032025) | 3.432992 / 3.745712 (-0.312720) | 1.800054 / 5.269862 (-3.469808) | 1.037852 / 4.565676 (-3.527824) | 0.068240 / 0.424275 (-0.356035) | 0.012433 / 0.007607 (0.004826) | 0.574867 / 0.226044 (0.348822) | 5.707623 / 2.268929 (3.438695) | 2.909746 / 55.444624 (-52.534878) | 2.585423 / 6.876477 (-4.291054) | 2.636801 / 2.142072 (0.494729) | 0.686593 / 4.805227 (-4.118634) | 0.136633 / 6.500664 (-6.364031) | 0.068598 / 0.075469 (-0.006871) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286628 / 1.841788 (-0.555159) | 14.333258 / 8.074308 (6.258949) | 14.355793 / 10.191392 (4.164401) | 0.133459 / 0.680424 (-0.546965) | 0.017090 / 0.534201 (-0.517111) | 0.358852 / 0.579283 (-0.220431) | 0.399929 / 0.434364 (-0.034435) | 0.422838 / 0.540337 (-0.117500) | 0.515199 / 1.386936 (-0.871737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7437d0f676da8634b5655a227cb8c3508c7372a2 \"CML watermark\")\n"
] | 2023-05-04T17:23:43 | 2023-05-31T09:43:26 | 2023-05-31T09:36:18 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5821",
"html_url": "https://github.com/huggingface/datasets/pull/5821",
"diff_url": "https://github.com/huggingface/datasets/pull/5821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5821.patch",
"merged_at": "2023-05-31T09:36:18"
} | Adding an optional `.iter_arrow` method to examples iterables. This makes it possible to use Arrow formatting in `map`/`filter`.
This will also be useful for torch formatting, since we can reuse the `TorchFormatter` that converts Arrow data to torch tensors.
Related to https://github.com/huggingface/datasets/issues/5793 and https://github.com/huggingface/datasets/issues/3444
Required for https://github.com/huggingface/datasets/pull/5852
### Example:
~10x speedup in `map`:
```python
from datasets import Dataset
import pyarrow.compute as pc
import time
ds = Dataset.from_dict({"a": range(100_000)})
ids = ds.to_iterable_dataset()
ids = ids.map(lambda x: {"a": [a + 10 for a in x["a"]]}, batched=True)
_start = time.time()
print(f"Python ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms")
# Python (100000 items): 695.7ms
ids = ds.to_iterable_dataset().with_format("arrow")
ids = ids.map(lambda t: t.set_column(0, "a", pc.add(t[0], 10)), batched=True)
ids = ids.with_format(None)
_start = time.time()
print(f"Arrow ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms)")
# Arrow (100000 items): 81.0ms)
```
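For completeness, here is a hypothetical `filter` sketch using the same Arrow formatting. This is my own illustration based on the `map` example above, not a snippet from the PR; it assumes the filter callable receives a `pyarrow.Table` and returns a boolean Arrow array used as a mask, which may not be the exact contract:

```python
import pyarrow.compute as pc

ids = ds.to_iterable_dataset().with_format("arrow")
# keep only rows where "a" is greater than 42; pc.greater returns a
# boolean array that is assumed to be used to mask each incoming table
ids = ids.filter(lambda t: pc.greater(t["a"], 42), batched=True)
```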
### Implementation details
I added an optional `iter_arrow` method to examples iterables. If an examples iterable has this method, it can be used to iterate over the examples in batches of Arrow tables. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5821/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5821/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5820/comments | https://api.github.com/repos/huggingface/datasets/issues/5820/events | https://github.com/huggingface/datasets/issues/5820 | 1,695,892,811 | I_kwDODunzps5lFUVL | 5,820 | Incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Thanks for reporting! You are more than welcome to improve `BuilderConfig`'s docstring.\r\n\r\nThis class serves an identical purpose as `tensorflow_datasets`'s `BuilderConfig`, and its docstring is [here](https://github.com/tensorflow/datasets/blob/a95e38b5bb018312c3d3720619c2a8ef83ebf57f/tensorflow_datasets/core/dataset_builder.py#L81), so feel free to re-use parts of it."
] | 2023-05-04T12:14:34 | 2023-05-05T12:31:56 | 2023-05-05T12:31:56 | CONTRIBUTOR | null | null | null | Hi guys!
I stumbled upon this docstring while working on a project.
Some of the attributes have missing descriptions; the permalink after the sketch below points to the relevant lines.
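For reference, here is a sketch of what a completed docstring could look like. The attribute descriptions are my own reading of how `BuilderConfig` is used, not official wording, and the type hints are simplified:

```python
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class BuilderConfig:
    """Base class for `DatasetBuilder` data configuration.

    Attributes:
        name (str): Name of the configuration, used to tell several
            configurations of the same dataset apart (e.g. "en" vs "fr").
        version (str, optional): Version of the configuration, e.g. "1.0.0".
        data_dir (str, optional): Path to a directory containing the source data files.
        data_files (str or dict, optional): Path(s) to the source data file(s),
            optionally mapped to split names.
        description (str, optional): Human-readable description of the configuration.
    """

    name: str = "default"
    version: Optional[str] = "0.0.0"
    data_dir: Optional[str] = None
    data_files: Optional[Union[str, dict]] = None
    description: Optional[str] = None
```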
https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5820/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5819/comments | https://api.github.com/repos/huggingface/datasets/issues/5819/events | https://github.com/huggingface/datasets/issues/5819 | 1,695,536,738 | I_kwDODunzps5lD9Zi | 5,819 | Cannot pickle error in Dataset.from_generator() | {
"login": "xinghaow99",
"id": 50691954,
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghaow99",
"html_url": "https://github.com/xinghaow99",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ",
"> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions).\r\n\r\nHi! Thank you for your reply! Everything works perfectly with your suggestion!\r\n\r\nClosing the issue.\r\n"
] | 2023-05-04T08:39:09 | 2023-05-05T19:20:59 | 2023-05-05T19:20:58 | NONE | null | null | null | ### Describe the bug
I'm trying to use Dataset.from_generator() to generate a large dataset, but the call fails with a pickling error (full traceback below).
### Steps to reproduce the bug
Code to reproduce:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig
import torch
from tqdm import tqdm
from datasets import load_dataset
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
model = torch.compile(model)  # the generator below references this compiled model, which dill cannot pickle
def generate_data(data_loader):
model.eval()
for batch in tqdm(data_loader):
input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
with torch.no_grad():
outputs = model.generate(input_ids, generation_config=generation_config)
decoder_hidden_states = outputs.decoder_hidden_states
for i, h in zip(batch['instruction'], decoder_hidden_states):
yield {"instruction": i, "decoder_hidden_states": h}
generation_config = GenerationConfig(
temperature=1,
max_new_tokens=1024,
do_sample=False,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
)
from datasets import Dataset  # load_dataset was already imported above
from torch.utils.data import DataLoader
dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k")
train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True)
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
dataset.save_to_disk("data/flant5_small_generation")
```
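For reference, a minimal variant of the generator that avoids the error, along the lines of the fix suggested in the comments above (compile the model inside the generator so the unpicklable compiled module is never part of what gets pickled for fingerprinting); this is a sketch, not a verified run:

```python
def generate_data(data_loader):
    # compiling here keeps the compiled module out of the objects that
    # datasets pickles when hashing the generator and its gen_kwargs
    compiled_model = torch.compile(model)
    compiled_model.eval()
    for batch in tqdm(data_loader):
        input_ids = tokenizer(
            batch["instruction"], return_tensors="pt", padding=True, truncation=True
        ).input_ids.to("cuda:0")
        with torch.no_grad():
            outputs = compiled_model.generate(input_ids, generation_config=generation_config)
        for i, h in zip(batch["instruction"], outputs.decoder_hidden_states):
            yield {"instruction": i, "decoder_hidden_states": h}
```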
### Expected behavior
The dataset should be generated and saved.
Instead, the following error occurred:
```
Traceback (most recent call last):
File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module>
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator
return GeneratorDatasetInputStream(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__
self.builder = Generator(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__
self.config, self.config_id = self._create_builder_config(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config
config_id = builder_config.create_config_id(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash
return cls.hash_default(value)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default
return cls.hash_bytes(dumps(value))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps
dump(obj, file)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump
Pickler(file, recurse=True).dump(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump
self.save(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ConfigModuleInstance' object
```
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5819/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5818/comments | https://api.github.com/repos/huggingface/datasets/issues/5818/events | https://github.com/huggingface/datasets/issues/5818 | 1,695,052,555 | I_kwDODunzps5lCHML | 5,818 | Ability to update a dataset | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"This [reply](https://discuss.huggingface.co/t/how-do-i-add-things-rows-to-an-already-saved-dataset/27423) from @mariosasko on the forums may be useful :)",
"In this case, I think we can avoid the `PermissionError` by unpacking the underlying `ConcatenationTable` and saving only the newly added data blocks (in new files).",
"Thanks @stevhliu and @mariosasko , so saving to individual files then loading them later, concatenating again and saving again is the recommended way. Good to know.\r\n\r\nQuestion that I hope doesn't sound rude: is this sort of thing (processing a dataset that doesn't fit in memory) outside of `datasets`'s core area of focus? Are there other tools you would recommend to do this sort of thing that play nice with `datasets`? Or is it just that I've found myself in a niche situation that hasn't specifically been catered for?"
] | 2023-05-04T01:08:13 | 2023-05-04T20:43:39 | null | NONE | null | null | null | ### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.save_to_disk("data/test1")
```
With the error:
```
PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself.
```
### Motivation
My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again.
Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW, I can't use `dataset.map()` for this task because it doesn't work with `num_proc` when adding rows, so it is confined to a single process, which is too slow.
The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing.
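For concreteness, here is a rough sketch of that file-per-shard approach (a sketch only; `process_shard`, `num_shards`, and the paths below are placeholder names, not an existing API):
```py
import datasets

num_shards = 8  # placeholder: pick a value so that one shard fits in memory
shard_paths = []
for i in range(num_shards):
    # Load the source, take one contiguous shard, and process only that shard
    shard = datasets.load_from_disk("data/test1").shard(
        num_shards=num_shards, index=i, contiguous=True
    )
    shard = process_shard(shard)  # placeholder per-shard processing
    path = f"data/test1_shards/shard_{i}"
    shard.save_to_disk(path)  # each processed shard gets its own directory
    shard_paths.append(path)

# Reload the processed shards and write the combined result once, to a new path
processed = datasets.concatenate_datasets(
    [datasets.load_from_disk(p) for p in shard_paths]
)
processed.save_to_disk("data/test1_processed")
```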
### Your contribution
na | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5818/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5817/comments | https://api.github.com/repos/huggingface/datasets/issues/5817/events | https://github.com/huggingface/datasets/issues/5817 | 1,694,891,866 | I_kwDODunzps5lBf9a | 5,817 | Setting `num_proc` errors when `.map` returns additional items. | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Unfortunately I couldn't reproduce on my side locally and with datasets 2.11 and python 3.10.11 on colab.\r\nWhat version of `multiprocess` are you using ?",
"I've got `multiprocess` version `0.70.14`.\r\n\r\nI've done some more testing and the error only occurs in PyCharm's Python Console. It seems to be [this PyCharm bug](https://youtrack.jetbrains.com/issue/PY-51922/Multiprocessing-bug.-Can-only-run-in-debugger.), I'll close this.",
"For other users facing this, my workaround is to conditionally set `num_proc` so I can work interactively in the PyCharm Python Console while developing, then when I'm ready to run on the whole dataset, run it as a script and use multiprocessing.\r\n\r\n```py\r\nmapped_ds = ds.map(\r\n my_map_function,\r\n batched=True,\r\n remove_columns=ds.column_names,\r\n num_proc=1 if \"PYCHARM_HOSTED\" in os.environ else 8,\r\n)\r\n```"
] | 2023-05-03T21:46:53 | 2023-05-04T21:14:21 | 2023-05-04T20:22:25 | NONE | null | null | null | ### Describe the bug
I'm using a map function that returns more rows than are passed in.
If I try to use `num_proc`, I get:
```
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in iflatmap_unordered(
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1372, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 391, in _recv
raise EOFError
EOFError
```
### Steps to reproduce the bug
This is copied from the [Datasets docs](https://huggingface.co/docs/datasets/v2.12.0/en/process#batch-processing), with `num_proc` added, and will error.
```py
import datasets
dataset = ... # any old dataset
def chunk_examples(examples):
    chunks = []
    for sentence in examples["text"]:
        chunks += [sentence[i : i + 50] for i in range(0, len(sentence), 50)]
    return {"chunks": chunks}

chunked_dataset = dataset.map(
    chunk_examples,
    batched=True,
    remove_columns=dataset.column_names,
    num_proc=2,  # Remove and it works
)
```
### Expected behavior
Should work fine. On a related note, multiprocessing also fails if there is a metaclass anywhere in scope (and there are plenty in the standard library). This is the fault of `dill` and is a long-standing issue.
Have you considered using Loky for multiprocessing? I've found that the built-in `datasets` multiprocessing breaks more than it works, so I have written my own function using `loky`. For reference:
```py
import datasets
import loky
def fast_loop(dataset: datasets.Dataset, func, num_proc=None):
    if num_proc is None:
        import os
        # Default to the number of CPUs available to this process
        num_proc = len(os.sched_getaffinity(0))
    # Split the dataset into one contiguous shard per worker
    shards = [
        dataset.shard(num_shards=num_proc, index=i, contiguous=True)
        for i in range(num_proc)
    ]
    executor = loky.get_reusable_executor(max_workers=num_proc)
    # Process the shards in parallel, then stitch the results back together
    results = executor.map(func, shards)
    return datasets.combine.concatenate_datasets(list(results))
```
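A hypothetical call, where `shard_fn` is a placeholder that maps one `Dataset` shard to a processed `Dataset`:
```py
def shard_fn(shard):
    # placeholder: any whole-shard transform works here
    return shard.map(chunk_examples, batched=True, remove_columns=shard.column_names)

chunked_dataset = fast_loop(dataset, shard_fn, num_proc=8)
```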
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5817/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5816/comments | https://api.github.com/repos/huggingface/datasets/issues/5816/events | https://github.com/huggingface/datasets/pull/5816 | 1,694,590,856 | PR_kwDODunzps5Ps4t9 | 5,816 | Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007862 / 0.011353 (-0.003491) | 0.005747 / 0.011008 (-0.005261) | 0.106818 / 0.038508 (0.068310) | 0.036630 / 0.023109 (0.013521) | 0.344218 / 0.275898 (0.068320) | 0.398803 / 0.323480 (0.075324) | 0.006187 / 0.007986 (-0.001799) | 0.005686 / 0.004328 (0.001358) | 0.078568 / 0.004250 (0.074318) | 0.051786 / 0.037052 (0.014734) | 0.361736 / 0.258489 (0.103247) | 0.396323 / 0.293841 (0.102482) | 0.037943 / 0.128546 (-0.090603) | 0.013957 / 0.075646 (-0.061689) | 0.366782 / 0.419271 (-0.052490) | 0.054700 / 0.043533 (0.011167) | 0.349692 / 0.255139 (0.094553) | 0.366481 / 0.283200 (0.083281) | 0.117394 / 0.141683 (-0.024289) | 1.593156 / 1.452155 (0.141001) | 1.708864 / 1.492716 (0.216148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229529 / 0.018006 (0.211523) | 0.490531 / 0.000490 (0.490042) | 0.002934 / 0.000200 (0.002734) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028074 / 0.037411 (-0.009337) | 0.122321 / 0.014526 (0.107795) | 0.129120 / 0.176557 (-0.047436) | 0.188413 / 0.737135 (-0.548722) | 0.138983 / 0.296338 (-0.157355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479350 / 0.215209 (0.264141) | 4.926201 / 2.077655 (2.848546) | 2.265557 / 1.504120 (0.761437) | 2.014580 / 1.541195 (0.473386) | 2.120517 / 1.468490 
(0.652027) | 0.795334 / 4.584777 (-3.789443) | 4.509754 / 3.745712 (0.764042) | 4.328313 / 5.269862 (-0.941548) | 2.153304 / 4.565676 (-2.412373) | 0.102942 / 0.424275 (-0.321333) | 0.053504 / 0.007607 (0.045896) | 0.609392 / 0.226044 (0.383347) | 6.114048 / 2.268929 (3.845119) | 2.773306 / 55.444624 (-52.671318) | 2.443434 / 6.876477 (-4.433042) | 2.612005 / 2.142072 (0.469932) | 0.950435 / 4.805227 (-3.854792) | 0.194081 / 6.500664 (-6.306583) | 0.074513 / 0.075469 (-0.000956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402897 / 1.841788 (-0.438891) | 18.263033 / 8.074308 (10.188724) | 16.579809 / 10.191392 (6.388417) | 0.212319 / 0.680424 (-0.468104) | 0.020468 / 0.534201 (-0.513733) | 0.494850 / 0.579283 (-0.084433) | 0.483790 / 0.434364 (0.049426) | 0.572073 / 0.540337 (0.031735) | 0.684353 / 1.386936 (-0.702583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009732 / 0.011353 (-0.001621) | 0.005901 / 0.011008 (-0.005107) | 0.084568 / 0.038508 (0.046060) | 0.038743 / 0.023109 (0.015634) | 0.431323 / 0.275898 (0.155425) | 0.472124 / 0.323480 (0.148644) | 0.006255 / 0.007986 (-0.001731) | 0.005892 / 0.004328 (0.001563) | 0.081913 / 0.004250 (0.077662) | 0.055560 / 0.037052 (0.018507) | 0.442857 / 0.258489 (0.184368) | 0.481887 / 0.293841 (0.188046) | 0.040730 / 0.128546 (-0.087816) | 0.014339 / 0.075646 (-0.061307) | 0.099258 / 0.419271 (-0.320013) | 0.054692 / 0.043533 (0.011159) | 0.436323 / 0.255139 (0.181184) | 0.461046 / 0.283200 (0.177846) | 0.125972 / 0.141683 (-0.015710) | 1.673173 / 1.452155 (0.221018) | 1.781364 / 1.492716 (0.288648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271450 / 0.018006 (0.253444) | 0.514484 / 0.000490 (0.513994) | 0.000455 / 0.000200 (0.000255) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036104 / 0.037411 (-0.001308) | 0.143306 / 0.014526 (0.128780) | 0.151105 / 0.176557 (-0.025451) | 0.210737 / 0.737135 (-0.526399) | 0.151404 / 0.296338 (-0.144934) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573613 / 0.215209 (0.358404) | 5.828222 / 2.077655 (3.750567) | 2.993028 / 1.504120 (1.488908) | 2.617900 / 1.541195 (1.076706) | 2.754673 / 1.468490 (1.286183) | 1.010624 / 4.584777 (-3.574152) | 4.971261 / 3.745712 (1.225549) | 4.382017 / 5.269862 (-0.887845) | 1.971894 / 4.565676 (-2.593782) | 0.104404 / 0.424275 (-0.319871) | 0.014595 / 0.007607 (0.006988) | 0.657684 / 0.226044 (0.431639) | 6.566151 / 2.268929 (4.297222) | 3.221378 / 55.444624 (-52.223246) | 2.809402 / 6.876477 (-4.067075) | 2.882426 / 2.142072 (0.740354) | 1.006134 / 4.805227 (-3.799093) | 0.204469 / 6.500664 (-6.296196) | 0.078147 / 0.075469 (0.002678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574768 / 1.841788 (-0.267020) | 18.193335 / 8.074308 (10.119027) | 17.275353 / 10.191392 (7.083961) | 0.166890 / 0.680424 (-0.513534) | 0.020612 / 0.534201 (-0.513589) | 0.496179 / 0.579283 (-0.083104) | 0.507824 / 0.434364 (0.073460) | 0.620984 / 0.540337 (0.080647) | 0.749727 / 1.386936 (-0.637209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06988d3e01820b93ebcdc76158339fd6f67329dc \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006534 / 0.011353 (-0.004819) | 0.004456 / 0.011008 (-0.006553) | 0.097978 / 0.038508 (0.059470) | 0.027614 / 0.023109 (0.004505) | 0.309833 / 0.275898 (0.033935) | 0.337006 / 0.323480 (0.013526) | 0.004986 / 0.007986 (-0.002999) | 0.004521 / 0.004328 (0.000193) | 0.075053 / 0.004250 (0.070803) | 0.037095 / 0.037052 (0.000043) | 0.305430 / 0.258489 (0.046941) | 0.345298 / 0.293841 (0.051457) | 0.029784 / 0.128546 (-0.098762) | 0.011449 / 0.075646 (-0.064197) | 0.323346 / 0.419271 (-0.095925) | 0.042188 / 0.043533 (-0.001345) | 0.318653 / 0.255139 (0.063514) | 0.333799 / 0.283200 (0.050599) | 0.088194 / 0.141683 (-0.053488) | 1.511012 / 1.452155 (0.058857) | 1.578205 / 1.492716 (0.085489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229695 / 0.018006 (0.211689) | 0.413276 / 0.000490 (0.412786) | 0.009142 / 0.000200 (0.008942) | 0.000537 / 0.000054 (0.000482) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024327 / 0.037411 (-0.013084) | 0.097953 / 0.014526 (0.083427) | 0.105551 / 0.176557 (-0.071005) | 0.169397 / 0.737135 (-0.567738) | 0.109784 / 0.296338 (-0.186554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417713 / 0.215209 (0.202504) | 4.190703 / 2.077655 (2.113048) | 1.873504 / 1.504120 (0.369384) | 1.664540 / 1.541195 (0.123346) | 1.704539 / 1.468490 
(0.236049) | 0.699840 / 4.584777 (-3.884937) | 3.480605 / 3.745712 (-0.265107) | 1.844229 / 5.269862 (-3.425633) | 1.155793 / 4.565676 (-3.409883) | 0.083013 / 0.424275 (-0.341262) | 0.012414 / 0.007607 (0.004807) | 0.518357 / 0.226044 (0.292313) | 5.186136 / 2.268929 (2.917207) | 2.329263 / 55.444624 (-53.115361) | 1.991395 / 6.876477 (-4.885081) | 2.074563 / 2.142072 (-0.067509) | 0.801388 / 4.805227 (-4.003839) | 0.152236 / 6.500664 (-6.348428) | 0.067414 / 0.075469 (-0.008055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197290 / 1.841788 (-0.644497) | 13.666537 / 8.074308 (5.592229) | 13.017190 / 10.191392 (2.825798) | 0.142109 / 0.680424 (-0.538314) | 0.016321 / 0.534201 (-0.517880) | 0.378434 / 0.579283 (-0.200849) | 0.381101 / 0.434364 (-0.053263) | 0.444113 / 0.540337 (-0.096225) | 0.521448 / 1.386936 (-0.865488) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004408 / 0.011008 (-0.006600) | 0.077100 / 0.038508 (0.038592) | 0.027361 / 0.023109 (0.004251) | 0.358170 / 0.275898 (0.082272) | 0.390125 / 0.323480 (0.066646) | 0.004736 / 0.007986 (-0.003250) | 0.004663 / 0.004328 (0.000334) | 0.077626 / 0.004250 (0.073376) | 0.037103 / 0.037052 (0.000051) | 0.360044 / 0.258489 (0.101555) | 0.411539 / 0.293841 (0.117698) | 0.030173 / 0.128546 (-0.098373) | 0.011618 / 0.075646 (-0.064028) | 0.086036 / 0.419271 (-0.333235) | 0.039077 / 0.043533 (-0.004456) | 0.382223 / 0.255139 (0.127084) | 0.384817 / 0.283200 (0.101618) | 0.094591 / 0.141683 (-0.047092) | 1.494961 / 1.452155 (0.042807) | 1.583769 / 1.492716 (0.091053) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227467 / 0.018006 (0.209460) | 0.396648 / 0.000490 (0.396159) | 0.000382 / 0.000200 (0.000182) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025346 / 0.037411 (-0.012065) | 0.102086 / 0.014526 (0.087560) | 0.108570 / 0.176557 (-0.067986) | 0.158777 / 0.737135 (-0.578359) | 0.112885 / 0.296338 (-0.183453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460731 / 0.215209 (0.245522) | 4.556450 / 2.077655 (2.478795) | 2.258185 / 1.504120 (0.754065) | 2.122584 / 1.541195 (0.581389) | 2.224638 / 1.468490 (0.756148) | 0.691909 / 4.584777 (-3.892868) | 3.482634 / 3.745712 (-0.263078) | 2.772837 / 5.269862 (-2.497024) | 1.533897 / 4.565676 (-3.031780) | 0.083025 / 0.424275 (-0.341250) | 0.012629 / 0.007607 (0.005022) | 0.548397 / 0.226044 (0.322352) | 5.492005 / 2.268929 (3.223077) | 2.669841 / 55.444624 (-52.774784) | 2.366947 / 6.876477 (-4.509529) | 2.496795 / 2.142072 (0.354722) | 0.804868 / 4.805227 (-4.000359) | 0.151686 / 6.500664 (-6.348978) | 0.068333 / 0.075469 (-0.007136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320414 / 1.841788 (-0.521374) | 14.367567 / 8.074308 (6.293258) | 14.047702 / 10.191392 (3.856310) | 0.129087 / 0.680424 (-0.551337) | 0.016658 / 0.534201 (-0.517543) | 0.381949 / 0.579283 (-0.197335) | 0.390105 / 0.434364 (-0.044258) | 0.445947 / 0.540337 (-0.094390) | 0.531074 / 1.386936 (-0.855862) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c67c9f3797ecc231b34d87ddef489c1238ec4046 \"CML watermark\")\n"
] | 2023-05-03T18:34:18 | 2023-05-04T14:31:55 | 2023-05-04T14:24:49 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5816",
"html_url": "https://github.com/huggingface/datasets/pull/5816",
"diff_url": "https://github.com/huggingface/datasets/pull/5816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5816.patch",
"merged_at": "2023-05-04T14:24:49"
} | Preserve the `stopping_strategy` in `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved from multiple sources with probabilities.
Fix #5812
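Illustratively (this is a sketch rather than the actual diff; the exact signature and attribute names are assumptions), the change amounts to forwarding the configured strategy when the iterable is rebuilt, instead of letting it fall back to the default:
```python
# Hypothetical sketch: keep the configured stopping_strategy when the
# randomly-cycling iterable is re-created for sharding/shuffling.
def shard_data_sources(self, worker_id, num_workers):
    return RandomlyCyclingMultiSourcesExamplesIterable(
        [it.shard_data_sources(worker_id, num_workers) for it in self.ex_iterables],
        generator=self.generator,
        probabilities=self.probabilities,
        stopping_strategy=self.stopping_strategy,  # previously dropped
    )
```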
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5816/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5814/comments | https://api.github.com/repos/huggingface/datasets/issues/5814/events | https://github.com/huggingface/datasets/pull/5814 | 1,693,216,778 | PR_kwDODunzps5PoOQ9 | 5,814 | Repro windows crash | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5814). All of your documentation changes will be reflected on that endpoint."
] | 2023-05-02T23:30:18 | 2023-05-02T23:47:07 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5814",
"html_url": "https://github.com/huggingface/datasets/pull/5814",
"diff_url": "https://github.com/huggingface/datasets/pull/5814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5814.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5814/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5815/comments | https://api.github.com/repos/huggingface/datasets/issues/5815/events | https://github.com/huggingface/datasets/issues/5815 | 1,693,701,743 | I_kwDODunzps5k89Zv | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | {
"login": "hrbigelow",
"id": 5355286,
"node_id": "MDQ6VXNlcjUzNTUyODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5355286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hrbigelow",
"html_url": "https://github.com/hrbigelow",
"followers_url": "https://api.github.com/users/hrbigelow/followers",
"following_url": "https://api.github.com/users/hrbigelow/following{/other_user}",
"gists_url": "https://api.github.com/users/hrbigelow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hrbigelow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hrbigelow/subscriptions",
"organizations_url": "https://api.github.com/users/hrbigelow/orgs",
"repos_url": "https://api.github.com/users/hrbigelow/repos",
"events_url": "https://api.github.com/users/hrbigelow/events{/privacy}",
"received_events_url": "https://api.github.com/users/hrbigelow/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @hrbigelow , I'm no expert for such a question so I'll ping @lhoestq from the `datasets` library (also this issue could be moved there if someone with permission can do it :) )",
"Hi ! Many datasets are made of several files, and how they are parsed often requires a python script. Because of that, datasets like wmt14 are not available as a single file on HF. Though you can create this file using `datasets`:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"wmt14\", \"de-en\", split=\"train\")\r\n\r\nds.to_json(\"wmt14-train.json\")\r\n# OR to parquet, which is compressed:\r\n# ds.to_parquet(\"wmt14-train.parquet\")\r\n```\r\n\r\nWe are also working on providing parquet exports for all datasets, but wmt14 is not supported yet (we're rolling it out for datasets <1GB first). They're usually available in the `refs/convert/parquet` branch (empty for wmt14):\r\n\r\n<img width=\"267\" alt=\"image\" src=\"https://user-images.githubusercontent.com/42851186/235878909-7339f5a4-be19-4ada-85d8-8a50d23acf35.png\">\r\n",
"also cc @nateraw for visibility on this (and cc @osanseviero too)",
"I've requested support for creating a Kaggle dataset from an imported HF dataset repo on their \"forum\" here: https://www.kaggle.com/discussions/product-feedback/427142 (upvotes appreciated π)"
] | 2023-05-02T21:43:33 | 2023-07-26T16:13:31 | null | NONE | null | null | null | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example:
![image](https://user-images.githubusercontent.com/5355286/235792394-7c559d07-4aff-45b7-ad2b-9c5280c88415.png)
Is there some mechanism from Hugging Face to represent a dataset (such as that from `load_dataset('wmt14', 'de-en', split='train')`) as a single file? Or is there some other way to get it into a Kaggle dataset so that I can use the Hugging Face `datasets` module to process and consume it inside a Kaggle notebook?
Thanks in advance!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5815/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5813/comments | https://api.github.com/repos/huggingface/datasets/issues/5813/events | https://github.com/huggingface/datasets/pull/5813 | 1,691,908,535 | PR_kwDODunzps5Pj0_E | 5,813 | [DO-NOT-MERGE] Debug Windows issue at #3 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-02T07:19:34 | 2023-05-02T07:21:30 | 2023-05-02T07:21:30 | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5813",
"html_url": "https://github.com/huggingface/datasets/pull/5813",
"diff_url": "https://github.com/huggingface/datasets/pull/5813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5813.patch",
"merged_at": null
} | TBD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5813/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5812/comments | https://api.github.com/repos/huggingface/datasets/issues/5812/events | https://github.com/huggingface/datasets/issues/5812 | 1,691,798,169 | I_kwDODunzps5k1sqZ | 5,812 | Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy | {
"login": "off99555",
"id": 15215732,
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/off99555",
"html_url": "https://github.com/off99555",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"repos_url": "https://api.github.com/users/off99555/repos",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-02T05:26:17 | 2023-05-04T14:24:51 | 2023-05-04T14:24:51 | NONE | null | null | null | ### Describe the bug
Shuffling an interleaved `IterableDataset` with the "all_exhausted" stopping strategy yields non-exhaustive sampling.
### Steps to reproduce the bug
```py
from datasets import IterableDataset, interleave_datasets
def gen(bias, length):
    for i in range(length):
        yield dict(a=bias + i)
seed = 42
probabilities = [0.2, 0.6, 0.2]
d1 = IterableDataset.from_generator(lambda: gen(0, 3))
d2 = IterableDataset.from_generator(lambda: gen(10, 4))
d3 = IterableDataset.from_generator(lambda: gen(20, 3))
ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy='all_exhausted')
ds = ds.shuffle(buffer_size=1000)
for x in ds:
    print(x)
```
This code produces
```
{'a': 0}
{'a': 22}
{'a': 20}
{'a': 21}
{'a': 10}
{'a': 1}
```
### Expected behavior
It should keep producing examples until all the datasets are exhausted.
If you comment out the shuffle line, all the datasets are exhausted properly.
Here is the output if you comment out shuffling:
```
{'a': 10}
{'a': 11}
{'a': 20}
{'a': 12}
{'a': 0}
{'a': 21}
{'a': 13}
{'a': 10}
{'a': 1}
{'a': 11}
{'a': 12}
{'a': 22}
{'a': 13}
{'a': 20}
{'a': 10}
{'a': 11}
{'a': 12}
{'a': 2}
```
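Until this is fixed, a possible stopgap (a sketch I have not verified) is to shuffle each source before interleaving, so the interleaved iterable itself is never shuffled and its stopping strategy is left untouched:
```py
d1 = IterableDataset.from_generator(lambda: gen(0, 3)).shuffle(seed=seed, buffer_size=1000)
d2 = IterableDataset.from_generator(lambda: gen(10, 4)).shuffle(seed=seed, buffer_size=1000)
d3 = IterableDataset.from_generator(lambda: gen(20, 3)).shuffle(seed=seed, buffer_size=1000)
ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy="all_exhausted")
```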
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
This was run on Google Colab. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5812/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5811/comments | https://api.github.com/repos/huggingface/datasets/issues/5811/events | https://github.com/huggingface/datasets/issues/5811 | 1,689,919,046 | I_kwDODunzps5kuh5G | 5,811 | load_dataset: TypeError: 'NoneType' object is not callable, on local dataset filename changes | {
"login": "durapensa",
"id": 50685483,
"node_id": "MDQ6VXNlcjUwNjg1NDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/50685483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/durapensa",
"html_url": "https://github.com/durapensa",
"followers_url": "https://api.github.com/users/durapensa/followers",
"following_url": "https://api.github.com/users/durapensa/following{/other_user}",
"gists_url": "https://api.github.com/users/durapensa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/durapensa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/durapensa/subscriptions",
"organizations_url": "https://api.github.com/users/durapensa/orgs",
"repos_url": "https://api.github.com/users/durapensa/repos",
"events_url": "https://api.github.com/users/durapensa/events{/privacy}",
"received_events_url": "https://api.github.com/users/durapensa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This error means a `DatasetBuilder` subclass that generates the dataset could not be found inside the script, so make sure `dushowxa-characters/dushowxa-characters.py `is a valid dataset script (assuming `path_or_dataset` is `dushowxa-characters`)\r\n\r\nAlso, we should improve the error to make it more obvious what the problem is."
] | 2023-04-30T13:27:17 | 2023-05-05T17:44:03 | null | NONE | null | null | null | ### Describe the bug
I've adapted Databricks' [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py) to train using a local dataset, which has been working.
```python
2023-04-30 09:10:52 INFO [training.trainer] Loading dataset from dushowxa-characters
Traceback (most recent call last):
File "/data/dushowxa-dolly/train_dushowxa.py", line 26, in <module>
load_training_dataset()
File "/data/dushowxa-dolly/training/trainer.py", line 89, in load_training_dataset
dataset = load_dataset(path_or_dataset)["train"]
File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1773, in load_dataset
builder_instance = load_dataset_builder(
File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1528, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
TypeError: 'NoneType' object is not callable
```
The local dataset filenames were of the form `dushowxa-characters/expanse-dushowxa-characters.json` and are now of the form `dushowxa-characters/dushowxa-characters.json` (the word `expanse-` was removed from the filenames). Is this perhaps a dataset caching issue?
I have attempted to manually clear caches, but to no effect:
```sh
rm -rfv ~/.cache/huggingface/datasets/*
rm -rfv ~/.cache/huggingface/modules/*
```
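As a sanity check (a sketch, assuming the file is plain JSON/JSON Lines, and not the actual fix), the data also loads through the built-in `json` builder, which bypasses the `dushowxa-characters.py` dataset script entirely:
```python
from datasets import load_dataset

# Read the renamed JSON file directly, without the custom dataset script
dataset = load_dataset(
    "json",
    data_files="dushowxa-characters/dushowxa-characters.json",
)["train"]
```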
### Steps to reproduce the bug
Run `python3 train_dushowxa.py` (adapted from Databricks' [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py)).
### Expected behavior
Training succeeds as before local dataset filenames were changed.
### Environment info
Ubuntu 22.04, Python 3.10.6, venv
```python
accelerate>=0.16.0,<1
click>=8.0.4,<9
datasets>=2.10.0,<3
deepspeed>=0.9.0,<1
transformers[torch]>=4.28.1,<5
langchain>=0.0.139
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5811/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5810/comments | https://api.github.com/repos/huggingface/datasets/issues/5810/events | https://github.com/huggingface/datasets/pull/5810 | 1,689,917,822 | PR_kwDODunzps5PdJHI | 5,810 | Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict` | {
"login": "yuukicammy",
"id": 3927621,
"node_id": "MDQ6VXNlcjM5Mjc2MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuukicammy",
"html_url": "https://github.com/yuukicammy",
"followers_url": "https://api.github.com/users/yuukicammy/followers",
"following_url": "https://api.github.com/users/yuukicammy/following{/other_user}",
"gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions",
"organizations_url": "https://api.github.com/users/yuukicammy/orgs",
"repos_url": "https://api.github.com/users/yuukicammy/repos",
"events_url": "https://api.github.com/users/yuukicammy/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuukicammy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.",
"- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed that the test passes.\r\n\r\nPlease check the contents. @lhoestq \r\n\r\n5715a7e64bdd2951e6705aee58d592392e1538d6",
"Cool ! You can run `make style` to fix code formatting to fix the ci",
"I had forgotten about it. I did it. @lhoestq \r\n00248926a37c6f1387614aa388c36fdc105a59f5",
"Thanks for putting this together @yuukicammy ! Looking forward to using this new addition ASAP. \r\n@lhoestq - sorry to bother you with this, but if this looks good to you, any chance we could get this merged in? \r\n\r\nThanks again to you both! ",
"Yup there's just one test to remove and we can merge",
"Sorry for my understanding wrong! Correspondence has been addressed. @lhoestq \r\n ca511b7b29fdde51ffd69b58bda79220472e9e94\r\n\r\nThanks for your comment! @brianhill11 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006788 / 0.011353 (-0.004564) | 0.004372 / 0.011008 (-0.006636) | 0.097746 / 0.038508 (0.059238) | 0.034858 / 0.023109 (0.011749) | 0.298122 / 0.275898 (0.022224) | 0.335272 / 0.323480 (0.011792) | 0.005810 / 0.007986 (-0.002175) | 0.004944 / 0.004328 (0.000616) | 0.072352 / 0.004250 (0.068101) | 0.041730 / 0.037052 (0.004678) | 0.316482 / 0.258489 (0.057992) | 0.338710 / 0.293841 (0.044869) | 0.027975 / 0.128546 (-0.100571) | 0.008746 / 0.075646 (-0.066901) | 0.329336 / 0.419271 (-0.089935) | 0.051327 / 0.043533 (0.007794) | 0.300695 / 0.255139 (0.045556) | 0.322813 / 0.283200 (0.039613) | 0.101133 / 0.141683 (-0.040550) | 1.422767 / 1.452155 (-0.029388) | 1.538364 / 1.492716 (0.045648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.016698 / 0.018006 (-0.001308) | 0.447042 / 0.000490 (0.446552) | 0.007609 / 0.000200 (0.007409) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026732 / 0.037411 (-0.010679) | 0.108295 / 0.014526 (0.093769) | 0.116905 / 0.176557 (-0.059652) | 0.173166 / 0.737135 (-0.563969) | 0.122560 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394893 / 0.215209 (0.179683) | 3.950314 / 2.077655 (1.872659) | 1.780576 / 1.504120 (0.276456) | 1.579855 / 1.541195 (0.038660) | 1.711197 / 1.468490 
(0.242707) | 0.521469 / 4.584777 (-4.063308) | 3.838850 / 3.745712 (0.093138) | 3.101095 / 5.269862 (-2.168767) | 1.531574 / 4.565676 (-3.034102) | 0.065291 / 0.424275 (-0.358984) | 0.011979 / 0.007607 (0.004372) | 0.496543 / 0.226044 (0.270498) | 4.965446 / 2.268929 (2.696517) | 2.250788 / 55.444624 (-53.193837) | 1.923231 / 6.876477 (-4.953245) | 2.075372 / 2.142072 (-0.066700) | 0.638708 / 4.805227 (-4.166519) | 0.142048 / 6.500664 (-6.358616) | 0.064225 / 0.075469 (-0.011244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211799 / 1.841788 (-0.629989) | 14.791822 / 8.074308 (6.717514) | 14.274993 / 10.191392 (4.083601) | 0.163942 / 0.680424 (-0.516482) | 0.017541 / 0.534201 (-0.516660) | 0.396440 / 0.579283 (-0.182843) | 0.427502 / 0.434364 (-0.006861) | 0.494273 / 0.540337 (-0.046064) | 0.586877 / 1.386936 (-0.800059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004506) | 0.004854 / 0.011008 (-0.006154) | 0.075654 / 0.038508 (0.037146) | 0.034295 / 0.023109 (0.011186) | 0.378095 / 0.275898 (0.102197) | 0.407833 / 0.323480 (0.084353) | 0.006155 / 0.007986 (-0.001830) | 0.004259 / 0.004328 (-0.000070) | 0.076195 / 0.004250 (0.071944) | 0.051901 / 0.037052 (0.014849) | 0.375027 / 0.258489 (0.116538) | 0.428189 / 0.293841 (0.134348) | 0.028814 / 0.128546 (-0.099733) | 0.009209 / 0.075646 (-0.066438) | 0.083681 / 0.419271 (-0.335591) | 0.049158 / 0.043533 (0.005625) | 0.366669 / 0.255139 (0.111530) | 0.388767 / 0.283200 (0.105568) | 0.107837 / 0.141683 (-0.033845) | 1.476354 / 1.452155 (0.024199) | 1.580160 / 1.492716 (0.087443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218900 / 0.018006 (0.200894) | 0.445475 / 0.000490 (0.444985) | 0.000423 / 0.000200 (0.000223) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029740 / 0.037411 (-0.007671) | 0.115192 / 0.014526 (0.100666) | 0.122439 / 0.176557 (-0.054118) | 0.170639 / 0.737135 (-0.566496) | 0.128085 / 0.296338 (-0.168254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437745 / 0.215209 (0.222536) | 4.385695 / 2.077655 (2.308040) | 2.189893 / 1.504120 (0.685773) | 2.023160 / 1.541195 (0.481965) | 2.112798 / 1.468490 (0.644308) | 0.522497 / 4.584777 (-4.062280) | 3.881356 / 3.745712 (0.135644) | 3.206090 / 5.269862 (-2.063772) | 1.308241 / 4.565676 (-3.257435) | 0.065635 / 0.424275 (-0.358640) | 0.012288 / 0.007607 (0.004681) | 0.537265 / 0.226044 (0.311220) | 5.361641 / 2.268929 (3.092712) | 2.638941 / 55.444624 (-52.805684) | 2.344717 / 6.876477 (-4.531759) | 2.437619 / 2.142072 (0.295546) | 0.645079 / 4.805227 (-4.160149) | 0.143852 / 6.500664 (-6.356812) | 0.065796 / 0.075469 (-0.009673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276588 / 1.841788 (-0.565200) | 15.239396 / 8.074308 (7.165088) | 13.150591 / 10.191392 (2.959199) | 0.163635 / 0.680424 (-0.516789) | 0.017533 / 0.534201 (-0.516668) | 0.397659 / 0.579283 (-0.181624) | 0.425589 / 0.434364 (-0.008774) | 0.466570 / 0.540337 (-0.073768) | 0.563953 / 1.386936 (-0.822983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#807d5c5ed4f8db7761b92bed498b2193acce8fb7 \"CML watermark\")\n"
] | 2023-04-30T13:23:01 | 2023-05-22T08:12:39 | 2023-05-22T08:05:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5810",
"html_url": "https://github.com/huggingface/datasets/pull/5810",
"diff_url": "https://github.com/huggingface/datasets/pull/5810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5810.patch",
"merged_at": "2023-05-22T08:05:31"
} | # Overview
I've added an argument `fn_kwargs` to the `map` and `filter` methods of the `IterableDataset` and `IterableDatasetDict` classes.
# Details
Currently, the `map` and `filter` methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` so that arguments can be passed to the mapping function, which lets users preprocess data more flexibly.
Added `fn_kwargs` to the following classes and methods (a description of the argument has also been added):
1. class `FilteredExamplesIterable`
2. method `filter` of class `IterableDataset`
3. method `map` of class `IterableDatasetDict`
4. method `filter` of class `IterableDatasetDict`
# Example of changes
Here's an example of how to use the new functionality:
```python
from datasets import IterableDatasetDict
def preprocess_function(example, a=None, b=None):
    # do something
    return example
dataset = IterableDatasetDict(...)
dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2})
```
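The same `fn_kwargs` argument also works for `filter`; a minimal sketch (the `text` column name is only an illustrative assumption):
```python
def keep_long_examples(example, min_length=0):
    # keep only examples whose "text" field is long enough
    return len(example["text"]) >= min_length

dataset = dataset.filter(keep_long_examples, fn_kwargs={"min_length": 10})
```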
# Related Issues
This pull request is related to the following issue:
https://github.com/huggingface/datasets/issues/3444 .
# Testing
I have added unit tests to test the new functionality.
In test_iterable_dataset.py
- Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details).
- Added `test_iterable_dataset_filter` for [2](#details).
- Added `test_iterable_dataset_map_with_fn_kwargs`. The `fn_kwargs` support in `map` is not a newly added feature, but the test was added because it was not covered before.
In test_dataset_dict.py
- Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details).
- Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details).
- Added `test_iterable_map` for [3](#details).
- Added `test_iterable_filter` for [4](#details).
Note that there are no tests for `IterableDatasetDict` on the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but I decided to add them to the test file for `DatasetDict` (test_dataset_dict.py).
# Checklist
- [x] Format the code.
- [x] Added tests.
- [x] Passed tests locally. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5810/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5809/comments | https://api.github.com/repos/huggingface/datasets/issues/5809/events | https://github.com/huggingface/datasets/issues/5809 | 1,689,797,293 | I_kwDODunzps5kuEKt | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | {
"login": "yulgok22",
"id": 64122846,
"node_id": "MDQ6VXNlcjY0MTIyODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/64122846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yulgok22",
"html_url": "https://github.com/yulgok22",
"followers_url": "https://api.github.com/users/yulgok22/followers",
"following_url": "https://api.github.com/users/yulgok22/following{/other_user}",
"gists_url": "https://api.github.com/users/yulgok22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yulgok22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yulgok22/subscriptions",
"organizations_url": "https://api.github.com/users/yulgok22/orgs",
"repos_url": "https://api.github.com/users/yulgok22/repos",
"events_url": "https://api.github.com/users/yulgok22/events{/privacy}",
"received_events_url": "https://api.github.com/users/yulgok22/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! I don't remember exactly how it was done, but maybe you have to embed `f\"{title}<sep>{text}\"` ?\r\n\r\nUsing a HF tokenizer it corresponds to doing\r\n```python\r\ntokenized = tokenizer(titles, texts)\r\n```"
] | 2023-04-30T06:12:04 | 2023-07-21T14:11:00 | 2023-07-21T14:11:00 | NONE | null | null | null | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to compute the embedding values the same way as wiki_dpr. As an experiment, I embedded the text of id="7" of wiki_dpr, but the result was very different from the embedding stored in wiki_dpr. A sketch of one possible approach is shown below.
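A sketch of one way such embeddings can be computed with the DPR context encoder (the checkpoint name and the title/text pairing follow the dataset card and the maintainer's comment above, but are assumptions here; the passage values are placeholders):
```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

MODEL = "facebook/dpr-ctx_encoder-single-nq-base"  # assumed checkpoint
tokenizer = DPRContextEncoderTokenizer.from_pretrained(MODEL)
encoder = DPRContextEncoder.from_pretrained(MODEL).eval()

title, text = "Some title", "Some passage text"  # placeholders for a wiki_dpr row
# passing title and text as a pair makes the tokenizer insert the separator token
inputs = tokenizer(title, text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    embedding = encoder(**inputs).pooler_output  # shape: (1, 768)
```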
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5809/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5807/comments | https://api.github.com/repos/huggingface/datasets/issues/5807/events | https://github.com/huggingface/datasets/pull/5807 | 1,688,977,237 | PR_kwDODunzps5PaKRE | 5,807 | Support parallelized downloading in load_dataset with Spark | {
"login": "es94129",
"id": 12763339,
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/es94129",
"html_url": "https://github.com/es94129",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"repos_url": "https://api.github.com/users/es94129/repos",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq or other maintainers, this is ready for review, could you please take a look?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5807). All of your documentation changes will be reflected on that endpoint.",
"Per the discussion in #5798, will implement with `joblibspark` instead."
] | 2023-04-28T18:34:32 | 2023-05-25T16:54:14 | 2023-05-25T16:54:14 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5807",
"html_url": "https://github.com/huggingface/datasets/pull/5807",
"diff_url": "https://github.com/huggingface/datasets/pull/5807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5807.patch",
"merged_at": null
As proposed in https://github.com/huggingface/datasets/issues/5798, this adds support for parallelized downloading in `load_dataset` with Spark, which can speed up the process by distributing the workload across worker nodes.
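Per the review comments, this was eventually implemented with `joblibspark` instead; a minimal sketch of that pattern (`download_one` and `urls` are hypothetical placeholders):
```python
from joblib import Parallel, delayed, parallel_backend
from joblibspark import register_spark

register_spark()  # registers the "spark" backend with joblib

def download_one(url):
    ...  # hypothetical download of a single shard

urls = ["..."]  # hypothetical list of shard URLs
# distribute the downloads across Spark workers instead of local processes
with parallel_backend("spark", n_jobs=4):
    results = Parallel()(delayed(download_one)(url) for url in urls)
```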
Parallelizing dataset processing is not supported in this PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5807/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5806/comments | https://api.github.com/repos/huggingface/datasets/issues/5806/events | https://github.com/huggingface/datasets/issues/5806 | 1,688,598,095 | I_kwDODunzps5kpfZP | 5,806 | Return the name of the currently loaded file in the load_dataset function. | {
"login": "s-JoL",
"id": 16948304,
"node_id": "MDQ6VXNlcjE2OTQ4MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s-JoL",
"html_url": "https://github.com/s-JoL",
"followers_url": "https://api.github.com/users/s-JoL/followers",
"following_url": "https://api.github.com/users/s-JoL/following{/other_user}",
"gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions",
"organizations_url": "https://api.github.com/users/s-JoL/orgs",
"repos_url": "https://api.github.com/users/s-JoL/repos",
"events_url": "https://api.github.com/users/s-JoL/events{/privacy}",
"received_events_url": "https://api.github.com/users/s-JoL/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | {
"login": "tsabbir96",
"id": 49894149,
"node_id": "MDQ6VXNlcjQ5ODk0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsabbir96",
"html_url": "https://github.com/tsabbir96",
"followers_url": "https://api.github.com/users/tsabbir96/followers",
"following_url": "https://api.github.com/users/tsabbir96/following{/other_user}",
"gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions",
"organizations_url": "https://api.github.com/users/tsabbir96/orgs",
"repos_url": "https://api.github.com/users/tsabbir96/repos",
"events_url": "https://api.github.com/users/tsabbir96/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsabbir96/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tsabbir96",
"id": 49894149,
"node_id": "MDQ6VXNlcjQ5ODk0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsabbir96",
"html_url": "https://github.com/tsabbir96",
"followers_url": "https://api.github.com/users/tsabbir96/followers",
"following_url": "https://api.github.com/users/tsabbir96/following{/other_user}",
"gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions",
"organizations_url": "https://api.github.com/users/tsabbir96/orgs",
"repos_url": "https://api.github.com/users/tsabbir96/repos",
"events_url": "https://api.github.com/users/tsabbir96/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsabbir96/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Implementing this makes sense (e.g., `tensorflow_datasets`' imagefolder returns image filenames). Also, in Datasets 3.0, we plan only to store the bytes of an image/audio, not its path, so this feature would be useful when the path info is still needed.",
"Hey @mariosasko, Can I work on this issue, this one seems interesting to implement. I have contributed to jupyterlab recently, and would love to contribute here as well. ",
"@tsabbir96 if you are planning to start working on this, you can take on this issue by writing a comment with only the keyword: #self-assign",
"#self-assign",
"@albertvillanova thank you for letting me contribute here. \r\n@albertvillanova @mariosasko As I am totally new to this repo, could you tell me something more about this issue or perhaps give me some idea on how I can proceed with it? Thanks!",
"Hello there, is this issue resolved? @tsabbir96 are you still working on it? Otherwise I would love to give it a try",
"@EduardoPach This issue is still relevant, so feel free to work on it."
] | 2023-04-28T13:50:15 | 2023-07-26T16:59:31 | null | NONE | null | null | null | ### Feature request
Add an optional parameter `return_file_name` to the `load_dataset` function. When it is set to `True`, the function will include the name of the file corresponding to the current line as a feature in the returned output.
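A sketch of the proposed usage (note that neither the `return_file_name` flag nor the output column name exists yet; both are part of this request, not the current API):
```python
from datasets import load_dataset

# hypothetical: with the proposed flag, each example would carry its source shard
ds = load_dataset(
    "json",
    data_files=["shard-000.jsonl", "shard-001.jsonl"],
    return_file_name=True,
)
print(ds["train"][0]["file_name"])  # e.g. "shard-000.jsonl" (assumed feature name)
```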
### Motivation
When training large language models, machine failures may interrupt the training process. In such cases, it is common to load a previously saved checkpoint and resume training. I would like to be able to obtain the names of the data shards that have already been trained on, so that I can skip those parts of the data when training resumes, avoiding overfitting and redundant training time.
### Your contribution
I currently use a dataset in JSON Lines format, so I am primarily interested in the JSON loader. I suggest adding the file name to the returned table here: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5806/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5805/comments | https://api.github.com/repos/huggingface/datasets/issues/5805/events | https://github.com/huggingface/datasets/issues/5805 | 1,688,558,577 | I_kwDODunzps5kpVvx | 5,805 | Improve `Create a dataset` tutorial | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"I can work on this. The link to the tutorial seems to be broken though @polinaeterna. ",
"@isunitha98selvan would be great, thank you! which link are you talking about? I think it should work: https://huggingface.co/docs/datasets/create_dataset"
] | 2023-04-28T13:26:22 | 2023-06-23T14:58:44 | null | CONTRIBUTOR | null | null | null | Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. In the **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can load data of the required format from a directory) for `csv`, `json`/`jsonl`, `parquet` and `txt` files. We have info about these loaders in a separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files), but it's worth briefly mentioning them in the introductory tutorial because they are more common, and for consistency (see the sketch after this list). It would be helpful to add a link to the full guide.
2. The **From local files** section lists methods for creating a dataset from in-memory data, which are also described in the [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data).
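For illustration, a minimal sketch of the packaged builders mentioned in point 1 (file names are placeholders; the `text` builder handles `txt` files):
```python
from datasets import load_dataset

# packaged builders load local files of a given format without a dataset script
csv_ds = load_dataset("csv", data_files="my_data.csv")
json_ds = load_dataset("json", data_files="my_data.jsonl")
parquet_ds = load_dataset("parquet", data_files="my_data.parquet")
text_ds = load_dataset("text", data_files="my_data.txt")
```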
Maybe we should actually rethink and restructure this tutorial somehow. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5805/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5805/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5804/comments | https://api.github.com/repos/huggingface/datasets/issues/5804/events | https://github.com/huggingface/datasets/pull/5804 | 1,688,285,666 | PR_kwDODunzps5PX0Dk | 5,804 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006448 / 0.011353 (-0.004905) | 0.004440 / 0.011008 (-0.006568) | 0.097837 / 0.038508 (0.059328) | 0.027754 / 0.023109 (0.004645) | 0.306462 / 0.275898 (0.030564) | 0.332454 / 0.323480 (0.008975) | 0.004984 / 0.007986 (-0.003001) | 0.004703 / 0.004328 (0.000375) | 0.075213 / 0.004250 (0.070962) | 0.036524 / 0.037052 (-0.000529) | 0.310149 / 0.258489 (0.051659) | 0.346392 / 0.293841 (0.052552) | 0.031012 / 0.128546 (-0.097534) | 0.011598 / 0.075646 (-0.064049) | 0.323066 / 0.419271 (-0.096206) | 0.042945 / 0.043533 (-0.000588) | 0.302286 / 0.255139 (0.047147) | 0.327813 / 0.283200 (0.044614) | 0.092540 / 0.141683 (-0.049143) | 1.532893 / 1.452155 (0.080739) | 1.556676 / 1.492716 (0.063960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195126 / 0.018006 (0.177120) | 0.399623 / 0.000490 (0.399133) | 0.003176 / 0.000200 (0.002976) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023612 / 0.037411 (-0.013799) | 0.097794 / 0.014526 (0.083268) | 0.104665 / 0.176557 (-0.071891) | 0.167145 / 0.737135 (-0.569990) | 0.108769 / 0.296338 (-0.187570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437818 / 0.215209 (0.222608) | 4.354896 / 2.077655 (2.277242) | 2.092832 / 1.504120 (0.588712) | 1.957630 / 1.541195 (0.416435) | 2.033135 / 1.468490 
(0.564645) | 0.702316 / 4.584777 (-3.882461) | 3.448035 / 3.745712 (-0.297678) | 1.906762 / 5.269862 (-3.363100) | 1.253274 / 4.565676 (-3.312402) | 0.082486 / 0.424275 (-0.341789) | 0.012442 / 0.007607 (0.004835) | 0.532096 / 0.226044 (0.306052) | 5.366580 / 2.268929 (3.097652) | 2.441904 / 55.444624 (-53.002720) | 2.112116 / 6.876477 (-4.764361) | 2.185471 / 2.142072 (0.043398) | 0.797905 / 4.805227 (-4.007322) | 0.149811 / 6.500664 (-6.350853) | 0.066507 / 0.075469 (-0.008962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206300 / 1.841788 (-0.635487) | 13.620851 / 8.074308 (5.546543) | 14.190666 / 10.191392 (3.999274) | 0.142343 / 0.680424 (-0.538081) | 0.016867 / 0.534201 (-0.517334) | 0.381557 / 0.579283 (-0.197726) | 0.373935 / 0.434364 (-0.060429) | 0.437856 / 0.540337 (-0.102481) | 0.525235 / 1.386936 (-0.861701) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004487 / 0.011008 (-0.006522) | 0.077582 / 0.038508 (0.039073) | 0.028008 / 0.023109 (0.004899) | 0.341602 / 0.275898 (0.065704) | 0.377105 / 0.323480 (0.053625) | 0.004999 / 0.007986 (-0.002986) | 0.004791 / 0.004328 (0.000462) | 0.076418 / 0.004250 (0.072167) | 0.038347 / 0.037052 (0.001295) | 0.343196 / 0.258489 (0.084707) | 0.382459 / 0.293841 (0.088618) | 0.030597 / 0.128546 (-0.097950) | 0.011579 / 0.075646 (-0.064067) | 0.085876 / 0.419271 (-0.333396) | 0.043241 / 0.043533 (-0.000292) | 0.343754 / 0.255139 (0.088615) | 0.380689 / 0.283200 (0.097489) | 0.096015 / 0.141683 (-0.045668) | 1.464419 / 1.452155 (0.012264) | 1.574010 / 1.492716 (0.081294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.156433 / 0.018006 (0.138427) | 0.403179 / 0.000490 (0.402690) | 0.002415 / 0.000200 (0.002215) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024946 / 0.037411 (-0.012465) | 0.100568 / 0.014526 (0.086042) | 0.106440 / 0.176557 (-0.070117) | 0.158457 / 0.737135 (-0.578678) | 0.110774 / 0.296338 (-0.185564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434734 / 0.215209 (0.219525) | 4.343874 / 2.077655 (2.266220) | 2.059759 / 1.504120 (0.555639) | 1.855124 / 1.541195 (0.313930) | 1.908567 / 1.468490 (0.440077) | 0.695283 / 4.584777 (-3.889494) | 3.347724 / 3.745712 (-0.397988) | 2.979498 / 5.269862 (-2.290364) | 1.532040 / 4.565676 (-3.033636) | 0.083021 / 0.424275 (-0.341254) | 0.012522 / 0.007607 (0.004915) | 0.540934 / 0.226044 (0.314890) | 5.385690 / 2.268929 (3.116762) | 2.507409 / 55.444624 (-52.937216) | 2.160537 / 6.876477 (-4.715939) | 2.269195 / 2.142072 (0.127123) | 0.804718 / 4.805227 (-4.000509) | 0.152432 / 6.500664 (-6.348232) | 0.068783 / 0.075469 (-0.006686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294698 / 1.841788 (-0.547090) | 14.152792 / 8.074308 (6.078484) | 14.233132 / 10.191392 (4.041740) | 0.143655 / 0.680424 (-0.536768) | 0.016844 / 0.534201 (-0.517357) | 0.380246 / 0.579283 (-0.199037) | 0.381730 / 0.434364 (-0.052633) | 0.456838 / 0.540337 (-0.083499) | 0.543677 / 1.386936 (-0.843259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b28d5610887f2e107765f5f1557679184db08214 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.005886 / 0.011008 (-0.005122) | 0.114522 / 0.038508 (0.076014) | 0.040966 / 0.023109 (0.017857) | 0.366655 / 0.275898 (0.090757) | 0.408765 / 0.323480 (0.085285) | 0.006822 / 0.007986 (-0.001164) | 0.004508 / 0.004328 (0.000180) | 0.084715 / 0.004250 (0.080465) | 0.054007 / 0.037052 (0.016954) | 0.380500 / 0.258489 (0.122011) | 0.410377 / 0.293841 (0.116536) | 0.041040 / 0.128546 (-0.087507) | 0.013940 / 0.075646 (-0.061707) | 0.398456 / 0.419271 (-0.020816) | 0.059315 / 0.043533 (0.015782) | 0.353640 / 0.255139 (0.098501) | 0.388682 / 0.283200 (0.105482) | 0.121744 / 0.141683 (-0.019939) | 1.729306 / 1.452155 (0.277151) | 1.824768 / 1.492716 (0.332052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228806 / 0.018006 (0.210800) | 0.492790 / 0.000490 (0.492300) | 0.010815 / 0.000200 (0.010615) | 0.000372 / 0.000054 (0.000318) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031750 / 0.037411 (-0.005662) | 0.127160 / 0.014526 (0.112635) | 0.136717 / 0.176557 (-0.039839) | 0.205590 / 0.737135 (-0.531545) | 0.142596 / 0.296338 (-0.153742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486419 / 0.215209 (0.271210) | 4.858572 / 2.077655 (2.780918) | 2.173867 / 1.504120 (0.669747) | 1.934619 / 1.541195 (0.393424) | 2.104185 / 1.468490 
(0.635695) | 0.837913 / 4.584777 (-3.746864) | 4.552192 / 3.745712 (0.806480) | 2.565040 / 5.269862 (-2.704822) | 1.808499 / 4.565676 (-2.757178) | 0.103283 / 0.424275 (-0.320993) | 0.015040 / 0.007607 (0.007433) | 0.602325 / 0.226044 (0.376281) | 6.038655 / 2.268929 (3.769727) | 2.759789 / 55.444624 (-52.684835) | 2.330990 / 6.876477 (-4.545487) | 2.404111 / 2.142072 (0.262038) | 1.011637 / 4.805227 (-3.793590) | 0.202142 / 6.500664 (-6.298522) | 0.079496 / 0.075469 (0.004026) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429543 / 1.841788 (-0.412245) | 18.052409 / 8.074308 (9.978101) | 16.989154 / 10.191392 (6.797762) | 0.208981 / 0.680424 (-0.471443) | 0.020490 / 0.534201 (-0.513711) | 0.502746 / 0.579283 (-0.076537) | 0.491769 / 0.434364 (0.057405) | 0.581970 / 0.540337 (0.041632) | 0.695816 / 1.386936 (-0.691120) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008449 / 0.011353 (-0.002904) | 0.006633 / 0.011008 (-0.004375) | 0.088638 / 0.038508 (0.050130) | 0.040013 / 0.023109 (0.016904) | 0.413108 / 0.275898 (0.137210) | 0.446310 / 0.323480 (0.122830) | 0.006515 / 0.007986 (-0.001471) | 0.006223 / 0.004328 (0.001894) | 0.089823 / 0.004250 (0.085573) | 0.052029 / 0.037052 (0.014977) | 0.407263 / 0.258489 (0.148774) | 0.449416 / 0.293841 (0.155576) | 0.041810 / 0.128546 (-0.086736) | 0.014604 / 0.075646 (-0.061042) | 0.103728 / 0.419271 (-0.315543) | 0.058212 / 0.043533 (0.014679) | 0.408936 / 0.255139 (0.153797) | 0.436727 / 0.283200 (0.153528) | 0.124344 / 0.141683 (-0.017339) | 1.752112 / 1.452155 (0.299957) | 1.859104 / 1.492716 (0.366387) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231172 / 0.018006 (0.213166) | 0.502974 / 0.000490 (0.502485) | 0.005586 / 0.000200 (0.005386) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034097 / 0.037411 (-0.003314) | 0.133780 / 0.014526 (0.119254) | 0.142321 / 0.176557 (-0.034236) | 0.199807 / 0.737135 (-0.537329) | 0.150073 / 0.296338 (-0.146266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515658 / 0.215209 (0.300449) | 5.129783 / 2.077655 (3.052129) | 2.534767 / 1.504120 (1.030648) | 2.352468 / 1.541195 (0.811274) | 2.430708 / 1.468490 (0.962218) | 0.850087 / 4.584777 (-3.734690) | 4.529622 / 3.745712 (0.783910) | 2.451986 / 5.269862 (-2.817876) | 1.569568 / 4.565676 (-2.996109) | 0.102907 / 0.424275 (-0.321368) | 0.014420 / 0.007607 (0.006813) | 0.635124 / 0.226044 (0.409080) | 6.260496 / 2.268929 (3.991568) | 3.094984 / 55.444624 (-52.349640) | 2.780629 / 6.876477 (-4.095847) | 2.947620 / 2.142072 (0.805548) | 1.002397 / 4.805227 (-3.802830) | 0.200502 / 6.500664 (-6.300162) | 0.076577 / 0.075469 (0.001107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505958 / 1.841788 (-0.335829) | 18.364986 / 8.074308 (10.290678) | 16.707214 / 10.191392 (6.515822) | 0.210976 / 0.680424 (-0.469447) | 0.022077 / 0.534201 (-0.512124) | 0.516174 / 0.579283 (-0.063109) | 0.502469 / 0.434364 (0.068105) | 0.626790 / 0.540337 (0.086453) | 0.747230 / 1.386936 (-0.639706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc5fef5b6d91f009e4101684adcb374df2c170f6 \"CML watermark\")\n"
] | 2023-04-28T10:10:01 | 2023-04-28T10:18:51 | 2023-04-28T10:10:29 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5804",
"html_url": "https://github.com/huggingface/datasets/pull/5804",
"diff_url": "https://github.com/huggingface/datasets/pull/5804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5804.patch",
"merged_at": "2023-04-28T10:10:29"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5804/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5803/comments | https://api.github.com/repos/huggingface/datasets/issues/5803/events | https://github.com/huggingface/datasets/pull/5803 | 1,688,256,290 | PR_kwDODunzps5PXtte | 5,803 | Release: 2.12.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5803). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008303 / 0.011353 (-0.003050) | 0.005681 / 0.011008 (-0.005327) | 0.111830 / 0.038508 (0.073322) | 0.039222 / 0.023109 (0.016112) | 0.336773 / 0.275898 (0.060875) | 0.376673 / 0.323480 (0.053193) | 0.006756 / 0.007986 (-0.001230) | 0.006078 / 0.004328 (0.001749) | 0.083552 / 0.004250 (0.079301) | 0.054430 / 0.037052 (0.017377) | 0.337310 / 0.258489 (0.078821) | 0.386138 / 0.293841 (0.092297) | 0.040068 / 0.128546 (-0.088478) | 0.013895 / 0.075646 (-0.061751) | 0.384174 / 0.419271 (-0.035097) | 0.058244 / 0.043533 (0.014711) | 0.342410 / 0.255139 (0.087271) | 0.362417 / 0.283200 (0.079217) | 0.123470 / 0.141683 (-0.018213) | 1.662938 / 1.452155 (0.210784) | 1.786488 / 1.492716 (0.293771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232629 / 0.018006 (0.214622) | 0.478252 / 0.000490 (0.477762) | 0.008519 / 0.000200 (0.008319) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031222 / 0.037411 (-0.006190) | 0.125875 / 0.014526 (0.111350) | 0.138995 / 0.176557 (-0.037562) | 0.213073 / 0.737135 (-0.524062) | 0.141848 / 0.296338 (-0.154490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463648 / 0.215209 (0.248439) | 4.582969 / 2.077655 (2.505314) | 2.104622 / 1.504120 (0.600502) | 1.887697 / 1.541195 (0.346502) | 1.946096 / 1.468490 
(0.477606) | 0.809008 / 4.584777 (-3.775769) | 4.527871 / 3.745712 (0.782159) | 4.862721 / 5.269862 (-0.407141) | 2.423257 / 4.565676 (-2.142419) | 0.101080 / 0.424275 (-0.323196) | 0.014767 / 0.007607 (0.007160) | 0.574471 / 0.226044 (0.348427) | 5.746445 / 2.268929 (3.477516) | 2.682584 / 55.444624 (-52.762040) | 2.320113 / 6.876477 (-4.556364) | 2.474530 / 2.142072 (0.332458) | 0.992979 / 4.805227 (-3.812249) | 0.200812 / 6.500664 (-6.299852) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.395533 / 1.841788 (-0.446254) | 17.418803 / 8.074308 (9.344495) | 16.584875 / 10.191392 (6.393483) | 0.167739 / 0.680424 (-0.512685) | 0.020923 / 0.534201 (-0.513278) | 0.500788 / 0.579283 (-0.078496) | 0.510270 / 0.434364 (0.075906) | 0.589608 / 0.540337 (0.049270) | 0.694233 / 1.386936 (-0.692703) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008440 / 0.011353 (-0.002913) | 0.005871 / 0.011008 (-0.005137) | 0.085805 / 0.038508 (0.047297) | 0.039324 / 0.023109 (0.016215) | 0.400587 / 0.275898 (0.124689) | 0.431729 / 0.323480 (0.108249) | 0.006557 / 0.007986 (-0.001429) | 0.005778 / 0.004328 (0.001450) | 0.084394 / 0.004250 (0.080144) | 0.055274 / 0.037052 (0.018222) | 0.410568 / 0.258489 (0.152079) | 0.439952 / 0.293841 (0.146111) | 0.040335 / 0.128546 (-0.088211) | 0.013968 / 0.075646 (-0.061679) | 0.098765 / 0.419271 (-0.320507) | 0.055897 / 0.043533 (0.012364) | 0.387584 / 0.255139 (0.132445) | 0.412568 / 0.283200 (0.129368) | 0.120393 / 0.141683 (-0.021290) | 1.730996 / 1.452155 (0.278841) | 1.821538 / 1.492716 (0.328822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245688 / 0.018006 (0.227682) | 0.484888 / 0.000490 (0.484398) | 0.000485 / 0.000200 (0.000285) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130819 / 0.014526 (0.116293) | 0.138491 / 0.176557 (-0.038065) | 0.196902 / 0.737135 (-0.540233) | 0.145404 / 0.296338 (-0.150935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487643 / 0.215209 (0.272434) | 4.818956 / 2.077655 (2.741301) | 2.332316 / 1.504120 (0.828196) | 2.102018 / 1.541195 (0.560823) | 2.156743 / 1.468490 (0.688253) | 0.803365 / 4.584777 (-3.781412) | 4.308561 / 3.745712 (0.562849) | 2.373331 / 5.269862 (-2.896530) | 1.539474 / 4.565676 (-3.026202) | 0.099081 / 0.424275 (-0.325194) | 0.014627 / 0.007607 (0.007020) | 0.609883 / 0.226044 (0.383838) | 6.092402 / 2.268929 (3.823474) | 2.858137 / 55.444624 (-52.586488) | 2.463256 / 6.876477 (-4.413220) | 2.637048 / 2.142072 (0.494976) | 0.959552 / 4.805227 (-3.845676) | 0.194170 / 6.500664 (-6.306495) | 0.075231 / 0.075469 (-0.000238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516502 / 1.841788 (-0.325285) | 18.077893 / 8.074308 (10.003585) | 16.507961 / 10.191392 (6.316569) | 0.171643 / 0.680424 (-0.508780) | 0.020378 / 0.534201 (-0.513823) | 0.491508 / 0.579283 (-0.087775) | 0.492136 / 0.434364 (0.057772) | 0.602258 / 0.540337 (0.061920) | 0.719882 / 1.386936 (-0.667054) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#330ac3e95fd3f2d61bac31b5b9c24399a5b54723 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006572 / 0.011353 (-0.004781) | 0.004647 / 0.011008 (-0.006362) | 0.098277 / 0.038508 (0.059769) | 0.027937 / 0.023109 (0.004828) | 0.339833 / 0.275898 (0.063935) | 0.398305 / 0.323480 (0.074825) | 0.005093 / 0.007986 (-0.002893) | 0.003374 / 0.004328 (-0.000954) | 0.075287 / 0.004250 (0.071037) | 0.037355 / 0.037052 (0.000303) | 0.339779 / 0.258489 (0.081290) | 0.403756 / 0.293841 (0.109915) | 0.030705 / 0.128546 (-0.097841) | 0.011596 / 0.075646 (-0.064050) | 0.323809 / 0.419271 (-0.095463) | 0.043357 / 0.043533 (-0.000176) | 0.342817 / 0.255139 (0.087678) | 0.386330 / 0.283200 (0.103130) | 0.088229 / 0.141683 (-0.053454) | 1.466017 / 1.452155 (0.013862) | 1.566551 / 1.492716 (0.073835) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196276 / 0.018006 (0.178269) | 0.420321 / 0.000490 (0.419831) | 0.002234 / 0.000200 (0.002034) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023999 / 0.037411 (-0.013412) | 0.095117 / 0.014526 (0.080592) | 0.102544 / 0.176557 (-0.074013) | 0.164796 / 0.737135 (-0.572340) | 0.107030 / 0.296338 (-0.189309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429299 / 0.215209 (0.214089) | 4.272503 / 2.077655 (2.194849) | 2.101890 / 1.504120 (0.597771) | 1.978907 / 1.541195 (0.437713) | 2.008993 / 1.468490 
(0.540503) | 0.695171 / 4.584777 (-3.889606) | 3.427050 / 3.745712 (-0.318662) | 1.892945 / 5.269862 (-3.376917) | 1.247156 / 4.565676 (-3.318521) | 0.082576 / 0.424275 (-0.341699) | 0.012526 / 0.007607 (0.004918) | 0.526338 / 0.226044 (0.300293) | 5.313855 / 2.268929 (3.044927) | 2.421134 / 55.444624 (-53.023490) | 2.072026 / 6.876477 (-4.804451) | 2.159846 / 2.142072 (0.017773) | 0.800753 / 4.805227 (-4.004474) | 0.150507 / 6.500664 (-6.350157) | 0.066378 / 0.075469 (-0.009091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218709 / 1.841788 (-0.623079) | 13.649239 / 8.074308 (5.574931) | 13.952762 / 10.191392 (3.761370) | 0.141967 / 0.680424 (-0.538457) | 0.016443 / 0.534201 (-0.517758) | 0.380408 / 0.579283 (-0.198875) | 0.377693 / 0.434364 (-0.056671) | 0.439819 / 0.540337 (-0.100518) | 0.529667 / 1.386936 (-0.857269) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004630) | 0.004495 / 0.011008 (-0.006513) | 0.075459 / 0.038508 (0.036951) | 0.028135 / 0.023109 (0.005026) | 0.349904 / 0.275898 (0.074006) | 0.390620 / 0.323480 (0.067140) | 0.005175 / 0.007986 (-0.002810) | 0.004720 / 0.004328 (0.000392) | 0.074243 / 0.004250 (0.069993) | 0.039084 / 0.037052 (0.002032) | 0.352486 / 0.258489 (0.093997) | 0.397549 / 0.293841 (0.103708) | 0.030596 / 0.128546 (-0.097950) | 0.011627 / 0.075646 (-0.064020) | 0.083394 / 0.419271 (-0.335878) | 0.042155 / 0.043533 (-0.001378) | 0.345668 / 0.255139 (0.090529) | 0.383474 / 0.283200 (0.100275) | 0.096530 / 0.141683 (-0.045153) | 1.493360 / 1.452155 (0.041206) | 1.572259 / 1.492716 (0.079543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162605 / 0.018006 (0.144599) | 0.409513 / 0.000490 (0.409023) | 0.002029 / 0.000200 (0.001829) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025824 / 0.037411 (-0.011588) | 0.102439 / 0.014526 (0.087913) | 0.109515 / 0.176557 (-0.067041) | 0.160650 / 0.737135 (-0.576486) | 0.112971 / 0.296338 (-0.183367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433293 / 0.215209 (0.218084) | 4.340286 / 2.077655 (2.262631) | 2.055857 / 1.504120 (0.551737) | 1.854451 / 1.541195 (0.313256) | 1.912752 / 1.468490 (0.444261) | 0.700076 / 4.584777 (-3.884701) | 3.361542 / 3.745712 (-0.384170) | 2.760204 / 5.269862 (-2.509658) | 1.477395 / 4.565676 (-3.088282) | 0.082868 / 0.424275 (-0.341407) | 0.012479 / 0.007607 (0.004872) | 0.532749 / 0.226044 (0.306704) | 5.323701 / 2.268929 (3.054772) | 2.509524 / 55.444624 (-52.935100) | 2.168668 / 6.876477 (-4.707809) | 2.259112 / 2.142072 (0.117040) | 0.806686 / 4.805227 (-3.998542) | 0.154620 / 6.500664 (-6.346044) | 0.068348 / 0.075469 (-0.007121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316512 / 1.841788 (-0.525276) | 14.158143 / 8.074308 (6.083835) | 14.110643 / 10.191392 (3.919251) | 0.143760 / 0.680424 (-0.536664) | 0.016851 / 0.534201 (-0.517350) | 0.376594 / 0.579283 (-0.202689) | 0.386957 / 0.434364 (-0.047407) | 0.466185 / 0.540337 (-0.074152) | 0.550269 / 1.386936 (-0.836667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009457 / 0.011353 (-0.001896) | 0.006453 / 0.011008 (-0.004555) | 0.136392 / 0.038508 (0.097884) | 0.038378 / 0.023109 (0.015269) | 0.413171 / 0.275898 (0.137273) | 0.451605 / 0.323480 (0.128126) | 0.007123 / 0.007986 (-0.000863) | 0.006316 / 0.004328 (0.001987) | 0.103009 / 0.004250 (0.098758) | 0.049182 / 0.037052 (0.012130) | 0.398635 / 0.258489 (0.140146) | 0.463146 / 0.293841 (0.169305) | 0.056247 / 0.128546 (-0.072299) | 0.019589 / 0.075646 (-0.056058) | 0.475882 / 0.419271 (0.056610) | 0.094918 / 0.043533 (0.051385) | 0.416502 / 0.255139 (0.161363) | 0.447129 / 0.283200 (0.163929) | 0.133314 / 0.141683 (-0.008369) | 2.132888 / 1.452155 (0.680733) | 2.073383 / 1.492716 (0.580667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273037 / 0.018006 (0.255030) | 0.625675 / 0.000490 (0.625185) | 0.003449 / 0.000200 (0.003249) | 0.000185 / 0.000054 (0.000130) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031889 / 0.037411 (-0.005523) | 0.131673 / 0.014526 (0.117148) | 0.141575 / 0.176557 (-0.034982) | 0.214978 / 0.737135 (-0.522158) | 0.145586 / 0.296338 (-0.150752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711135 / 0.215209 (0.495926) | 7.162492 / 2.077655 (5.084837) | 2.906028 / 1.504120 (1.401908) | 2.488855 / 1.541195 (0.947660) | 2.574628 / 1.468490 
(1.106138) | 1.587824 / 4.584777 (-2.996953) | 6.332962 / 3.745712 (2.587250) | 5.419578 / 5.269862 (0.149717) | 2.935413 / 4.565676 (-1.630263) | 0.169159 / 0.424275 (-0.255116) | 0.015358 / 0.007607 (0.007751) | 0.862036 / 0.226044 (0.635992) | 8.559256 / 2.268929 (6.290328) | 3.530756 / 55.444624 (-51.913868) | 2.626288 / 6.876477 (-4.250188) | 2.770063 / 2.142072 (0.627990) | 1.500116 / 4.805227 (-3.305112) | 0.265109 / 6.500664 (-6.235555) | 0.084944 / 0.075469 (0.009475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631060 / 1.841788 (-0.210728) | 19.022827 / 8.074308 (10.948519) | 22.973632 / 10.191392 (12.782240) | 0.296265 / 0.680424 (-0.384158) | 0.032317 / 0.534201 (-0.501884) | 0.624171 / 0.579283 (0.044888) | 0.690643 / 0.434364 (0.256279) | 0.691206 / 0.540337 (0.150869) | 0.758855 / 1.386936 (-0.628081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009441 / 0.011353 (-0.001912) | 0.006270 / 0.011008 (-0.004739) | 0.110284 / 0.038508 (0.071776) | 0.035952 / 0.023109 (0.012842) | 0.521894 / 0.275898 (0.245996) | 0.582624 / 0.323480 (0.259144) | 0.011400 / 0.007986 (0.003414) | 0.004677 / 0.004328 (0.000348) | 0.115721 / 0.004250 (0.111470) | 0.048521 / 0.037052 (0.011469) | 0.497142 / 0.258489 (0.238653) | 0.573733 / 0.293841 (0.279892) | 0.055788 / 0.128546 (-0.072759) | 0.020949 / 0.075646 (-0.054697) | 0.132968 / 0.419271 (-0.286303) | 0.063045 / 0.043533 (0.019512) | 0.537769 / 0.255139 (0.282630) | 0.527560 / 0.283200 (0.244361) | 0.123756 / 0.141683 (-0.017927) | 1.994111 / 1.452155 (0.541956) | 2.104623 / 1.492716 (0.611907) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279057 / 0.018006 (0.261051) | 0.537342 / 0.000490 (0.536852) | 0.007782 / 0.000200 (0.007582) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032018 / 0.037411 (-0.005394) | 0.133456 / 0.014526 (0.118930) | 0.142039 / 0.176557 (-0.034517) | 0.213769 / 0.737135 (-0.523366) | 0.143811 / 0.296338 (-0.152527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.680142 / 0.215209 (0.464933) | 6.450439 / 2.077655 (4.372784) | 2.820724 / 1.504120 (1.316604) | 2.520407 / 1.541195 (0.979212) | 2.568972 / 1.468490 (1.100482) | 1.250584 / 4.584777 (-3.334193) | 6.108222 / 3.745712 (2.362509) | 3.065965 / 5.269862 (-2.203897) | 2.108675 / 4.565676 (-2.457002) | 0.167870 / 0.424275 (-0.256405) | 0.015127 / 0.007607 (0.007520) | 0.849645 / 0.226044 (0.623600) | 8.508727 / 2.268929 (6.239799) | 3.707897 / 55.444624 (-51.736727) | 3.009279 / 6.876477 (-3.867198) | 3.067179 / 2.142072 (0.925106) | 1.516370 / 4.805227 (-3.288858) | 0.264845 / 6.500664 (-6.235819) | 0.095137 / 0.075469 (0.019668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.826306 / 1.841788 (-0.015481) | 20.119641 / 8.074308 (12.045333) | 21.532158 / 10.191392 (11.340766) | 0.278631 / 0.680424 (-0.401793) | 0.029494 / 0.534201 (-0.504707) | 0.621887 / 0.579283 (0.042604) | 0.686864 / 0.434364 (0.252500) | 0.695412 / 0.540337 (0.155074) | 0.864829 / 1.386936 (-0.522108) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n"
] | 2023-04-28T09:52:11 | 2023-04-28T10:18:56 | 2023-04-28T09:54:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5803",
"html_url": "https://github.com/huggingface/datasets/pull/5803",
"diff_url": "https://github.com/huggingface/datasets/pull/5803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5803.patch",
"merged_at": "2023-04-28T09:54:43"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5803/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5802/comments | https://api.github.com/repos/huggingface/datasets/issues/5802/events | https://github.com/huggingface/datasets/pull/5802 | 1,686,509,799 | PR_kwDODunzps5PR199 | 5,802 | Validate non-empty data_files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 / 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 
(0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a200ec9126a0879f3d38d4e9e3787633a23af42e \"CML watermark\")\n"
] | 2023-04-27T09:51:36 | 2023-04-27T14:59:47 | 2023-04-27T14:51:40 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"merged_at": "2023-04-27T14:51:40"
} | This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default).
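As a rough sketch of the kind of check this adds (the helper name and error message below are hypothetical, not necessarily the exact code of this PR):
```python
from typing import Dict, List, Optional, Union

def check_non_empty_data_files(data_files: Optional[Union[str, List[str], Dict[str, str]]]) -> None:
    # `None` keeps the default behavior (resolve all supported data files);
    # an empty str/list/dict is almost certainly a mistake, so fail fast.
    if data_files is not None and not data_files:
        raise ValueError(f"Empty `data_files`: {data_files}. It should be either non-empty or None.")
```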
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5800/comments | https://api.github.com/repos/huggingface/datasets/issues/5800/events | https://github.com/huggingface/datasets/pull/5800 | 1,686,348,096 | PR_kwDODunzps5PRTRh | 5,800 | Change downloaded file permission based on umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-04-27T08:13:30 | 2023-04-27T09:33:05 | 2023-04-27T09:30:16 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5800",
"html_url": "https://github.com/huggingface/datasets/pull/5800",
"diff_url": "https://github.com/huggingface/datasets/pull/5800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5800.patch",
"merged_at": "2023-04-27T09:30:16"
} | This PR changes the permissions of files downloaded to the cache, so that the umask is taken into account.
Related to:
- #2157
Fix #5799.
CC: @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5800/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5799/comments | https://api.github.com/repos/huggingface/datasets/issues/5799/events | https://github.com/huggingface/datasets/issues/5799 | 1,686,334,572 | I_kwDODunzps5kg2xs | 5,799 | Files downloaded to cache do not respect umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-27T08:06:05 | 2023-04-27T09:30:17 | 2023-04-27T09:30:17 | MEMBER | null | null | null | As reported by @stas00, files downloaded to the cache do not respect umask:
```bash
$ ls -l /path/to/cache/datasets/downloads/
-rw------- 1 uername username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6
```
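For reference, a generic way to compute umask-respecting permissions in Python (a sketch of the general technique, not necessarily the exact fix that was merged):
```python
import os

def umask_respecting_mode(base_mode: int = 0o666) -> int:
    # os.umask() swaps in a new mask and returns the previous one,
    # so call it twice to read the current mask without changing it.
    current_umask = os.umask(0o022)
    os.umask(current_umask)
    return base_mode & ~current_umask

# e.g. with a default umask of 0o022 this yields 0o644 (-rw-r--r--):
# os.chmod(cached_file, umask_respecting_mode())
```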
Related to:
- #2065 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5799/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5798/comments | https://api.github.com/repos/huggingface/datasets/issues/5798/events | https://github.com/huggingface/datasets/issues/5798 | 1,685,904,526 | I_kwDODunzps5kfNyO | 5,798 | Support parallelized downloading and processing in load_dataset with Spark | {
"login": "es94129",
"id": 12763339,
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/es94129",
"html_url": "https://github.com/es94129",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"repos_url": "https://api.github.com/users/es94129/repos",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! We're using process pools for parallelism right now. I was wondering if there's a package that implements the same API as a process pool but runs with Spark under the hood ? That or something similar would be cool because users could use whatever distributed framework they want this way.\r\n\r\nFeel free to ping us when you'd like to open PRs for this kind of things, so that we can discuss this before you start working on it ^^",
"Hi, thanks for taking a look and providing your input! I don't know of such packages, and even it exists, I don't think with the process pool API it's possible to run Spark as backend properly; otherwise I understand a unified API would be preferable.\r\n\r\nThe process pool API requires splitting the workload to a fixed number parts for multiprocessing; meanwhile distributed framework such as Spark has sophisticated scheduler to distribute the workload to the processes on multiple machines in a cluster, so the way of splitting things for `multiprocessing.pool` would not suit / be as flexible as directly calling the `sparkContext.parallelize` API.\r\n\r\nI think this could be a good addition to scale the `datasets` implementation to distributed workers, and from my benchmark results so far it looks promising compared with multiprocessing.",
"I see ! I think we only need an equivalent of `pool.map`. We use it to run download and conversion of data files on disk. That would require less changes in the internal code - and therefore less tests to write ;)\r\n\r\nWe also use `pool.apply_async` in some places with a `Queue` to get progress updates of the running jobs. I'm mentioning this in case there's a way to get a python generator from a running spark job ? This is less important though",
"For Spark, `rdd.map` (where `rdd` can be created by `sparkContext.parallelize`) is the most similar as `pool.map`, but it requires creating a Spark RDD first that is used for distributing the `iterable` and the actual parallelization is managed by the Spark framework; `pool.map` takes the splits of `iterable` that are split into `num_proc` parts by the Python code. You can also check my PR #5807 in the `src/datasets/utils/py_utils.py` file to compare the differences of the APIs, it might make more sense than the the above description.\r\n\r\nGiven the different inputs and mechanisms of calling the `map` functions, this is why I think it's not that feasible to reuse most of the `multiprocessing` code.\r\n\r\nProgress bar updating might be challenging with Spark, I'll consider it as a followup work.",
"Indeed I think the current use of multiprocessing.Pool in `map_nested` can be rewritten to work like `sparkContext.parallelize` - without splitting the iterable.\r\n\r\nMaybe from the user's perspective it's ok to let multiprocessing.Pool or spark distribute the load on their own, as long as it takes a list and runs jobs in parallel in the end :)\r\n",
"From your feedback, seems to me there are two paths to consider now for supporting spark's `map` function in `map_nested` now:\r\n1. Keep the current `pool.map` implementation, and add an if statement for the spark's `map` code (which is what I did in my current PR) -- the code change is just a few lines in the `map_nested` function, and it has been tested by unit tests + manual testing on real Spark clusters; if you have other concerns I'd also be happy to address them.\r\n2. Rewrite the current `pool.map` implementation to remove splitting the iterable, and we will still need to add an if statement to use either\r\n```python\r\nwith Pool(...) as pool:\r\n mapped = pool.map(_single_map_nested, iterable)\r\n```\r\nor\r\n```python\r\nrdd = spark.sparkContext.parallelize(iterable)\r\nmapped = rdd.map(lambda obj: _single_map_nested((function, obj, types, None, True, None))).collect()\r\n```\r\nbecause there is no unified API that supports both `pool.map` and `rdd.map`. This can be more unified and flexible in the long run, but might require more work, and it will change the existing multiprocessing behavior, which is why I'm not leaning towards this option.\r\n\r\nAm I understanding correctly?",
"Yup correct ! I think it's a nice path because it would be possible for users to define whatever parallel processing backend they want. I think we still need to discuss how that would look like in the `datasets` API : how to specify it has to use the \"spark\" parallel backend ? And how to specify the spark session parameters (number of executors etc.) ? Maybe there is something more practical than `use_spark=True`\r\n\r\nI'll check with the team internally if they have some ideas, but feel free to share your thoughts here !",
"Sure, please let me know if you have more updates regarding the API and implementation from the team.\r\n\r\nFor parameters we don't need to worry about setting them for Spark, because Spark will figure out the environment / number of worker nodes by itself, so it's preferable to just provide some parameter such as `use_spark` to use the RDD `map` function.",
"Hi! I wanted to check in to see if there is any update from the team.\r\n\r\nA potential change of API I can think of is change the argument to `distributed_backend=...`, which accepts `str`, such as `load_dataset(..., distributed_backend=\"spark\")`.\r\n\r\nImplementation wise, we can add a class / function to abstract away the details of using multiprocessing vs. spark vs. other parallel processing frameworks in `map_nested` and `_prepare_split`.",
"I found this quite interesting: https://github.com/joblib/joblib-spark with this syntax:\r\n\r\n```python\r\nwith parallel_backend('spark', n_jobs=3):\r\n ...\r\n```\r\n\r\ncc @lu-wang-dl who might know better",
"Joblib spark is providing Spark backend for joblib. We can implement a general parallel backend like\r\n```\r\nwith parallel_backend(\"<parallel-backedn>\", n_jobs=..):\r\n```\r\n\r\nIt can support multiprocessing , spark, ray, and etc. https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend",
"Thank you @lhoestq for finding this repo. I validated that it can distribute downloading jobs with Spark to arbitrary cluster worker nodes evenly with `n_jobs=-1`.\r\n\r\nFor the API, I think it makes sense to define it as\r\n```python\r\nload_dataset(..., parallel_backend=<str>)\r\n```\r\nwhere `parallel_backend` can be `spark`, `multiprocessing`, and potentially other supported joblib backends including `ray` and `dask`.\r\n\r\nImplementation-wise, do you think it is better to just use `joblib` for `spark` backend in `map_nested`, or also migrate the `multiprocessing.Pool` code to use `joblib`?",
"Hello @lhoestq, I wanted to follow up on my previous comment with some prototyping code that demonstrates how `map_nested` would be like if we unify `multiprocessing` and `spark` with `joblib`. The snippet hasn't hashed out the details such as dealing with `tqdm` yet.\r\n\r\nIn terms of API, the way of using multiprocessing is still the same; for Spark, the user sets `parallel_backend='spark'` can reuse the `num_proc` argument to pass in the number of executors, or preferably, just set `num_proc=-1` and joblib is able to decide it (I've validated it by running it on a Spark cluster).\r\n\r\n```python\r\ndef map_nested(\r\n # ... same args\r\n parallel_backend: Optional[str] = None, # proposed new argument\r\n):\r\n\r\n # ... same code\r\n\r\n # allow user to specify num_proc=-1, so that joblib will optimize it\r\n if (num_proc <= 1 and num_proc != -1) or len(iterable) < parallel_min_length:\r\n # same code\r\n mapped = [\r\n _single_map_nested((function, obj, types, None, True, None))\r\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n ]\r\n else:\r\n if not parallel_backend:\r\n parallel_backend = 'loky' # 'loky' is joblib's own implementation of robust multiprocessing\r\n \r\n n_jobs = min(num_proc, len(iterable))\r\n\r\n if parallel_backend == 'spark':\r\n n_jobs = -1 # 'loky' is joblib's own implementation of robust multiprocessing\r\n from joblibspark import register_spark\r\n register_spark()\r\n\r\n # parallelized with the same API\r\n with joblib.parallel_backend(parallel_backend, n_jobs=n_jobs):\r\n mapped = joblib.Parallel()(\r\n joblib.delayed(\r\n _single_map_nested((function, obj, types, None, True, None))\r\n )(obj) for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n )\r\n \r\n # ... same code\r\n```\r\nWe can always `joblib` for Spark and other distributed backends such as Ray if people want to support them later. It's worth noting that some distributed backends do not currently have `joblib` implementations.\r\n\r\nI would appreciate your thoughts on this proposed new API. We can also discuss the pros and cons of migrating the `multiprocessing` code to `joblib` later.",
"Nice ! It should be quite easy to make the change then :)\r\n\r\nI think adding spark support can actually be less than 20 lines of code and would roughly require one line of code to change in map_nested:\r\n\r\nMaybe we can define a new `datasets.parallel` submodule that has the `parallel_backend()` context manager and a `parallel_map()` function that uses `Pool.map` by default and `joblib` otherwise.\r\n\r\n`joblib` would be an optional dependency, and `joblib-spark` as well.\r\n\r\nThen whenever someone wants to use Spark, they can do something like this (similar to scikit-learn parallel_backend):\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\"):\r\n ds = load_dataset(...)\r\n```\r\n\r\nWhat do you think ?",
"Although until we've switched to all the steps in `load_dataset` to use `datasets.parallel`, I would require the user to explicitly say which step should use Spark. Maybe something like this, but I'm not sure yet:\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\"]):\r\n ds = load_dataset(...)\r\n```\r\nfor now some steps can be NotImplemented:\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\", \"prepare\"]):\r\n# NotImplementedError: the \"prepare\" step that converts the raw data files to Arrow is not compatible with the \"spark\" backend yet\r\n```\r\n\r\nThis way we can progressively roll out Spark support for the other data loading/processing steps without breaking changes between `datasets` versions",
"Sounds good! I like the partial rollout idea.\r\nSo for example `map_nested` would call `parallel_map` under the hood if `num_proc != 1` or `parallel_backend` is specified right?\r\nI would be happy to start a PR next week to explore this path.",
"Awesome ! I think map_nested can call `parallel_map()` if num_proc > 1, and `parallel_map` can be responsible to use Pool.map by default or joblib."
] | 2023-04-27T00:16:11 | 2023-05-25T14:11:41 | null | CONTRIBUTOR | null | null | null | ### Feature request
When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the download and processing jobs to worker nodes when `cache_dir` is a cloud file system shared among the nodes.
```python
load_dataset(..., use_spark=True)
```
### Motivation
Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes.
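As a rough illustration of the idea, downloads could be fanned out with `sparkContext.parallelize` (a sketch under the assumption that the cache directory is on storage shared by all workers; the URL list and paths are hypothetical):
```python
from urllib.request import urlretrieve
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical list of the dataset's data file URLs.
urls = [f"https://example.com/shard-{i}.jsonl" for i in range(100)]

def download(url: str) -> str:
    # Every worker writes into the same shared cache_dir,
    # so the driver can read all shards back afterwards.
    path = "/mnt/shared_cache/" + url.rsplit("/", 1)[-1]
    urlretrieve(url, path)
    return path

paths = spark.sparkContext.parallelize(urls, numSlices=len(urls)).map(download).collect()
```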
### Your contribution
I can submit a PR to support this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5798/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5797/comments | https://api.github.com/repos/huggingface/datasets/issues/5797/events | https://github.com/huggingface/datasets/issues/5797 | 1,685,501,199 | I_kwDODunzps5kdrUP | 5,797 | load_dataset is case sensitive? | {
"login": "haonan-li",
"id": 34729065,
"node_id": "MDQ6VXNlcjM0NzI5MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/34729065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haonan-li",
"html_url": "https://github.com/haonan-li",
"followers_url": "https://api.github.com/users/haonan-li/followers",
"following_url": "https://api.github.com/users/haonan-li/following{/other_user}",
"gists_url": "https://api.github.com/users/haonan-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haonan-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haonan-li/subscriptions",
"organizations_url": "https://api.github.com/users/haonan-li/orgs",
"repos_url": "https://api.github.com/users/haonan-li/repos",
"events_url": "https://api.github.com/users/haonan-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/haonan-li/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.",
"I think `load_dataset(\"mbzuai/bactrian-x\")` shouldn't be loaded at all and raise an error but because of [this fallback](https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L1194) to packaged loaders when no other options are applicable, it loads the dataset with standard `json` loader instead of the custom loading script."
] | 2023-04-26T18:19:04 | 2023-04-27T11:56:58 | null | NONE | null | null | null | ### Describe the bug
Is the `load_dataset()` function case-sensitive?
### Steps to reproduce the bug
The following two calls get totally different behavior (see the repro sketch after this list).
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
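A minimal repro sketch of the two calls (assuming Hub access; the contrasting download behavior is shown under Expected behavior below):
```python
from datasets import load_dataset

# Lowercase repo id: per the fallback to packaged loaders noted in the
# comments, no loading script is matched under this casing, so `datasets`
# uses the packaged `json` builder and downloads every subset.
ds_all = load_dataset('mbzuai/bactrian-x', 'en')

# Canonical casing: the repo's custom loading script is used and only the
# requested `en` subset is downloaded.
ds_en = load_dataset('MBZUAI/Bactrian-X', 'en')
```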
### Expected behavior
Compare 1 and 2.
Call 1 will download all 52 subsets; shell output:
```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx```
Call 2 will only download a single subset; shell output:
```Downloading and preparing dataset bactrian-x/en to xxx```
### Environment info
Python 3.10.11
datasets Version: 2.11.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5797/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5796/comments | https://api.github.com/repos/huggingface/datasets/issues/5796/events | https://github.com/huggingface/datasets/pull/5796 | 1,685,451,919 | PR_kwDODunzps5PORm- | 5,796 | Spark docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010480 / 0.011353 (-0.000872) | 0.006743 / 0.011008 (-0.004265) | 0.126503 / 0.038508 (0.087995) | 0.036918 / 0.023109 (0.013808) | 0.387372 / 0.275898 (0.111474) | 0.456930 / 0.323480 (0.133450) | 0.008038 / 0.007986 (0.000052) | 0.005082 / 0.004328 (0.000753) | 0.093312 / 0.004250 (0.089062) | 0.065440 / 0.037052 (0.028387) | 0.378172 / 0.258489 (0.119683) | 0.430049 / 0.293841 (0.136208) | 0.054372 / 0.128546 (-0.074174) | 0.021875 / 0.075646 (-0.053772) | 0.441722 / 0.419271 (0.022450) | 0.063716 / 0.043533 (0.020183) | 0.375718 / 0.255139 (0.120579) | 0.413688 / 0.283200 (0.130488) | 0.122583 / 0.141683 (-0.019100) | 1.835992 / 1.452155 (0.383838) | 1.915862 / 1.492716 (0.423145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275305 / 0.018006 (0.257299) | 0.617170 / 0.000490 (0.616680) | 0.006467 / 0.000200 (0.006267) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031057 / 0.037411 (-0.006354) | 0.135178 / 0.014526 (0.120653) | 0.139265 / 0.176557 (-0.037292) | 0.221597 / 0.737135 (-0.515538) | 0.147632 / 0.296338 (-0.148706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.640621 / 0.215209 (0.425411) | 6.354359 / 2.077655 (4.276704) | 2.748945 / 1.504120 (1.244825) | 2.396637 / 1.541195 (0.855442) | 2.395193 / 1.468490 
(0.926703) | 1.209604 / 4.584777 (-3.375173) | 5.626901 / 3.745712 (1.881189) | 3.300941 / 5.269862 (-1.968920) | 2.123598 / 4.565676 (-2.442078) | 0.144270 / 0.424275 (-0.280005) | 0.015114 / 0.007607 (0.007507) | 0.812352 / 0.226044 (0.586307) | 8.024250 / 2.268929 (5.755322) | 3.557589 / 55.444624 (-51.887036) | 2.840632 / 6.876477 (-4.035845) | 3.152319 / 2.142072 (1.010246) | 1.447232 / 4.805227 (-3.357995) | 0.251740 / 6.500664 (-6.248924) | 0.083725 / 0.075469 (0.008256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568032 / 1.841788 (-0.273755) | 18.463860 / 8.074308 (10.389552) | 21.217395 / 10.191392 (11.026003) | 0.228457 / 0.680424 (-0.451967) | 0.031398 / 0.534201 (-0.502803) | 0.547627 / 0.579283 (-0.031656) | 0.642921 / 0.434364 (0.208557) | 0.687857 / 0.540337 (0.147520) | 0.800940 / 1.386936 (-0.585996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009933 / 0.011353 (-0.001420) | 0.006065 / 0.011008 (-0.004943) | 0.102556 / 0.038508 (0.064048) | 0.034646 / 0.023109 (0.011537) | 0.437951 / 0.275898 (0.162053) | 0.482439 / 0.323480 (0.158959) | 0.007715 / 0.007986 (-0.000271) | 0.007426 / 0.004328 (0.003098) | 0.096427 / 0.004250 (0.092177) | 0.052983 / 0.037052 (0.015930) | 0.464533 / 0.258489 (0.206044) | 0.484848 / 0.293841 (0.191007) | 0.050415 / 0.128546 (-0.078131) | 0.021001 / 0.075646 (-0.054645) | 0.121214 / 0.419271 (-0.298058) | 0.061658 / 0.043533 (0.018125) | 0.431898 / 0.255139 (0.176759) | 0.482106 / 0.283200 (0.198907) | 0.128524 / 0.141683 (-0.013159) | 1.775714 / 1.452155 (0.323559) | 1.904738 / 1.492716 (0.412021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287641 / 0.018006 (0.269635) | 0.600667 / 0.000490 (0.600178) | 0.005097 / 0.000200 (0.004897) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032836 / 0.037411 (-0.004575) | 0.133114 / 0.014526 (0.118588) | 0.150874 / 0.176557 (-0.025683) | 0.217069 / 0.737135 (-0.520066) | 0.160387 / 0.296338 (-0.135951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668444 / 0.215209 (0.453235) | 6.240015 / 2.077655 (4.162360) | 2.808661 / 1.504120 (1.304542) | 2.336550 / 1.541195 (0.795356) | 2.538973 / 1.468490 (1.070483) | 1.189292 / 4.584777 (-3.395485) | 5.781028 / 3.745712 (2.035315) | 3.149895 / 5.269862 (-2.119967) | 2.130646 / 4.565676 (-2.435030) | 0.144944 / 0.424275 (-0.279331) | 0.014650 / 0.007607 (0.007043) | 0.792313 / 0.226044 (0.566269) | 7.933108 / 2.268929 (5.664180) | 3.527527 / 55.444624 (-51.917098) | 2.864271 / 6.876477 (-4.012205) | 3.098330 / 2.142072 (0.956258) | 1.421208 / 4.805227 (-3.384019) | 0.255638 / 6.500664 (-6.245026) | 0.086971 / 0.075469 (0.011502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585317 / 1.841788 (-0.256471) | 18.643133 / 8.074308 (10.568825) | 21.921256 / 10.191392 (11.729864) | 0.215493 / 0.680424 (-0.464931) | 0.028348 / 0.534201 (-0.505853) | 0.556925 / 0.579283 (-0.022358) | 0.631480 / 0.434364 (0.197116) | 0.654026 / 0.540337 (0.113689) | 0.799727 / 1.386936 (-0.587209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#62520514b524b5904c7e4f0beddab1971212a96a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006516 / 0.011353 (-0.004837) | 0.004500 / 0.011008 (-0.006509) | 0.097639 / 0.038508 (0.059131) | 0.028336 / 0.023109 (0.005227) | 0.377263 / 0.275898 (0.101365) | 0.409209 / 0.323480 (0.085729) | 0.004832 / 0.007986 (-0.003154) | 0.004629 / 0.004328 (0.000301) | 0.075046 / 0.004250 (0.070795) | 0.034080 / 0.037052 (-0.002972) | 0.377565 / 0.258489 (0.119076) | 0.419204 / 0.293841 (0.125363) | 0.030343 / 0.128546 (-0.098203) | 0.011465 / 0.075646 (-0.064182) | 0.322777 / 0.419271 (-0.096494) | 0.043774 / 0.043533 (0.000241) | 0.375808 / 0.255139 (0.120669) | 0.402665 / 0.283200 (0.119465) | 0.086811 / 0.141683 (-0.054872) | 1.518686 / 1.452155 (0.066531) | 1.540381 / 1.492716 (0.047664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197730 / 0.018006 (0.179724) | 0.409285 / 0.000490 (0.408795) | 0.004739 / 0.000200 (0.004539) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022974 / 0.037411 (-0.014437) | 0.096843 / 0.014526 (0.082317) | 0.103241 / 0.176557 (-0.073316) | 0.163691 / 0.737135 (-0.573444) | 0.107905 / 0.296338 (-0.188433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449408 / 0.215209 (0.234199) | 4.501375 / 2.077655 (2.423720) | 2.181491 / 1.504120 (0.677371) | 1.986153 / 1.541195 (0.444958) | 2.024735 / 1.468490 
(0.556245) | 0.695368 / 4.584777 (-3.889409) | 3.416912 / 3.745712 (-0.328800) | 1.893343 / 5.269862 (-3.376519) | 1.275535 / 4.565676 (-3.290142) | 0.082772 / 0.424275 (-0.341503) | 0.012365 / 0.007607 (0.004758) | 0.553859 / 0.226044 (0.327814) | 5.540014 / 2.268929 (3.271085) | 2.634298 / 55.444624 (-52.810326) | 2.286686 / 6.876477 (-4.589790) | 2.384402 / 2.142072 (0.242330) | 0.806413 / 4.805227 (-3.998814) | 0.151757 / 6.500664 (-6.348907) | 0.067155 / 0.075469 (-0.008314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198776 / 1.841788 (-0.643012) | 13.517434 / 8.074308 (5.443126) | 13.926300 / 10.191392 (3.734908) | 0.141887 / 0.680424 (-0.538537) | 0.016571 / 0.534201 (-0.517630) | 0.383179 / 0.579283 (-0.196104) | 0.395189 / 0.434364 (-0.039175) | 0.479635 / 0.540337 (-0.060702) | 0.570576 / 1.386936 (-0.816360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006691 / 0.011353 (-0.004662) | 0.004634 / 0.011008 (-0.006375) | 0.077087 / 0.038508 (0.038579) | 0.028281 / 0.023109 (0.005172) | 0.340108 / 0.275898 (0.064210) | 0.370611 / 0.323480 (0.047131) | 0.004997 / 0.007986 (-0.002988) | 0.003336 / 0.004328 (-0.000992) | 0.074814 / 0.004250 (0.070563) | 0.039001 / 0.037052 (0.001948) | 0.344225 / 0.258489 (0.085736) | 0.380621 / 0.293841 (0.086780) | 0.030858 / 0.128546 (-0.097689) | 0.011623 / 0.075646 (-0.064023) | 0.085016 / 0.419271 (-0.334256) | 0.042378 / 0.043533 (-0.001155) | 0.341428 / 0.255139 (0.086289) | 0.364823 / 0.283200 (0.081624) | 0.096695 / 0.141683 (-0.044988) | 1.527683 / 1.452155 (0.075528) | 1.585361 / 1.492716 (0.092645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184280 / 0.018006 (0.166274) | 0.397845 / 0.000490 (0.397355) | 0.004415 / 0.000200 (0.004215) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.101053 / 0.014526 (0.086527) | 0.108968 / 0.176557 (-0.067589) | 0.155732 / 0.737135 (-0.581403) | 0.112604 / 0.296338 (-0.183735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440819 / 0.215209 (0.225609) | 4.394017 / 2.077655 (2.316363) | 2.092456 / 1.504120 (0.588336) | 1.880186 / 1.541195 (0.338991) | 1.918035 / 1.468490 (0.449545) | 0.698059 / 4.584777 (-3.886718) | 3.422598 / 3.745712 (-0.323114) | 1.860465 / 5.269862 (-3.409396) | 1.157788 / 4.565676 (-3.407889) | 0.083566 / 0.424275 (-0.340709) | 0.012440 / 0.007607 (0.004832) | 0.549526 / 0.226044 (0.323481) | 5.500623 / 2.268929 (3.231694) | 2.546980 / 55.444624 (-52.897644) | 2.199527 / 6.876477 (-4.676949) | 2.297276 / 2.142072 (0.155203) | 0.801580 / 4.805227 (-4.003648) | 0.151842 / 6.500664 (-6.348822) | 0.067165 / 0.075469 (-0.008305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329097 / 1.841788 (-0.512691) | 13.830354 / 8.074308 (5.756046) | 14.155250 / 10.191392 (3.963858) | 0.144517 / 0.680424 (-0.535907) | 0.016738 / 0.534201 (-0.517463) | 0.379337 / 0.579283 (-0.199946) | 0.391382 / 0.434364 (-0.042982) | 0.459153 / 0.540337 (-0.081184) | 0.547287 / 1.386936 (-0.839649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2efb0289c887ec60d54e0715cd85c111cb45f9ee \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007176 / 0.011353 (-0.004177) | 0.005125 / 0.011008 (-0.005883) | 0.096060 / 0.038508 (0.057552) | 0.033262 / 0.023109 (0.010152) | 0.311461 / 0.275898 (0.035563) | 0.340673 / 0.323480 (0.017193) | 0.005700 / 0.007986 (-0.002286) | 0.005223 / 0.004328 (0.000894) | 0.072812 / 0.004250 (0.068561) | 0.042078 / 0.037052 (0.005025) | 0.320042 / 0.258489 (0.061553) | 0.346539 / 0.293841 (0.052698) | 0.035284 / 0.128546 (-0.093262) | 0.012021 / 0.075646 (-0.063625) | 0.331555 / 0.419271 (-0.087717) | 0.051058 / 0.043533 (0.007525) | 0.303001 / 0.255139 (0.047862) | 0.328431 / 0.283200 (0.045231) | 0.100954 / 0.141683 (-0.040729) | 1.407445 / 1.452155 (-0.044710) | 1.512826 / 1.492716 (0.020110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216442 / 0.018006 (0.198436) | 0.446298 / 0.000490 (0.445809) | 0.004701 / 0.000200 (0.004501) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028088 / 0.037411 (-0.009324) | 0.108669 / 0.014526 (0.094144) | 0.119597 / 0.176557 (-0.056960) | 0.178249 / 0.737135 (-0.558886) | 0.123914 / 0.296338 (-0.172424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413437 / 0.215209 (0.198228) | 4.136602 / 2.077655 (2.058947) | 1.875872 / 1.504120 (0.371752) | 1.680783 / 1.541195 (0.139588) | 1.757059 / 1.468490 
(0.288569) | 0.711080 / 4.584777 (-3.873697) | 3.791701 / 3.745712 (0.045989) | 2.111612 / 5.269862 (-3.158250) | 1.351204 / 4.565676 (-3.214473) | 0.086477 / 0.424275 (-0.337798) | 0.012359 / 0.007607 (0.004752) | 0.504984 / 0.226044 (0.278940) | 5.040456 / 2.268929 (2.771527) | 2.266946 / 55.444624 (-53.177679) | 1.957827 / 6.876477 (-4.918650) | 2.120490 / 2.142072 (-0.021583) | 0.856148 / 4.805227 (-3.949079) | 0.172414 / 6.500664 (-6.328250) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198163 / 1.841788 (-0.643625) | 14.944930 / 8.074308 (6.870622) | 14.317196 / 10.191392 (4.125804) | 0.166104 / 0.680424 (-0.514320) | 0.017443 / 0.534201 (-0.516758) | 0.423025 / 0.579283 (-0.156258) | 0.437476 / 0.434364 (0.003112) | 0.500156 / 0.540337 (-0.040181) | 0.606226 / 1.386936 (-0.780710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007417 / 0.011353 (-0.003936) | 0.005143 / 0.011008 (-0.005865) | 0.076401 / 0.038508 (0.037893) | 0.034818 / 0.023109 (0.011709) | 0.339633 / 0.275898 (0.063735) | 0.373839 / 0.323480 (0.050359) | 0.006004 / 0.007986 (-0.001982) | 0.005403 / 0.004328 (0.001075) | 0.074150 / 0.004250 (0.069899) | 0.050489 / 0.037052 (0.013436) | 0.343357 / 0.258489 (0.084868) | 0.377009 / 0.293841 (0.083168) | 0.035921 / 0.128546 (-0.092625) | 0.012197 / 0.075646 (-0.063449) | 0.087992 / 0.419271 (-0.331279) | 0.049452 / 0.043533 (0.005919) | 0.340495 / 0.255139 (0.085356) | 0.360277 / 0.283200 (0.077077) | 0.111114 / 0.141683 (-0.030569) | 1.463888 / 1.452155 (0.011734) | 1.548320 / 1.492716 (0.055604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228437 / 0.018006 (0.210431) | 0.445120 / 0.000490 (0.444631) | 0.000392 / 0.000200 (0.000192) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029965 / 0.037411 (-0.007446) | 0.113484 / 0.014526 (0.098958) | 0.125249 / 0.176557 (-0.051308) | 0.177201 / 0.737135 (-0.559934) | 0.128750 / 0.296338 (-0.167589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420089 / 0.215209 (0.204880) | 4.195772 / 2.077655 (2.118117) | 2.021539 / 1.504120 (0.517419) | 1.825118 / 1.541195 (0.283924) | 1.904090 / 1.468490 (0.435600) | 0.716276 / 4.584777 (-3.868501) | 3.742257 / 3.745712 (-0.003455) | 3.368880 / 5.269862 (-1.900981) | 1.728285 / 4.565676 (-2.837392) | 0.087656 / 0.424275 (-0.336619) | 0.012263 / 0.007607 (0.004656) | 0.524321 / 0.226044 (0.298277) | 5.217610 / 2.268929 (2.948682) | 2.474670 / 55.444624 (-52.969955) | 2.135452 / 6.876477 (-4.741025) | 2.292578 / 2.142072 (0.150505) | 0.852109 / 4.805227 (-3.953119) | 0.172031 / 6.500664 (-6.328633) | 0.065230 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260494 / 1.841788 (-0.581293) | 15.019167 / 8.074308 (6.944859) | 14.647586 / 10.191392 (4.456193) | 0.170578 / 0.680424 (-0.509846) | 0.017619 / 0.534201 (-0.516582) | 0.423116 / 0.579283 (-0.156167) | 0.426680 / 0.434364 (-0.007684) | 0.519563 / 0.540337 (-0.020775) | 0.619335 / 1.386936 (-0.767601) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e210dc20c19b5e6af05df9ca6e82984dfb42465f \"CML watermark\")\n"
] | 2023-04-26T17:39:43 | 2023-04-27T16:41:50 | 2023-04-27T16:34:45 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5796",
"html_url": "https://github.com/huggingface/datasets/pull/5796",
"diff_url": "https://github.com/huggingface/datasets/pull/5796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5796.patch",
"merged_at": "2023-04-27T16:34:45"
} | Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701
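For readers of the new page, a minimal usage sketch of the documented API (a sketch only — it assumes a running `SparkSession`, and the example data is made up):

```python
from datasets import Dataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

# Toy Spark DataFrame with a single "text" column
df = spark.createDataFrame([("hello",), ("world",)], ["text"])

# Convert the Spark DataFrame into a Hugging Face Dataset
ds = Dataset.from_spark(df)
```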
cc @maddiedawson | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5796/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5795/comments | https://api.github.com/repos/huggingface/datasets/issues/5795/events | https://github.com/huggingface/datasets/pull/5795 | 1,685,414,505 | PR_kwDODunzps5POJo8 | 5,795 | Fix spark imports | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010844 / 0.011353 (-0.000509) | 0.007329 / 0.011008 (-0.003680) | 0.133764 / 0.038508 (0.095256) | 0.040213 / 0.023109 (0.017103) | 0.413466 / 0.275898 (0.137568) | 0.452860 / 0.323480 (0.129380) | 0.008109 / 0.007986 (0.000123) | 0.005773 / 0.004328 (0.001444) | 0.109969 / 0.004250 (0.105718) | 0.053001 / 0.037052 (0.015949) | 0.416377 / 0.258489 (0.157888) | 0.477486 / 0.293841 (0.183645) | 0.056556 / 0.128546 (-0.071990) | 0.024322 / 0.075646 (-0.051324) | 0.437750 / 0.419271 (0.018479) | 0.087732 / 0.043533 (0.044199) | 0.421540 / 0.255139 (0.166401) | 0.429143 / 0.283200 (0.145944) | 0.144864 / 0.141683 (0.003181) | 1.882785 / 1.452155 (0.430631) | 1.980721 / 1.492716 (0.488005) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285497 / 0.018006 (0.267491) | 0.601820 / 0.000490 (0.601331) | 0.005003 / 0.000200 (0.004804) | 0.000122 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030673 / 0.037411 (-0.006739) | 0.126883 / 0.014526 (0.112357) | 0.137677 / 0.176557 (-0.038880) | 0.211504 / 0.737135 (-0.525632) | 0.144752 / 0.296338 (-0.151587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665845 / 0.215209 (0.450636) | 6.369040 / 2.077655 (4.291385) | 2.708979 / 1.504120 (1.204859) | 2.370842 / 1.541195 (0.829647) | 2.445987 / 1.468490 
(0.977497) | 1.260806 / 4.584777 (-3.323971) | 5.979216 / 3.745712 (2.233504) | 3.334350 / 5.269862 (-1.935512) | 2.187298 / 4.565676 (-2.378379) | 0.155494 / 0.424275 (-0.268781) | 0.017351 / 0.007607 (0.009744) | 0.853626 / 0.226044 (0.627581) | 8.375001 / 2.268929 (6.106072) | 3.528312 / 55.444624 (-51.916313) | 2.890509 / 6.876477 (-3.985968) | 3.051016 / 2.142072 (0.908944) | 1.529811 / 4.805227 (-3.275416) | 0.273883 / 6.500664 (-6.226781) | 0.086617 / 0.075469 (0.011148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648231 / 1.841788 (-0.193557) | 19.487109 / 8.074308 (11.412801) | 23.474621 / 10.191392 (13.283229) | 0.221392 / 0.680424 (-0.459032) | 0.028878 / 0.534201 (-0.505323) | 0.582302 / 0.579283 (0.003019) | 0.615059 / 0.434364 (0.180695) | 0.656082 / 0.540337 (0.115745) | 0.740544 / 1.386936 (-0.646392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010687 / 0.011353 (-0.000665) | 0.007114 / 0.011008 (-0.003894) | 0.135426 / 0.038508 (0.096918) | 0.041027 / 0.023109 (0.017918) | 0.466441 / 0.275898 (0.190543) | 0.503545 / 0.323480 (0.180065) | 0.009418 / 0.007986 (0.001432) | 0.004976 / 0.004328 (0.000647) | 0.101342 / 0.004250 (0.097092) | 0.058289 / 0.037052 (0.021237) | 0.473715 / 0.258489 (0.215226) | 0.539556 / 0.293841 (0.245715) | 0.063138 / 0.128546 (-0.065408) | 0.020429 / 0.075646 (-0.055217) | 0.124179 / 0.419271 (-0.295093) | 0.066400 / 0.043533 (0.022867) | 0.450793 / 0.255139 (0.195654) | 0.494163 / 0.283200 (0.210964) | 0.131179 / 0.141683 (-0.010504) | 1.876396 / 1.452155 (0.424241) | 1.974148 / 1.492716 (0.481432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313362 / 0.018006 (0.295356) | 0.602618 / 0.000490 (0.602129) | 0.008279 / 0.000200 (0.008079) | 0.000155 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037250 / 0.037411 (-0.000161) | 0.144151 / 0.014526 (0.129625) | 0.155733 / 0.176557 (-0.020824) | 0.214334 / 0.737135 (-0.522801) | 0.167124 / 0.296338 (-0.129214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686471 / 0.215209 (0.471262) | 6.749174 / 2.077655 (4.671520) | 3.024941 / 1.504120 (1.520821) | 2.553363 / 1.541195 (1.012168) | 2.679107 / 1.468490 (1.210617) | 1.317212 / 4.584777 (-3.267565) | 5.917575 / 3.745712 (2.171862) | 3.412715 / 5.269862 (-1.857146) | 2.203478 / 4.565676 (-2.362198) | 0.150387 / 0.424275 (-0.273888) | 0.015977 / 0.007607 (0.008370) | 0.862999 / 0.226044 (0.636954) | 8.706459 / 2.268929 (6.437530) | 3.762648 / 55.444624 (-51.681977) | 2.992544 / 6.876477 (-3.883933) | 3.135796 / 2.142072 (0.993724) | 1.504140 / 4.805227 (-3.301088) | 0.268265 / 6.500664 (-6.232399) | 0.083297 / 0.075469 (0.007828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.690193 / 1.841788 (-0.151594) | 19.912854 / 8.074308 (11.838546) | 23.568217 / 10.191392 (13.376825) | 0.285125 / 0.680424 (-0.395299) | 0.030593 / 0.534201 (-0.503608) | 0.565305 / 0.579283 (-0.013978) | 0.659283 / 0.434364 (0.224919) | 0.678864 / 0.540337 (0.138527) | 0.793634 / 1.386936 (-0.593302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d0edbe3f3258b7e580d1b58c0eea6637b5e22b2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011615 / 0.011353 (0.000262) | 0.006716 / 0.011008 (-0.004292) | 0.146868 / 0.038508 (0.108360) | 0.037621 / 0.023109 (0.014512) | 0.425563 / 0.275898 (0.149664) | 0.483217 / 0.323480 (0.159737) | 0.007830 / 0.007986 (-0.000156) | 0.005940 / 0.004328 (0.001612) | 0.100771 / 0.004250 (0.096521) | 0.063907 / 0.037052 (0.026854) | 0.422993 / 0.258489 (0.164503) | 0.496514 / 0.293841 (0.202673) | 0.056004 / 0.128546 (-0.072542) | 0.021441 / 0.075646 (-0.054206) | 0.453589 / 0.419271 (0.034317) | 0.067555 / 0.043533 (0.024022) | 0.442490 / 0.255139 (0.187351) | 0.503941 / 0.283200 (0.220742) | 0.134023 / 0.141683 (-0.007660) | 1.886329 / 1.452155 (0.434175) | 2.030867 / 1.492716 (0.538150) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288063 / 0.018006 (0.270057) | 0.627177 / 0.000490 (0.626687) | 0.006335 / 0.000200 (0.006135) | 0.000171 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032424 / 0.037411 (-0.004987) | 0.132749 / 0.014526 (0.118223) | 0.144727 / 0.176557 (-0.031829) | 0.232577 / 0.737135 (-0.504558) | 0.157315 / 0.296338 (-0.139024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.623058 / 0.215209 (0.407849) | 6.272447 / 2.077655 (4.194792) | 2.506778 / 1.504120 (1.002658) | 2.203094 / 1.541195 (0.661899) | 2.346972 / 1.468490 
(0.878482) | 1.358498 / 4.584777 (-3.226279) | 5.879670 / 3.745712 (2.133958) | 5.818406 / 5.269862 (0.548545) | 3.231936 / 4.565676 (-1.333741) | 0.154013 / 0.424275 (-0.270263) | 0.021541 / 0.007607 (0.013934) | 0.823746 / 0.226044 (0.597702) | 8.140304 / 2.268929 (5.871375) | 3.366911 / 55.444624 (-52.077714) | 2.696856 / 6.876477 (-4.179621) | 2.845743 / 2.142072 (0.703671) | 1.522363 / 4.805227 (-3.282864) | 0.278938 / 6.500664 (-6.221726) | 0.085044 / 0.075469 (0.009575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681348 / 1.841788 (-0.160440) | 19.686703 / 8.074308 (11.612395) | 22.995655 / 10.191392 (12.804263) | 0.218876 / 0.680424 (-0.461548) | 0.029334 / 0.534201 (-0.504867) | 0.560846 / 0.579283 (-0.018438) | 0.645210 / 0.434364 (0.210846) | 0.697842 / 0.540337 (0.157505) | 0.832875 / 1.386936 (-0.554061) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009509 / 0.011353 (-0.001844) | 0.006471 / 0.011008 (-0.004537) | 0.101477 / 0.038508 (0.062969) | 0.035281 / 0.023109 (0.012171) | 0.470032 / 0.275898 (0.194134) | 0.501475 / 0.323480 (0.177995) | 0.007641 / 0.007986 (-0.000344) | 0.006784 / 0.004328 (0.002455) | 0.096111 / 0.004250 (0.091861) | 0.055199 / 0.037052 (0.018146) | 0.470095 / 0.258489 (0.211606) | 0.530955 / 0.293841 (0.237114) | 0.056161 / 0.128546 (-0.072385) | 0.022055 / 0.075646 (-0.053591) | 0.121585 / 0.419271 (-0.297686) | 0.063736 / 0.043533 (0.020203) | 0.470771 / 0.255139 (0.215632) | 0.490546 / 0.283200 (0.207346) | 0.128825 / 0.141683 (-0.012858) | 1.898639 / 1.452155 (0.446484) | 2.052305 / 1.492716 (0.559589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322526 / 0.018006 (0.304520) | 0.628096 / 0.000490 (0.627607) | 0.006837 / 0.000200 (0.006637) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033830 / 0.037411 (-0.003581) | 0.136217 / 0.014526 (0.121691) | 0.147006 / 0.176557 (-0.029551) | 0.203950 / 0.737135 (-0.533185) | 0.150327 / 0.296338 (-0.146011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654287 / 0.215209 (0.439078) | 6.430306 / 2.077655 (4.352651) | 2.881750 / 1.504120 (1.377630) | 2.489505 / 1.541195 (0.948310) | 2.543037 / 1.468490 (1.074547) | 1.226682 / 4.584777 (-3.358094) | 5.902076 / 3.745712 (2.156364) | 3.335344 / 5.269862 (-1.934518) | 2.156738 / 4.565676 (-2.408939) | 0.151804 / 0.424275 (-0.272472) | 0.015238 / 0.007607 (0.007631) | 0.816364 / 0.226044 (0.590319) | 8.126367 / 2.268929 (5.857438) | 3.653222 / 55.444624 (-51.791402) | 2.886667 / 6.876477 (-3.989809) | 3.120852 / 2.142072 (0.978779) | 1.421423 / 4.805227 (-3.383804) | 0.264590 / 6.500664 (-6.236074) | 0.085716 / 0.075469 (0.010247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745258 / 1.841788 (-0.096530) | 19.379253 / 8.074308 (11.304945) | 23.827046 / 10.191392 (13.635654) | 0.267702 / 0.680424 (-0.412722) | 0.030253 / 0.534201 (-0.503948) | 0.542037 / 0.579283 (-0.037246) | 0.655946 / 0.434364 (0.221582) | 0.683525 / 0.540337 (0.143188) | 0.831333 / 1.386936 (-0.555603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b011a258329375aa4dc7b414bd4e7b6363c5357 \"CML watermark\")\n"
] | 2023-04-26T17:09:32 | 2023-04-26T17:49:03 | 2023-04-26T17:39:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5795",
"html_url": "https://github.com/huggingface/datasets/pull/5795",
"diff_url": "https://github.com/huggingface/datasets/pull/5795.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5795.patch",
"merged_at": "2023-04-26T17:39:12"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5795/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5794/comments | https://api.github.com/repos/huggingface/datasets/issues/5794/events | https://github.com/huggingface/datasets/issues/5794 | 1,685,196,061 | I_kwDODunzps5kcg0d | 5,794 | CI ZeroDivisionError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 2023-04-26T14:55:23 | 2023-04-26T14:55:23 | null | MEMBER | null | null | null | Sometimes when running our CI on Windows, we get a ZeroDivisionError:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero
```
See for example:
- https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110
- https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1
def speed_metrics(split, start_time, num_samples=None, num_steps=None):
"""
Measure and return speed performance metrics.
This function requires a time snapshot `start_time` before the operation to be measured starts and this function
should be run immediately after the operation to be measured has completed.
Args:
- split: name to prefix metric (like train, eval, test...)
- start_time: operation start time
- num_samples: number of samples processed
"""
runtime = time.time() - start_time
result = {f"{split}_runtime": round(runtime, 4)}
if num_samples is not None:
> samples_per_second = num_samples / runtime
E ZeroDivisionError: float division by zero
C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError
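# Hedged sketch of a possible guard (hypothetical, not the actual transformers fix):
# on Windows, time.time() resolution can make `runtime` come out as exactly 0.0,
# so the division could simply be skipped when no time has elapsed, e.g.
#     if runtime > 0:
#         result[f"{split}_samples_per_second"] = round(num_samples / runtime, 3)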
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5794/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5793/comments | https://api.github.com/repos/huggingface/datasets/issues/5793/events | https://github.com/huggingface/datasets/issues/5793 | 1,684,777,320 | I_kwDODunzps5ka6lo | 5,793 | IterableDataset.with_format("torch") not working | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Thanks for reporting, I'm working on it ;)"
] | 2023-04-26T10:50:23 | 2023-06-13T15:57:06 | 2023-06-13T15:57:06 | NONE | null | null | null | ### Describe the bug
After calling the `with_format("torch")` method on an `IterableDataset` instance, the data format is unchanged: the yielded examples still contain plain Python lists rather than PyTorch tensors.
### Steps to reproduce the bug
```python
from datasets import IterableDataset
def gen():
    for i in range(4):
        yield {"a": [i] * 4}
dataset = IterableDataset.from_generator(gen).with_format("torch")
next(iter(dataset))
```
### Expected behavior
`{"a": torch.tensor([0, 0, 0, 0])}` is expected, but `{"a": [0, 0, 0, 0]}` is observed.
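A possible interim workaround (a minimal sketch, not the eventual library fix; it relies on `IterableDataset.map`, which is applied lazily) is to convert the column manually:

```python
import torch
from datasets import IterableDataset

def gen():
    for i in range(4):
        yield {"a": [i] * 4}

# map() is evaluated lazily on iterable datasets, so each example is
# converted to a tensor only when the dataset is actually iterated
dataset = IterableDataset.from_generator(gen).map(
    lambda example: {"a": torch.tensor(example["a"])}
)
assert isinstance(next(iter(dataset))["a"], torch.Tensor)
```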
### Environment info
```bash
platform==ubuntu 22.04.01
python==3.10.9
datasets==2.11.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5793/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5791/comments | https://api.github.com/repos/huggingface/datasets/issues/5791/events | https://github.com/huggingface/datasets/issues/5791 | 1,683,473,943 | I_kwDODunzps5kV8YX | 5,791 | TIFF/TIF support | {
"login": "sebasmos",
"id": 31293221,
"node_id": "MDQ6VXNlcjMxMjkzMjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31293221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebasmos",
"html_url": "https://github.com/sebasmos",
"followers_url": "https://api.github.com/users/sebasmos/followers",
"following_url": "https://api.github.com/users/sebasmos/following{/other_user}",
"gists_url": "https://api.github.com/users/sebasmos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebasmos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebasmos/subscriptions",
"organizations_url": "https://api.github.com/users/sebasmos/orgs",
"repos_url": "https://api.github.com/users/sebasmos/repos",
"events_url": "https://api.github.com/users/sebasmos/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebasmos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"The issue with multichannel TIFF images has already been reported in Pillow (https://github.com/python-pillow/Pillow/issues/1888). We can't do much about it on our side.\r\n\r\nStill, to avoid the error, you can bypass the default Pillow decoding and define a custom one as follows:\r\n```python\r\nimport tifffile # pip install tifffile\r\n\r\ndset = dset.cast_column(\"image\", datasets.Image(decode=False))\r\n\r\ndef decode_mutlichannel_tiff(batch):\r\n batch[\"image\"] = [tifffile.imread(image[\"path\"]) for image in batch[\"image\"]]\r\n return batch\r\n\r\ndset.set_transform(decode_mutlichannel_tiff)\r\n```\r\n\r\nRegarding the annotations, in which format are they? In the COCO format? I think this is a bit too specific to have a built-in loader for it."
] | 2023-04-25T16:14:18 | 2023-05-05T16:22:50 | null | NONE | null | null | null | ### Feature request
I currently have a dataset (with TIFF and JSON files) where I have to do this:
`wget path_to_data/images.zip && unzip images.zip`
`wget path_to_data/annotations.zip && unzip annotations.zip`
Would a contribution that supports these types of files make sense?
### Motivation
Instead of using `load_dataset`, I have to use wget, because JSON annotations and TIFF images are not supported.
In addition, the PIL-based decoding in datasets does not read the image channels of TIFF files correctly; multichannel support might be necessary as well (my data, for example, has more than 3 channels).
### Your contribution
1. Support multichannel TIFF images
2. Support JSON annotations | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5791/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007852 / 0.011353 (-0.003500) | 0.005804 / 0.011008 (-0.005204) | 0.098268 / 0.038508 (0.059760) | 0.036440 / 0.023109 (0.013331) | 0.299952 / 0.275898 (0.024054) | 0.335590 / 0.323480 (0.012111) | 0.006332 / 0.007986 (-0.001653) | 0.004218 / 0.004328 (-0.000110) | 0.074733 / 0.004250 (0.070483) | 0.055252 / 0.037052 (0.018200) | 0.300854 / 0.258489 (0.042365) | 0.353442 / 0.293841 (0.059601) | 0.036447 / 0.128546 (-0.092099) | 0.012638 / 0.075646 (-0.063009) | 0.336680 / 0.419271 (-0.082591) | 0.052436 / 0.043533 (0.008903) | 0.292606 / 0.255139 (0.037467) | 0.319676 / 0.283200 (0.036476) | 0.111137 / 0.141683 (-0.030546) | 1.449569 / 1.452155 (-0.002586) | 1.558110 / 1.492716 (0.065394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306043 / 0.018006 (0.288037) | 0.563174 / 0.000490 (0.562684) | 0.032227 / 0.000200 (0.032027) | 0.000491 / 0.000054 (0.000436) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029874 / 0.037411 (-0.007537) | 0.109330 / 0.014526 (0.094805) | 0.122579 / 0.176557 (-0.053978) | 0.181398 / 0.737135 (-0.555737) | 0.127124 / 0.296338 (-0.169215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417950 / 0.215209 (0.202741) | 4.163883 / 2.077655 (2.086228) | 1.985209 / 1.504120 (0.481089) | 1.793660 / 1.541195 (0.252465) | 1.895193 / 1.468490 
(0.426703) | 0.694331 / 4.584777 (-3.890446) | 3.820170 / 3.745712 (0.074458) | 2.180556 / 5.269862 (-3.089305) | 1.490671 / 4.565676 (-3.075006) | 0.086132 / 0.424275 (-0.338143) | 0.012289 / 0.007607 (0.004682) | 0.511182 / 0.226044 (0.285137) | 5.117855 / 2.268929 (2.848927) | 2.403914 / 55.444624 (-53.040710) | 2.071107 / 6.876477 (-4.805369) | 2.184108 / 2.142072 (0.042036) | 0.835028 / 4.805227 (-3.970199) | 0.167707 / 6.500664 (-6.332957) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203921 / 1.841788 (-0.637867) | 15.214676 / 8.074308 (7.140368) | 14.971337 / 10.191392 (4.779945) | 0.170225 / 0.680424 (-0.510199) | 0.017924 / 0.534201 (-0.516277) | 0.428532 / 0.579283 (-0.150751) | 0.449157 / 0.434364 (0.014793) | 0.507723 / 0.540337 (-0.032614) | 0.615331 / 1.386936 (-0.771605) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008172 / 0.011353 (-0.003181) | 0.005405 / 0.011008 (-0.005603) | 0.074684 / 0.038508 (0.036176) | 0.039133 / 0.023109 (0.016024) | 0.342598 / 0.275898 (0.066700) | 0.377752 / 0.323480 (0.054272) | 0.006655 / 0.007986 (-0.001331) | 0.005788 / 0.004328 (0.001459) | 0.074014 / 0.004250 (0.069763) | 0.056225 / 0.037052 (0.019173) | 0.342330 / 0.258489 (0.083841) | 0.381052 / 0.293841 (0.087211) | 0.036574 / 0.128546 (-0.091973) | 0.012472 / 0.075646 (-0.063174) | 0.087574 / 0.419271 (-0.331698) | 0.050178 / 0.043533 (0.006646) | 0.351116 / 0.255139 (0.095977) | 0.363772 / 0.283200 (0.080572) | 0.118313 / 0.141683 (-0.023370) | 1.436691 / 1.452155 (-0.015463) | 1.551397 / 1.492716 (0.058680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265201 / 0.018006 (0.247195) | 0.561855 / 0.000490 (0.561366) | 0.000463 / 0.000200 (0.000263) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030540 / 0.037411 (-0.006871) | 0.118815 / 0.014526 (0.104289) | 0.127689 / 0.176557 (-0.048868) | 0.176211 / 0.737135 (-0.560924) | 0.133130 / 0.296338 (-0.163208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416318 / 0.215209 (0.201109) | 4.146806 / 2.077655 (2.069151) | 1.983437 / 1.504120 (0.479317) | 1.799733 / 1.541195 (0.258539) | 1.889026 / 1.468490 (0.420536) | 0.723330 / 4.584777 (-3.861447) | 3.817795 / 3.745712 (0.072083) | 2.158449 / 5.269862 (-3.111413) | 1.377348 / 4.565676 (-3.188328) | 0.088504 / 0.424275 (-0.335771) | 0.012560 / 0.007607 (0.004953) | 0.530382 / 0.226044 (0.304337) | 5.308529 / 2.268929 (3.039600) | 2.469655 / 55.444624 (-52.974970) | 2.136209 / 6.876477 (-4.740267) | 2.322997 / 2.142072 (0.180924) | 0.861396 / 4.805227 (-3.943831) | 0.172747 / 6.500664 (-6.327917) | 0.067617 / 0.075469 (-0.007852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263225 / 1.841788 (-0.578563) | 15.878025 / 8.074308 (7.803717) | 14.815627 / 10.191392 (4.624235) | 0.148722 / 0.680424 (-0.531702) | 0.018071 / 0.534201 (-0.516130) | 0.428389 / 0.579283 (-0.150894) | 0.428635 / 0.434364 (-0.005729) | 0.496953 / 0.540337 (-0.043385) | 0.592783 / 1.386936 (-0.794153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d2e5568dc7a47f9a99678d2889bd2e3c33afdd00 \"CML watermark\")\n"
] | 2023-04-25T13:57:26 | 2023-04-26T13:43:08 | 2023-04-26T13:35:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"merged_at": "2023-04-26T13:35:47"
} | This PR allows running the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow running CI tests without opening a PR, e.g., for future `huggingface-hub` releases or future dependency releases (like `fsspec`, `pandas`,...)
Note that, to build the documentation, we already allow this on push to a branch named "doc-builder*".
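For reference, the change amounts to adding a branch glob to the workflow's `push` trigger — a minimal YAML sketch, with the workflow file name and the other triggers assumed:
```yaml
# .github/workflows/ci.yml (file name assumed)
on:
  push:
    branches:
      - main
      - "ci-*"   # run the CI on push to any branch named ci-..., no PR needed
  pull_request:
```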
See:
- #5788
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5789/comments | https://api.github.com/repos/huggingface/datasets/issues/5789/events | https://github.com/huggingface/datasets/issues/5789 | 1,682,611,179 | I_kwDODunzps5kSpvr | 5,789 | Support streaming datasets that use jsonlines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-25T07:40:02 | 2023-04-25T07:40:03 | null | MEMBER | null | null | null | Extend support for streaming datasets that use `jsonlines.open`.
Currently, streaming a dataset whose loading script uses `jsonlines.open` raises a `FileNotFoundError`:
```
FileNotFoundError: [Errno 2] No such file or directory: 'https://...'
```
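Until this is supported natively, a workaround in the dataset script is to rely on the built-in `open` (which `datasets` patches for streaming) and wrap the file object with `jsonlines.Reader` — a minimal sketch, with the reading helper assumed:
```python
import jsonlines  # pip install jsonlines

def read_jsonl(filepath):
    # Instead of `jsonlines.open(filepath)`, which bypasses the patched `open`,
    # use the built-in `open`, which `datasets` patches when streaming:
    with open(filepath, encoding="utf-8") as f:
        for obj in jsonlines.Reader(f):
            yield obj  # one JSON object per line
```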
See:
- https://huggingface.co/datasets/masakhane/afriqa/discussions/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5789/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5789/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5788/comments | https://api.github.com/repos/huggingface/datasets/issues/5788/events | https://github.com/huggingface/datasets/pull/5788 | 1,681,136,256 | PR_kwDODunzps5O_v4B | 5,788 | Prepare tests for hfh 0.14 | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007343 / 0.011353 (-0.004010) | 0.005145 / 0.011008 (-0.005863) | 0.099820 / 0.038508 (0.061312) | 0.033487 / 0.023109 (0.010378) | 0.313069 / 0.275898 (0.037171) | 0.335420 / 0.323480 (0.011940) | 0.005959 / 0.007986 (-0.002027) | 0.005373 / 0.004328 (0.001044) | 0.076568 / 0.004250 (0.072317) | 0.048702 / 0.037052 (0.011650) | 0.322957 / 0.258489 (0.064468) | 0.363044 / 0.293841 (0.069203) | 0.035070 / 0.128546 (-0.093476) | 0.012029 / 0.075646 (-0.063618) | 0.334664 / 0.419271 (-0.084607) | 0.050549 / 0.043533 (0.007017) | 0.310113 / 0.255139 (0.054974) | 0.324405 / 0.283200 (0.041205) | 0.097596 / 0.141683 (-0.044087) | 1.440741 / 1.452155 (-0.011414) | 1.531194 / 1.492716 (0.038478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220799 / 0.018006 (0.202793) | 0.438158 / 0.000490 (0.437668) | 0.007737 / 0.000200 (0.007537) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026888 / 0.037411 (-0.010523) | 0.106281 / 0.014526 (0.091755) | 0.117419 / 0.176557 (-0.059138) | 0.179144 / 0.737135 (-0.557992) | 0.122477 / 0.296338 (-0.173861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412667 / 0.215209 (0.197458) | 4.108784 / 2.077655 (2.031129) | 1.834300 / 1.504120 (0.330180) | 1.627256 / 1.541195 (0.086061) | 1.691036 / 1.468490 
(0.222546) | 0.713405 / 4.584777 (-3.871372) | 3.839262 / 3.745712 (0.093550) | 2.108453 / 5.269862 (-3.161408) | 1.340740 / 4.565676 (-3.224936) | 0.087776 / 0.424275 (-0.336499) | 0.012730 / 0.007607 (0.005123) | 0.505323 / 0.226044 (0.279279) | 5.085176 / 2.268929 (2.816247) | 2.307165 / 55.444624 (-53.137459) | 1.936771 / 6.876477 (-4.939706) | 2.097391 / 2.142072 (-0.044681) | 0.856215 / 4.805227 (-3.949012) | 0.171826 / 6.500664 (-6.328838) | 0.066603 / 0.075469 (-0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202126 / 1.841788 (-0.639661) | 15.173598 / 8.074308 (7.099290) | 15.012645 / 10.191392 (4.821253) | 0.162187 / 0.680424 (-0.518237) | 0.017462 / 0.534201 (-0.516739) | 0.423895 / 0.579283 (-0.155388) | 0.432010 / 0.434364 (-0.002354) | 0.503234 / 0.540337 (-0.037104) | 0.598948 / 1.386936 (-0.787988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007099 / 0.011353 (-0.004254) | 0.005167 / 0.011008 (-0.005841) | 0.075551 / 0.038508 (0.037043) | 0.033050 / 0.023109 (0.009940) | 0.339629 / 0.275898 (0.063731) | 0.380486 / 0.323480 (0.057006) | 0.005776 / 0.007986 (-0.002209) | 0.004029 / 0.004328 (-0.000299) | 0.075074 / 0.004250 (0.070823) | 0.046709 / 0.037052 (0.009656) | 0.340203 / 0.258489 (0.081714) | 0.380849 / 0.293841 (0.087008) | 0.035027 / 0.128546 (-0.093519) | 0.012226 / 0.075646 (-0.063420) | 0.087525 / 0.419271 (-0.331747) | 0.049361 / 0.043533 (0.005828) | 0.341854 / 0.255139 (0.086715) | 0.359590 / 0.283200 (0.076390) | 0.100102 / 0.141683 (-0.041581) | 1.482759 / 1.452155 (0.030605) | 1.569905 / 1.492716 (0.077189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213615 / 0.018006 (0.195609) | 0.441117 / 0.000490 (0.440628) | 0.004932 / 0.000200 (0.004732) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031313 / 0.037411 (-0.006098) | 0.110191 / 0.014526 (0.095665) | 0.125320 / 0.176557 (-0.051237) | 0.177658 / 0.737135 (-0.559477) | 0.127928 / 0.296338 (-0.168410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211743) | 4.247731 / 2.077655 (2.170076) | 2.107318 / 1.504120 (0.603198) | 1.843845 / 1.541195 (0.302650) | 1.894822 / 1.468490 (0.426332) | 0.696232 / 4.584777 (-3.888545) | 3.826516 / 3.745712 (0.080804) | 2.126688 / 5.269862 (-3.143174) | 1.327062 / 4.565676 (-3.238615) | 0.085693 / 0.424275 (-0.338582) | 0.012226 / 0.007607 (0.004619) | 0.521904 / 0.226044 (0.295859) | 5.219798 / 2.268929 (2.950869) | 2.524908 / 55.444624 (-52.919716) | 2.212078 / 6.876477 (-4.664399) | 2.373944 / 2.142072 (0.231871) | 0.833846 / 4.805227 (-3.971381) | 0.169639 / 6.500664 (-6.331025) | 0.064538 / 0.075469 (-0.010931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254930 / 1.841788 (-0.586858) | 15.585277 / 8.074308 (7.510969) | 14.762857 / 10.191392 (4.571465) | 0.146959 / 0.680424 (-0.533465) | 0.017451 / 0.534201 (-0.516750) | 0.424469 / 0.579283 (-0.154814) | 0.422359 / 0.434364 (-0.012004) | 0.489930 / 0.540337 (-0.050408) | 0.595856 / 1.386936 (-0.791080) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#213c72f52ae52b662f967d3218f66c70a3043048 \"CML watermark\")\n",
"@albertvillanova thanks for the review. As you prefer for the github CI config. I just took it from @lhoestq's branch when testing hfh==0.14.0. I think it's still relevant for next releases. In any case, I let you handle merging the PR :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008371 / 0.011353 (-0.002982) | 0.005210 / 0.011008 (-0.005798) | 0.105639 / 0.038508 (0.067131) | 0.045903 / 0.023109 (0.022794) | 0.391231 / 0.275898 (0.115333) | 0.438824 / 0.323480 (0.115345) | 0.006270 / 0.007986 (-0.001715) | 0.005950 / 0.004328 (0.001621) | 0.079685 / 0.004250 (0.075434) | 0.052121 / 0.037052 (0.015069) | 0.387787 / 0.258489 (0.129298) | 0.434322 / 0.293841 (0.140481) | 0.032598 / 0.128546 (-0.095948) | 0.012126 / 0.075646 (-0.063520) | 0.359658 / 0.419271 (-0.059613) | 0.046686 / 0.043533 (0.003154) | 0.391973 / 0.255139 (0.136834) | 0.421149 / 0.283200 (0.137949) | 0.105920 / 0.141683 (-0.035763) | 1.483008 / 1.452155 (0.030854) | 1.617010 / 1.492716 (0.124294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199111 / 0.018006 (0.181105) | 0.407995 / 0.000490 (0.407505) | 0.006706 / 0.000200 (0.006506) | 0.000229 / 0.000054 (0.000175) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030247 / 0.037411 (-0.007164) | 0.115977 / 0.014526 (0.101451) | 0.118112 / 0.176557 (-0.058444) | 0.182710 / 0.737135 (-0.554426) | 0.122483 / 0.296338 (-0.173855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430455 / 0.215209 (0.215246) | 4.314298 / 2.077655 (2.236643) | 1.898124 / 1.504120 (0.394005) | 1.734909 / 1.541195 (0.193715) | 1.802400 / 1.468490 
(0.333910) | 0.717237 / 4.584777 (-3.867539) | 4.004705 / 3.745712 (0.258993) | 2.138901 / 5.269862 (-3.130960) | 1.254037 / 4.565676 (-3.311640) | 0.085594 / 0.424275 (-0.338681) | 0.013774 / 0.007607 (0.006166) | 0.535218 / 0.226044 (0.309174) | 5.373730 / 2.268929 (3.104801) | 2.371194 / 55.444624 (-53.073430) | 2.111206 / 6.876477 (-4.765270) | 2.225137 / 2.142072 (0.083064) | 0.838325 / 4.805227 (-3.966902) | 0.159176 / 6.500664 (-6.341488) | 0.072285 / 0.075469 (-0.003184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352232 / 1.841788 (-0.489555) | 16.926722 / 8.074308 (8.852414) | 16.709531 / 10.191392 (6.518139) | 0.159249 / 0.680424 (-0.521175) | 0.017667 / 0.534201 (-0.516534) | 0.426894 / 0.579283 (-0.152390) | 0.539903 / 0.434364 (0.105539) | 0.537471 / 0.540337 (-0.002866) | 0.619592 / 1.386936 (-0.767344) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008354 / 0.011353 (-0.002999) | 0.005366 / 0.011008 (-0.005642) | 0.080961 / 0.038508 (0.042453) | 0.046574 / 0.023109 (0.023465) | 0.345949 / 0.275898 (0.070051) | 0.394041 / 0.323480 (0.070562) | 0.006209 / 0.007986 (-0.001777) | 0.005980 / 0.004328 (0.001651) | 0.076235 / 0.004250 (0.071984) | 0.051833 / 0.037052 (0.014780) | 0.348786 / 0.258489 (0.090297) | 0.397421 / 0.293841 (0.103580) | 0.033026 / 0.128546 (-0.095520) | 0.012217 / 0.075646 (-0.063429) | 0.087439 / 0.419271 (-0.331832) | 0.045488 / 0.043533 (0.001955) | 0.352160 / 0.255139 (0.097021) | 0.379079 / 0.283200 (0.095879) | 0.116111 / 0.141683 (-0.025572) | 1.470177 / 1.452155 (0.018022) | 1.587499 / 1.492716 (0.094783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296149 / 0.018006 (0.278143) | 0.592362 / 0.000490 (0.591872) | 0.000492 / 0.000200 (0.000292) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036599 / 0.037411 (-0.000813) | 0.113768 / 0.014526 (0.099242) | 0.116198 / 0.176557 (-0.060358) | 0.180329 / 0.737135 (-0.556806) | 0.123942 / 0.296338 (-0.172396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452445 / 0.215209 (0.237236) | 4.504330 / 2.077655 (2.426675) | 2.275645 / 1.504120 (0.771525) | 2.107765 / 1.541195 (0.566571) | 2.086363 / 1.468490 (0.617873) | 0.723721 / 4.584777 (-3.861056) | 3.825330 / 3.745712 (0.079618) | 2.162743 / 5.269862 (-3.107119) | 1.255953 / 4.565676 (-3.309724) | 0.085860 / 0.424275 (-0.338415) | 0.013790 / 0.007607 (0.006183) | 0.560257 / 0.226044 (0.334213) | 5.618180 / 2.268929 (3.349251) | 2.625423 / 55.444624 (-52.819202) | 2.374381 / 6.876477 (-4.502095) | 2.496560 / 2.142072 (0.354488) | 0.841120 / 4.805227 (-3.964107) | 0.161541 / 6.500664 (-6.339123) | 0.075270 / 0.075469 (-0.000199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432916 / 1.841788 (-0.408872) | 14.858534 / 8.074308 (6.784226) | 14.973521 / 10.191392 (4.782129) | 0.148312 / 0.680424 (-0.532112) | 0.016811 / 0.534201 (-0.517390) | 0.382623 / 0.579283 (-0.196660) | 0.389767 / 0.434364 (-0.044596) | 0.449657 / 0.540337 (-0.090680) | 0.533723 / 1.386936 (-0.853214) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8344350f15265a585188ac986ae49a8ed8289fe \"CML watermark\")\n",
"I agree it is good to have a way to run the CI on push, without needing to open a PR.\r\n\r\nBut I think the branch name should be more generic (and this is not specific to this PR). See:\r\n- #5790 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007208 / 0.011353 (-0.004145) | 0.005600 / 0.011008 (-0.005408) | 0.096129 / 0.038508 (0.057621) | 0.027834 / 0.023109 (0.004725) | 0.295106 / 0.275898 (0.019208) | 0.323983 / 0.323480 (0.000503) | 0.005164 / 0.007986 (-0.002822) | 0.003962 / 0.004328 (-0.000366) | 0.078339 / 0.004250 (0.074089) | 0.036974 / 0.037052 (-0.000078) | 0.310315 / 0.258489 (0.051826) | 0.338036 / 0.293841 (0.044195) | 0.042124 / 0.128546 (-0.086422) | 0.015886 / 0.075646 (-0.059760) | 0.337961 / 0.419271 (-0.081310) | 0.051507 / 0.043533 (0.007974) | 0.297505 / 0.255139 (0.042366) | 0.310728 / 0.283200 (0.027528) | 0.086312 / 0.141683 (-0.055371) | 1.356923 / 1.452155 (-0.095232) | 1.429366 / 1.492716 (-0.063350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205495 / 0.018006 (0.187489) | 0.460639 / 0.000490 (0.460149) | 0.003996 / 0.000200 (0.003796) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021970 / 0.037411 (-0.015442) | 0.090283 / 0.014526 (0.075757) | 0.098579 / 0.176557 (-0.077978) | 0.160437 / 0.737135 (-0.576699) | 0.102738 / 0.296338 (-0.193600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494474 / 0.215209 (0.279265) | 4.967453 / 2.077655 (2.889799) | 2.045852 / 1.504120 (0.541732) | 1.858022 / 1.541195 (0.316827) | 1.771874 / 1.468490 
(0.303384) | 1.186368 / 4.584777 (-3.398408) | 4.974762 / 3.745712 (1.229050) | 2.616225 / 5.269862 (-2.653636) | 1.702971 / 4.565676 (-2.862705) | 0.124929 / 0.424275 (-0.299346) | 0.011774 / 0.007607 (0.004167) | 0.569643 / 0.226044 (0.343598) | 5.793114 / 2.268929 (3.524186) | 2.441561 / 55.444624 (-53.003064) | 1.862233 / 6.876477 (-5.014243) | 1.931142 / 2.142072 (-0.210931) | 1.148915 / 4.805227 (-3.656313) | 0.203914 / 6.500664 (-6.296750) | 0.062468 / 0.075469 (-0.013001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188708 / 1.841788 (-0.653080) | 13.710830 / 8.074308 (5.636522) | 15.695153 / 10.191392 (5.503761) | 0.171467 / 0.680424 (-0.508957) | 0.024509 / 0.534201 (-0.509692) | 0.450270 / 0.579283 (-0.129014) | 0.500712 / 0.434364 (0.066348) | 0.488632 / 0.540337 (-0.051706) | 0.574893 / 1.386936 (-0.812043) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007254 / 0.011353 (-0.004099) | 0.006199 / 0.011008 (-0.004809) | 0.072079 / 0.038508 (0.033571) | 0.026909 / 0.023109 (0.003800) | 0.355538 / 0.275898 (0.079640) | 0.358625 / 0.323480 (0.035145) | 0.005564 / 0.007986 (-0.002421) | 0.005278 / 0.004328 (0.000950) | 0.076469 / 0.004250 (0.072219) | 0.038269 / 0.037052 (0.001216) | 0.355214 / 0.258489 (0.096725) | 0.383219 / 0.293841 (0.089378) | 0.046516 / 0.128546 (-0.082030) | 0.015393 / 0.075646 (-0.060254) | 0.088506 / 0.419271 (-0.330765) | 0.050326 / 0.043533 (0.006793) | 0.327265 / 0.255139 (0.072126) | 0.370176 / 0.283200 (0.086976) | 0.102438 / 0.141683 (-0.039245) | 1.378969 / 1.452155 (-0.073186) | 1.441998 / 1.492716 (-0.050719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209044 / 0.018006 (0.191038) | 0.455733 / 0.000490 (0.455243) | 0.005856 / 0.000200 (0.005656) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025336 / 0.037411 (-0.012075) | 0.097449 / 0.014526 (0.082923) | 0.106301 / 0.176557 (-0.070255) | 0.153053 / 0.737135 (-0.584082) | 0.107938 / 0.296338 (-0.188401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491070 / 0.215209 (0.275861) | 5.049637 / 2.077655 (2.971982) | 2.064709 / 1.504120 (0.560589) | 1.782266 / 1.541195 (0.241072) | 1.798570 / 1.468490 (0.330080) | 0.988886 / 4.584777 (-3.595891) | 4.690324 / 3.745712 (0.944612) | 4.317355 / 5.269862 (-0.952507) | 2.347596 / 4.565676 (-2.218081) | 0.117249 / 0.424275 (-0.307026) | 0.011614 / 0.007607 (0.004007) | 0.630033 / 0.226044 (0.403988) | 6.140108 / 2.268929 (3.871180) | 2.638080 / 55.444624 (-52.806545) | 2.133017 / 6.876477 (-4.743459) | 2.123392 / 2.142072 (-0.018680) | 1.178056 / 4.805227 (-3.627171) | 0.209465 / 6.500664 (-6.291199) | 0.063234 / 0.075469 (-0.012235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238089 / 1.841788 (-0.603699) | 14.066866 / 8.074308 (5.992558) | 16.225480 / 10.191392 (6.034088) | 0.206466 / 0.680424 (-0.473958) | 0.027279 / 0.534201 (-0.506922) | 0.443006 / 0.579283 (-0.136277) | 0.509512 / 0.434364 (0.075148) | 0.479075 / 0.540337 (-0.061263) | 0.573546 / 1.386936 (-0.813390) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c6015a070c66a5bbd84603d415ccc57cb668b44b \"CML watermark\")\n"
] | 2023-04-24T12:13:03 | 2023-04-25T14:32:56 | 2023-04-25T14:25:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5788",
"html_url": "https://github.com/huggingface/datasets/pull/5788",
"diff_url": "https://github.com/huggingface/datasets/pull/5788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5788.patch",
"merged_at": "2023-04-25T14:25:30"
} | Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI, but I expect the fixed tests to run fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst-case scenario, existing PRs will have to be rebased once this fix is merged.
See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack).
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5788/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5787/comments | https://api.github.com/repos/huggingface/datasets/issues/5787/events | https://github.com/huggingface/datasets/pull/5787 | 1,680,965,959 | PR_kwDODunzps5O_KNU | 5,787 | Fix inferring module for unsupported data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can revert the last commit - it should fail if data_files={} IMO",
"The validation of non-empty data_files is addressed in this PR:\r\n- #5802",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002730) | 0.005970 / 0.011008 (-0.005038) | 0.117797 / 0.038508 (0.079289) | 0.040955 / 0.023109 (0.017846) | 0.419538 / 0.275898 (0.143640) | 0.455816 / 0.323480 (0.132336) | 0.006481 / 0.007986 (-0.001505) | 0.004507 / 0.004328 (0.000178) | 0.089073 / 0.004250 (0.084822) | 0.052389 / 0.037052 (0.015337) | 0.420053 / 0.258489 (0.161564) | 0.466886 / 0.293841 (0.173045) | 0.042660 / 0.128546 (-0.085886) | 0.014673 / 0.075646 (-0.060973) | 0.411229 / 0.419271 (-0.008042) | 0.076993 / 0.043533 (0.033460) | 0.431693 / 0.255139 (0.176554) | 0.446283 / 0.283200 (0.163084) | 0.131408 / 0.141683 (-0.010275) | 1.820339 / 1.452155 (0.368184) | 1.952946 / 1.492716 (0.460230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246543 / 0.018006 (0.228537) | 0.489806 / 0.000490 (0.489317) | 0.013999 / 0.000200 (0.013800) | 0.000323 / 0.000054 (0.000269) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032541 / 0.037411 (-0.004870) | 0.130569 / 0.014526 (0.116043) | 0.139630 / 0.176557 (-0.036926) | 0.217018 / 0.737135 (-0.520118) | 0.147914 / 0.296338 (-0.148425) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494767 / 0.215209 (0.279558) | 4.949313 / 2.077655 (2.871658) | 2.277023 / 1.504120 (0.772903) | 2.036677 / 1.541195 (0.495482) | 2.064461 / 1.468490 
(0.595970) | 0.842484 / 4.584777 (-3.742293) | 4.720646 / 3.745712 (0.974934) | 4.025673 / 5.269862 (-1.244189) | 2.198606 / 4.565676 (-2.367070) | 0.103042 / 0.424275 (-0.321233) | 0.014794 / 0.007607 (0.007187) | 0.617867 / 0.226044 (0.391822) | 6.197146 / 2.268929 (3.928218) | 2.804927 / 55.444624 (-52.639697) | 2.426420 / 6.876477 (-4.450057) | 2.515182 / 2.142072 (0.373109) | 1.008098 / 4.805227 (-3.797129) | 0.204982 / 6.500664 (-6.295682) | 0.078643 / 0.075469 (0.003174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490790 / 1.841788 (-0.350997) | 17.268042 / 8.074308 (9.193734) | 17.129647 / 10.191392 (6.938255) | 0.170351 / 0.680424 (-0.510073) | 0.021317 / 0.534201 (-0.512884) | 0.517068 / 0.579283 (-0.062215) | 0.500200 / 0.434364 (0.065836) | 0.641974 / 0.540337 (0.101637) | 0.763984 / 1.386936 (-0.622952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005710 / 0.011008 (-0.005298) | 0.091077 / 0.038508 (0.052569) | 0.040413 / 0.023109 (0.017303) | 0.416634 / 0.275898 (0.140736) | 0.451122 / 0.323480 (0.127642) | 0.006417 / 0.007986 (-0.001569) | 0.004360 / 0.004328 (0.000032) | 0.089543 / 0.004250 (0.085292) | 0.051137 / 0.037052 (0.014085) | 0.420228 / 0.258489 (0.161739) | 0.458649 / 0.293841 (0.164808) | 0.041828 / 0.128546 (-0.086718) | 0.014268 / 0.075646 (-0.061379) | 0.105301 / 0.419271 (-0.313970) | 0.058931 / 0.043533 (0.015398) | 0.413445 / 0.255139 (0.158306) | 0.443882 / 0.283200 (0.160682) | 0.124946 / 0.141683 (-0.016737) | 1.842259 / 1.452155 (0.390104) | 1.948162 / 1.492716 (0.455445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235799 / 0.018006 (0.217792) | 0.487667 / 0.000490 (0.487177) | 0.001112 / 0.000200 (0.000912) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.136593 / 0.014526 (0.122068) | 0.145598 / 0.176557 (-0.030959) | 0.206545 / 0.737135 (-0.530590) | 0.150781 / 0.296338 (-0.145558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522345 / 0.215209 (0.307136) | 5.192092 / 2.077655 (3.114438) | 2.543182 / 1.504120 (1.039062) | 2.285212 / 1.541195 (0.744018) | 2.312803 / 1.468490 (0.844313) | 0.859334 / 4.584777 (-3.725443) | 4.620235 / 3.745712 (0.874523) | 3.964060 / 5.269862 (-1.305802) | 2.046347 / 4.565676 (-2.519330) | 0.105284 / 0.424275 (-0.318991) | 0.015051 / 0.007607 (0.007444) | 0.646530 / 0.226044 (0.420485) | 6.386396 / 2.268929 (4.117467) | 3.131833 / 55.444624 (-52.312791) | 2.761898 / 6.876477 (-4.114579) | 2.833216 / 2.142072 (0.691143) | 1.026024 / 4.805227 (-3.779204) | 0.206776 / 6.500664 (-6.293888) | 0.078845 / 0.075469 (0.003376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580851 / 1.841788 (-0.260937) | 17.826213 / 8.074308 (9.751905) | 16.929460 / 10.191392 (6.738068) | 0.232483 / 0.680424 (-0.447941) | 0.021123 / 0.534201 (-0.513078) | 0.522196 / 0.579283 (-0.057087) | 0.503495 / 0.434364 (0.069131) | 0.622777 / 0.540337 (0.082440) | 0.753272 / 1.386936 (-0.633664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f9dfbd93707665132abc862b14bb9b50597b739 \"CML watermark\")\n"
] | 2023-04-24T10:44:50 | 2023-04-27T13:06:01 | 2023-04-27T12:57:28 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5787",
"html_url": "https://github.com/huggingface/datasets/pull/5787",
"diff_url": "https://github.com/huggingface/datasets/pull/5787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5787.patch",
"merged_at": "2023-04-27T12:57:28"
} | This PR raises a FileNotFoundError instead:
```
FileNotFoundError: No (supported) data files or dataset script found in <dataset_name>
```
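For illustration, loading a dataset repository that only contains unsupported files would now surface like this (the repository name below is made up):
```python
from datasets import load_dataset

# Hypothetical repo with neither supported data files nor a loading script:
load_dataset("username/repo-with-unsupported-files")
# FileNotFoundError: No (supported) data files or dataset script found in username/repo-with-unsupported-files
```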
Fix #5785. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5787/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5786/comments | https://api.github.com/repos/huggingface/datasets/issues/5786/events | https://github.com/huggingface/datasets/issues/5786 | 1,680,957,070 | I_kwDODunzps5kMV6O | 5,786 | Multiprocessing in a `filter` or `map` function with a Pytorch model | {
"login": "HugoLaurencon",
"id": 44556846,
"node_id": "MDQ6VXNlcjQ0NTU2ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HugoLaurencon",
"html_url": "https://github.com/HugoLaurencon",
"followers_url": "https://api.github.com/users/HugoLaurencon/followers",
"following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}",
"gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions",
"organizations_url": "https://api.github.com/users/HugoLaurencon/orgs",
"repos_url": "https://api.github.com/users/HugoLaurencon/repos",
"events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HugoLaurencon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimport multiprocess.context as ctx\r\nctx._force_start_method('spawn')\r\n```\r\n\r\nAlso make sure to run your main code in `if __name__ == \"__main__\":` to avoid issues with python multiprocesing",
"Thanks!",
"@lhoestq Hello, I also encountered this problem but maybe with another reason. Here is my code:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir, model_max_length=training_args.model_max_length)\r\ndata = load_dataset(\"json\", data_files=data_args.train_file, cache_dir=data_args.data_cache_dir)\r\ndef func(samples):\r\n # main operation\r\n for sentence_value in samples:\r\n sentence_ids = tokenizer.encode(sentence_value, add_special_tokens=False, max_length=tokenizer.model_max_length, truncation=True)\r\n ... ...\r\ntrain_data = data[\"train\"].shuffle().map(func, num_proc=os.cpu_count())\r\n```\r\nIt hangs after the progress reaches 100%. Could you help me point out the reason?",
"@SkyAndCloud your issue doesn't seem related to the original post - could you open a new issue and provide more details ? (size of the dataset, number of cpus, how much time it took to run, `datasets` version)",
"@lhoestq Hi, I just solved this problem. Because the input is extremely long and the tokenizer requests a large amount of memory, which leads to a OOM error and may eventually causes the hang. I didn't filter those too-long sentences because I thought `tokenizer` would stop once the length exceeds the `max_length`. However, it actually firstly complete the tokenization of entire sentence and then truncate it."
] | 2023-04-24T10:38:07 | 2023-05-30T09:56:30 | 2023-04-24T10:43:58 | MEMBER | null | null | null | ### Describe the bug
I am trying to use a PyTorch model loaded on CPU inside a `.map` or `.filter` call with multiple processes.
Usually, when dealing with models that are not picklable, wrapping the model in a class whose `__call__` method is the map function, and adding a `__reduce__` method, solves the problem.
However, here, the command hangs without throwing an error.
### Steps to reproduce the bug
```
from datasets import Dataset
import torch
from torch import nn
from torchvision import models


class FilterFunction:
    # __slots__ = ("path_model", "model")  # Doesn't change anything uncommented
    def __init__(self, path_model):
        self.path_model = path_model
        model = models.resnet50()
        model.fc = nn.Sequential(
            nn.Linear(2048, 512),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, 10),
            nn.LogSoftmax(dim=1)
        )
        model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu")))
        model.eval()
        self.model = model

    def __call__(self, batch):
        return [True] * len(batch["id"])

    # Comment this to have an error
    def __reduce__(self):
        return (self.__class__, (self.path_model,))


dataset = Dataset.from_dict({"id": [0, 1, 2, 4]})

# Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth
path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth"

filter_function = FilterFunction(path_model=path_model)

# Works
filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2)
# Doesn't work
filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)
```
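For reference, the workaround suggested in the comments is to force the "spawn" start method in `multiprocess` before any workers are created; a minimal sketch, reusing `FilterFunction` and `path_model` from the snippet above:
```python
# Suggested fix from the comments: "spawn" avoids the hang that occurs when
# load_state_dict() runs in forked subprocesses.
import multiprocess.context as ctx
ctx._force_start_method('spawn')

from datasets import Dataset

if __name__ == "__main__":  # guarding the main code is also recommended in the comments
    dataset = Dataset.from_dict({"id": [0, 1, 2, 4]})
    filter_function = FilterFunction(path_model=path_model)  # class and path as defined above
    filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)
```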
### Expected behavior
The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang.
### Environment info
Datasets: 2.11.0
Pyarrow: 11.0.0
Ubuntu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5786/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5785/comments | https://api.github.com/repos/huggingface/datasets/issues/5785/events | https://github.com/huggingface/datasets/issues/5785 | 1,680,956,964 | I_kwDODunzps5kMV4k | 5,785 | Unsupported data files raise TypeError: 'NoneType' object is not iterable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-24T10:38:03 | 2023-04-27T12:57:30 | 2023-04-27T12:57:30 | MEMBER | null | null | null | Currently, we raise a TypeError for unsupported data files:
```
TypeError: 'NoneType' object is not iterable
```
See:
- https://github.com/huggingface/datasets-server/issues/1073
We should give a more informative error message. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5785/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5784/comments | https://api.github.com/repos/huggingface/datasets/issues/5784/events | https://github.com/huggingface/datasets/pull/5784 | 1,680,950,726 | PR_kwDODunzps5O_G9S | 5,784 | Raise subprocesses traceback when interrupting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008959 / 0.011353 (-0.002394) | 0.005804 / 0.011008 (-0.005204) | 0.112663 / 0.038508 (0.074155) | 0.043406 / 0.023109 (0.020297) | 0.348582 / 0.275898 (0.072684) | 0.382332 / 0.323480 (0.058852) | 0.007469 / 0.007986 (-0.000517) | 0.006211 / 0.004328 (0.001883) | 0.086576 / 0.004250 (0.082326) | 0.059223 / 0.037052 (0.022170) | 0.361051 / 0.258489 (0.102562) | 0.411359 / 0.293841 (0.117518) | 0.043640 / 0.128546 (-0.084906) | 0.014239 / 0.075646 (-0.061408) | 0.389729 / 0.419271 (-0.029542) | 0.072319 / 0.043533 (0.028786) | 0.351025 / 0.255139 (0.095886) | 0.371893 / 0.283200 (0.088693) | 0.125994 / 0.141683 (-0.015688) | 1.675249 / 1.452155 (0.223094) | 1.808740 / 1.492716 (0.316024) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255172 / 0.018006 (0.237166) | 0.536003 / 0.000490 (0.535514) | 0.000365 / 0.000200 (0.000165) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031989 / 0.037411 (-0.005423) | 0.126854 / 0.014526 (0.112328) | 0.142458 / 0.176557 (-0.034098) | 0.207821 / 0.737135 (-0.529314) | 0.145610 / 0.296338 (-0.150728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468924 / 0.215209 (0.253715) | 4.696677 / 2.077655 (2.619023) | 2.183133 / 1.504120 (0.679013) | 1.994219 / 1.541195 (0.453024) | 2.101375 / 1.468490 
(0.632885) | 0.827168 / 4.584777 (-3.757609) | 4.710167 / 3.745712 (0.964455) | 2.377062 / 5.269862 (-2.892800) | 1.712245 / 4.565676 (-2.853431) | 0.100620 / 0.424275 (-0.323655) | 0.014302 / 0.007607 (0.006695) | 0.590813 / 0.226044 (0.364769) | 5.871991 / 2.268929 (3.603063) | 2.722229 / 55.444624 (-52.722395) | 2.323585 / 6.876477 (-4.552892) | 2.503289 / 2.142072 (0.361217) | 0.983644 / 4.805227 (-3.821583) | 0.193942 / 6.500664 (-6.306722) | 0.076493 / 0.075469 (0.001024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.463107 / 1.841788 (-0.378681) | 17.876918 / 8.074308 (9.802610) | 16.755740 / 10.191392 (6.564348) | 0.167556 / 0.680424 (-0.512868) | 0.020514 / 0.534201 (-0.513687) | 0.508385 / 0.579283 (-0.070898) | 0.505873 / 0.434364 (0.071509) | 0.603630 / 0.540337 (0.063293) | 0.708856 / 1.386936 (-0.678080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008504 / 0.011353 (-0.002849) | 0.005894 / 0.011008 (-0.005114) | 0.085523 / 0.038508 (0.047015) | 0.038780 / 0.023109 (0.015671) | 0.402869 / 0.275898 (0.126971) | 0.423819 / 0.323480 (0.100339) | 0.006427 / 0.007986 (-0.001559) | 0.004598 / 0.004328 (0.000269) | 0.079807 / 0.004250 (0.075556) | 0.050852 / 0.037052 (0.013799) | 0.403232 / 0.258489 (0.144743) | 0.452489 / 0.293841 (0.158648) | 0.041501 / 0.128546 (-0.087045) | 0.014996 / 0.075646 (-0.060650) | 0.101548 / 0.419271 (-0.317724) | 0.056993 / 0.043533 (0.013461) | 0.403153 / 0.255139 (0.148014) | 0.424587 / 0.283200 (0.141388) | 0.114507 / 0.141683 (-0.027176) | 1.707098 / 1.452155 (0.254943) | 1.799008 / 1.492716 (0.306291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288003 / 0.018006 (0.269996) | 0.496526 / 0.000490 (0.496036) | 0.010923 / 0.000200 (0.010723) | 0.000159 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033948 / 0.037411 (-0.003463) | 0.142343 / 0.014526 (0.127817) | 0.143862 / 0.176557 (-0.032695) | 0.202655 / 0.737135 (-0.534480) | 0.151177 / 0.296338 (-0.145162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508003 / 0.215209 (0.292794) | 5.320394 / 2.077655 (3.242740) | 2.409854 / 1.504120 (0.905734) | 2.190656 / 1.541195 (0.649462) | 2.272171 / 1.468490 (0.803681) | 0.809492 / 4.584777 (-3.775285) | 4.554412 / 3.745712 (0.808699) | 4.413643 / 5.269862 (-0.856218) | 2.374034 / 4.565676 (-2.191642) | 0.099458 / 0.424275 (-0.324817) | 0.014553 / 0.007607 (0.006946) | 0.613916 / 0.226044 (0.387871) | 6.121430 / 2.268929 (3.852502) | 2.945661 / 55.444624 (-52.498964) | 2.595247 / 6.876477 (-4.281230) | 2.734047 / 2.142072 (0.591975) | 0.952217 / 4.805227 (-3.853010) | 0.196933 / 6.500664 (-6.303731) | 0.073391 / 0.075469 (-0.002078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475666 / 1.841788 (-0.366122) | 18.564281 / 8.074308 (10.489973) | 16.865259 / 10.191392 (6.673867) | 0.166494 / 0.680424 (-0.513930) | 0.020655 / 0.534201 (-0.513546) | 0.495120 / 0.579283 (-0.084163) | 0.502602 / 0.434364 (0.068238) | 0.622448 / 0.540337 (0.082110) | 0.721036 / 1.386936 (-0.665900) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40c204c777793d64e8bb8ce357e9c07b3b303e41 \"CML watermark\")\n",
"Whoops mario you're off this week sorry. I'm taking the liberty to merge this one",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009079 / 0.011353 (-0.002274) | 0.005960 / 0.011008 (-0.005049) | 0.116530 / 0.038508 (0.078022) | 0.046649 / 0.023109 (0.023540) | 0.391906 / 0.275898 (0.116008) | 0.438892 / 0.323480 (0.115412) | 0.007134 / 0.007986 (-0.000851) | 0.004997 / 0.004328 (0.000668) | 0.085947 / 0.004250 (0.081697) | 0.059814 / 0.037052 (0.022762) | 0.396423 / 0.258489 (0.137934) | 0.455941 / 0.293841 (0.162100) | 0.042535 / 0.128546 (-0.086011) | 0.014667 / 0.075646 (-0.060980) | 0.402023 / 0.419271 (-0.017249) | 0.060381 / 0.043533 (0.016848) | 0.393829 / 0.255139 (0.138690) | 0.426557 / 0.283200 (0.143358) | 0.131519 / 0.141683 (-0.010163) | 1.758098 / 1.452155 (0.305943) | 1.848194 / 1.492716 (0.355478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236405 / 0.018006 (0.218399) | 0.611442 / 0.000490 (0.610952) | 0.005143 / 0.000200 (0.004943) | 0.000146 / 0.000054 (0.000092) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.182485 / 0.014526 (0.167959) | 0.183149 / 0.176557 (0.006592) | 0.293592 / 0.737135 (-0.443543) | 0.197137 / 0.296338 (-0.099202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475690 / 0.215209 (0.260481) | 4.757344 / 2.077655 (2.679690) | 2.184079 / 1.504120 (0.679959) | 1.956599 / 1.541195 (0.415404) | 2.043041 / 1.468490 
(0.574551) | 0.817602 / 4.584777 (-3.767175) | 6.432267 / 3.745712 (2.686555) | 5.999402 / 5.269862 (0.729541) | 3.095970 / 4.565676 (-1.469706) | 0.181589 / 0.424275 (-0.242686) | 0.023286 / 0.007607 (0.015679) | 1.090318 / 0.226044 (0.864274) | 7.919330 / 2.268929 (5.650401) | 2.702821 / 55.444624 (-52.741804) | 2.375442 / 6.876477 (-4.501034) | 2.543075 / 2.142072 (0.401003) | 1.011763 / 4.805227 (-3.793464) | 0.203676 / 6.500664 (-6.296988) | 0.080075 / 0.075469 (0.004606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.875420 / 1.841788 (0.033632) | 23.059278 / 8.074308 (14.984970) | 19.250807 / 10.191392 (9.059415) | 0.323678 / 0.680424 (-0.356746) | 0.028682 / 0.534201 (-0.505519) | 0.698231 / 0.579283 (0.118948) | 0.668129 / 0.434364 (0.233765) | 0.831218 / 0.540337 (0.290880) | 0.941191 / 1.386936 (-0.445745) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013122 / 0.011353 (0.001769) | 0.006123 / 0.011008 (-0.004886) | 0.090493 / 0.038508 (0.051985) | 0.070660 / 0.023109 (0.047551) | 0.413486 / 0.275898 (0.137588) | 0.450364 / 0.323480 (0.126884) | 0.010288 / 0.007986 (0.002302) | 0.006590 / 0.004328 (0.002261) | 0.087174 / 0.004250 (0.082923) | 0.077304 / 0.037052 (0.040252) | 0.428480 / 0.258489 (0.169991) | 0.459872 / 0.293841 (0.166032) | 0.060477 / 0.128546 (-0.068069) | 0.014859 / 0.075646 (-0.060788) | 0.103915 / 0.419271 (-0.315356) | 0.087466 / 0.043533 (0.043933) | 0.418644 / 0.255139 (0.163505) | 0.433409 / 0.283200 (0.150209) | 0.166716 / 0.141683 (0.025033) | 1.712068 / 1.452155 (0.259914) | 1.827869 / 1.492716 (0.335153) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.372491 / 0.018006 (0.354484) | 0.493426 / 0.000490 (0.492937) | 0.005497 / 0.000200 (0.005297) | 0.000129 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036531 / 0.037411 (-0.000880) | 0.142152 / 0.014526 (0.127626) | 0.148183 / 0.176557 (-0.028373) | 0.212918 / 0.737135 (-0.524217) | 0.154092 / 0.296338 (-0.142246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551733 / 0.215209 (0.336524) | 5.421498 / 2.077655 (3.343843) | 2.418848 / 1.504120 (0.914728) | 2.213185 / 1.541195 (0.671991) | 2.294881 / 1.468490 (0.826391) | 0.827031 / 4.584777 (-3.757746) | 6.365622 / 3.745712 (2.619910) | 4.927996 / 5.269862 (-0.341866) | 2.756133 / 4.565676 (-1.809544) | 0.101474 / 0.424275 (-0.322801) | 0.014523 / 0.007607 (0.006916) | 0.619082 / 0.226044 (0.393037) | 6.200132 / 2.268929 (3.931204) | 3.015590 / 55.444624 (-52.429034) | 2.711181 / 6.876477 (-4.165296) | 2.857157 / 2.142072 (0.715084) | 0.993329 / 4.805227 (-3.811898) | 0.203364 / 6.500664 (-6.297301) | 0.079167 / 0.075469 (0.003698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709881 / 1.841788 (-0.131907) | 24.867536 / 8.074308 (16.793228) | 21.755361 / 10.191392 (11.563969) | 0.295837 / 0.680424 (-0.384586) | 0.031934 / 0.534201 (-0.502267) | 0.709994 / 0.579283 (0.130711) | 0.779656 / 0.434364 (0.345293) | 0.780669 / 0.540337 (0.240331) | 0.712808 / 1.386936 (-0.674128) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf4a1951bdca7175adac9c8b85550e89dcceb6fa \"CML watermark\")\n"
] | 2023-04-24T10:34:03 | 2023-04-26T16:04:42 | 2023-04-26T15:54:44 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5784",
"html_url": "https://github.com/huggingface/datasets/pull/5784",
"diff_url": "https://github.com/huggingface/datasets/pull/5784.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5784.patch",
"merged_at": "2023-04-26T15:54:44"
} | When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing. To fix this, I `.get()` the subprocesses' async results even when the main process is stopped with e.g. `KeyboardInterrupt`; a minimal sketch of the idea follows.
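For illustration only, this is roughly the mechanism (the names are made up and do not match the actual `datasets` internals):
```python
import multiprocess

def run_with_tracebacks(pool, func, kwds_per_shard, timeout=0.05):
    # Sketch: fetch subprocess results even after an interrupt, so that a
    # crashed worker's traceback propagates to the main process.
    async_results = [pool.apply_async(func, kwds=kwds) for kwds in kwds_per_shard]
    try:
        return [res.get() for res in async_results]
    except KeyboardInterrupt:
        for res in async_results:
            try:
                res.get(timeout=timeout)  # re-raises a crashed worker's exception
            except multiprocess.TimeoutError:
                pass  # worker is hanging or was killed; nothing to re-raise
        raise
```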
The `.get()` calls use a timeout in case a subprocess is hanging or has crashed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5784/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5783/comments | https://api.github.com/repos/huggingface/datasets/issues/5783/events | https://github.com/huggingface/datasets/issues/5783 | 1,679,664,393 | I_kwDODunzps5kHaUJ | 5,783 | Offset overflow while doing regex on a text column | {
"login": "nishanthcgit",
"id": 5066268,
"node_id": "MDQ6VXNlcjUwNjYyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5066268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nishanthcgit",
"html_url": "https://github.com/nishanthcgit",
"followers_url": "https://api.github.com/users/nishanthcgit/followers",
"following_url": "https://api.github.com/users/nishanthcgit/following{/other_user}",
"gists_url": "https://api.github.com/users/nishanthcgit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nishanthcgit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nishanthcgit/subscriptions",
"organizations_url": "https://api.github.com/users/nishanthcgit/orgs",
"repos_url": "https://api.github.com/users/nishanthcgit/repos",
"events_url": "https://api.github.com/users/nishanthcgit/events{/privacy}",
"received_events_url": "https://api.github.com/users/nishanthcgit/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! This looks like an Arrow bug, but it can be avoided by reducing the `writer_batch_size`.\r\n\r\n(`ds = ds.map(get_text_caption, writer_batch_size=100)` in Colab runs without issues)\r\n"
] | 2023-04-22T19:12:03 | 2023-05-05T15:57:41 | null | NONE | null | null | null | ### Describe the bug
`ArrowInvalid: offset overflow while concatenating arrays`
Same error as [here](https://github.com/huggingface/datasets/issues/615)
### Steps to reproduce the bug
Steps to reproduce (the dataset is a few GB, so maybe try it in Colab):
```
import datasets
import re
ds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train')
def get_text_caption(example):
    regex_pattern = r'\s\d+x\d+|,\sLQ|,\sgrid|\.\w+$'
    example['text_caption'] = re.sub(regex_pattern, '', example['picture_text'])
    return example
ds = ds.map(get_text_caption)
```
I am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up.
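Per the suggestion in the comments, lowering `writer_batch_size` sidesteps the overflow; adapting the snippet above:
```python
# Smaller writer batches avoid growing an Arrow chunk past the 32-bit
# string-offset limit that triggers the overflow.
ds = ds.map(get_text_caption, writer_batch_size=100)
```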
### Expected behavior
Dataset should have a new column with processed text
### Environment info
Datasets version - 2.11.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5783/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5782/comments | https://api.github.com/repos/huggingface/datasets/issues/5782/events | https://github.com/huggingface/datasets/issues/5782 | 1,679,622,367 | I_kwDODunzps5kHQDf | 5,782 | Support for various audio-loading backends instead of always relying on SoundFile | {
"login": "BoringDonut",
"id": 129098876,
"node_id": "U_kgDOB7HkfA",
"avatar_url": "https://avatars.githubusercontent.com/u/129098876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BoringDonut",
"html_url": "https://github.com/BoringDonut",
"followers_url": "https://api.github.com/users/BoringDonut/followers",
"following_url": "https://api.github.com/users/BoringDonut/following{/other_user}",
"gists_url": "https://api.github.com/users/BoringDonut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BoringDonut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BoringDonut/subscriptions",
"organizations_url": "https://api.github.com/users/BoringDonut/orgs",
"repos_url": "https://api.github.com/users/BoringDonut/repos",
"events_url": "https://api.github.com/users/BoringDonut/events{/privacy}",
"received_events_url": "https://api.github.com/users/BoringDonut/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can use `set_transform`/`with_transform` to define a custom decoding for audio formats not supported by `soundfile`:\r\n```python\r\naudio_dataset_amr = Dataset.from_dict({\"audio\": [\"audio_samples/audio.amr\"]})\r\n\r\ndef decode_audio(batch):\r\n batch[\"audio\"] = [read_ffmpeg(audio_path) for audio_path in batch[\"audio\"]]\r\n return batch\r\n\r\naudio_dataset_amr.set_transform(decode_amr) \r\n```\r\n\r\nSupporting multiple backends is more work to maintain, but we could consider this if we get more requests such as this one.",
"Could it be put somewhere as an example tip or something?",
"Considering the number of times a custom decoding transform has been suggested as a solution, an example in the [docs](https://huggingface.co/docs/datasets/process#format-transform) would be nice.\r\n\r\ncc @stevhliu "
] | 2023-04-22T17:09:25 | 2023-05-10T20:23:04 | 2023-05-10T20:23:04 | NONE | null | null | null | ### Feature request
Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option.
### Motivation
- The SoundFile library, used in [features/audio.py](https://github.com/huggingface/datasets/blob/649d5a3315f9e7666713b6affe318ee00c7163a0/src/datasets/features/audio.py#L185), supports only a [limited number of audio formats](https://pysoundfile.readthedocs.io/en/latest/index.html?highlight=supported#soundfile.available_formats).
- However, current methods for creating audio datasets permit the inclusion of audio files in formats not supported by SoundFile.
- As a result, developers may potentially create a dataset they cannot read back.
In my most recent project, I dealt with phone call recordings in `.amr` or `.gsm` formats and was genuinely surprised when I couldn't read the dataset I had just packaged a minute prior. Nonetheless, I can still accurately read these files using the librosa library, which employs the audioread library that internally leverages ffmpeg to read such files.
Example:
```python
audio_dataset_amr = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]}).cast_column("audio", Audio())
audio_dataset_amr.save_to_disk("audio_dataset_amr")
audio_dataset_amr = Dataset.load_from_disk("audio_dataset_amr")
print(audio_dataset_amr[0])
```
Results in:
```
Traceback (most recent call last):
...
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f316323e4d0>: Format not recognised.
```
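For illustration, a rough sketch of what an ffmpeg-based fallback could look like, shelling out to the `ffmpeg` CLI and reading the decoded WAV with soundfile (the function name and arguments are made up, not a proposed API):
```python
import io
import subprocess

import soundfile as sf

def read_with_ffmpeg_fallback(path):
    # Decode any ffmpeg-supported format (e.g. .amr, .gsm) to WAV on stdout,
    # then hand the bytes to soundfile, which does support WAV.
    wav_bytes = subprocess.run(
        ["ffmpeg", "-i", path, "-f", "wav", "-"],
        check=True, capture_output=True,
    ).stdout
    return sf.read(io.BytesIO(wav_bytes))

array, sampling_rate = read_with_ffmpeg_fallback("audio_samples/audio.amr")
```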
While I acknowledge that support for these rare file types may not be a priority, I believe it's quite unfortunate that it's possible to create an unreadable dataset in this manner.
### Your contribution
I've created a [simple demo repository](https://github.com/BoringDonut/hf-datasets-ffmpeg-audio) that highlights the mentioned issue. It demonstrates how to create an .amr dataset that results in an error when attempting to read it just a few lines later.
Additionally, I've made a [fork with a rudimentary solution](https://github.com/BoringDonut/datasets/blob/fea73a8fbbc8876467c7e6422c9360546c6372d8/src/datasets/features/audio.py#L189) that utilizes ffmpeg to load files not supported by SoundFile.
Here you may see github actions fails to read `.amr` dataset using the version of the current dataset, but will work with the patched version:
- https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063785
- https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063829
As evident from the GitHub action above, this solution resolves the previously mentioned problem.
I'd be happy to create a proper pull request, provide runtime benchmarks and tests if you could offer some guidance on the following:
- Where should I incorporate the ffmpeg (or other backends) code? For example, should I create a new file or simply add a function within the Audio class?
- Is it feasible to pass the audio-loading function as an argument within the current architecture? This would be useful if I know in advance that I'll be reading files not supported by SoundFile.
A few more notes:
- In theory, it's possible to load audio using librosa/audioread since librosa is already expected to be installed. However, librosa [will soon discontinue audioread support](https://github.com/librosa/librosa/blob/aacb4c134002903ae56bbd4b4a330519a5abacc0/librosa/core/audio.py#L227). Moreover, using audioread on its own seems inconvenient because it requires a file [path as input](https://github.com/beetbox/audioread/blob/ff9535df934c48038af7be9617fdebb12078cc07/audioread/__init__.py#L108) and cannot work with bytes already loaded into memory or an open file descriptor (as mentioned in [librosa docs](https://librosa.org/doc/main/generated/librosa.load.html#librosa.load), only SoundFile backend supports an open file descriptor as an input). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5782/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5782/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5781/comments | https://api.github.com/repos/huggingface/datasets/issues/5781/events | https://github.com/huggingface/datasets/issues/5781 | 1,679,580,460 | I_kwDODunzps5kHF0s | 5,781 | Error using `load_datasets` | {
"login": "gjyoungjr",
"id": 61463108,
"node_id": "MDQ6VXNlcjYxNDYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/61463108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gjyoungjr",
"html_url": "https://github.com/gjyoungjr",
"followers_url": "https://api.github.com/users/gjyoungjr/followers",
"following_url": "https://api.github.com/users/gjyoungjr/following{/other_user}",
"gists_url": "https://api.github.com/users/gjyoungjr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gjyoungjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjyoungjr/subscriptions",
"organizations_url": "https://api.github.com/users/gjyoungjr/orgs",
"repos_url": "https://api.github.com/users/gjyoungjr/repos",
"events_url": "https://api.github.com/users/gjyoungjr/events{/privacy}",
"received_events_url": "https://api.github.com/users/gjyoungjr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It looks like an issue with your installation of scipy, can you try reinstalling it ?",
"Sorry for the late reply, but that worked @lhoestq . Thanks for the assist."
] | 2023-04-22T15:10:44 | 2023-05-02T23:41:25 | 2023-05-02T23:41:25 | NONE | null | null | null | ### Describe the bug
I tried to load a dataset using the `datasets` library in a conda Jupyter notebook and got the error below.
```
ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so
Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)
```
### Steps to reproduce the bug
Run the `load_datasets` function
### Expected behavior
I expected the dataset to be loaded into my notebook.
### Environment info
name: review_sense
channels:
- apple
- conda-forge
dependencies:
- python=3.8
- pip>=19.0
- jupyter
- tensorflow-deps
#- scikit-learn
#- scipy
- pandas
- pandas-datareader
- matplotlib
- pillow
- tqdm
- requests
- h5py
- pyyaml
- flask
- boto3
- ipykernel
- seaborn
- pip:
- tensorflow-macos==2.9
- tensorflow-metal==0.5.0
- bayesian-optimization
- gym
- kaggle
- huggingface_hub
- datasets
- numpy
- huggingface
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5781/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5780/comments | https://api.github.com/repos/huggingface/datasets/issues/5780/events | https://github.com/huggingface/datasets/issues/5780 | 1,679,367,149 | I_kwDODunzps5kGRvt | 5,780 | TypeError: 'NoneType' object does not support item assignment | {
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-04-22T06:22:43 | 2023-04-23T08:49:18 | 2023-04-23T08:49:18 | NONE | null | null | null | command:
```
def load_datasets(formats, data_dir=datadir, data_files=datafile, split=None, **kwargs):
    dataset = load_dataset(formats, data_dir=data_dir, data_files=data_files, split=split, streaming=True, **kwargs)
    return dataset

raw_datasets = DatasetDict()
raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", split=train_split)
raw_datasets["test"] = load_datasets("csv", args.datadir, "dev.csv", split=test_split)
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
```
error:
```
main()
File "peft_adalora_whisper_large_training.py", line 502, in main
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/datasets/dataset_dict.py", line 2015, in cast_column
info.features[column] = feature
TypeError: 'NoneType' object does not support item assignment
```
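A possible workaround (untested here) could be to declare the features up front instead of casting the streaming `DatasetDict` afterwards, since the streaming CSV dataset has no resolved features yet and `features` is a supported `load_dataset` argument. The column names below are hypothetical, and `datadir`/`train_split` are reused from the snippet above:
```python
from datasets import Audio, Features, Value, load_dataset

# Hypothetical schema; replace with the actual columns of the CSV files.
features = Features({"audio": Audio(sampling_rate=16000), "sentence": Value("string")})
dataset = load_dataset(
    "csv", data_dir=datadir, data_files="train.csv",
    split=train_split, streaming=True, features=features,
)
```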
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5780/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5779/comments | https://api.github.com/repos/huggingface/datasets/issues/5779/events | https://github.com/huggingface/datasets/pull/5779 | 1,678,669,865 | PR_kwDODunzps5O3sHp | 5,779 | Call fs.makedirs in save_to_disk | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007490 / 0.011353 (-0.003862) | 0.004957 / 0.011008 (-0.006051) | 0.096952 / 0.038508 (0.058444) | 0.034125 / 0.023109 (0.011016) | 0.301926 / 0.275898 (0.026028) | 0.330538 / 0.323480 (0.007058) | 0.005999 / 0.007986 (-0.001987) | 0.003948 / 0.004328 (-0.000380) | 0.073024 / 0.004250 (0.068773) | 0.050020 / 0.037052 (0.012967) | 0.299987 / 0.258489 (0.041498) | 0.336077 / 0.293841 (0.042237) | 0.035781 / 0.128546 (-0.092765) | 0.012159 / 0.075646 (-0.063487) | 0.333311 / 0.419271 (-0.085960) | 0.059925 / 0.043533 (0.016392) | 0.297772 / 0.255139 (0.042633) | 0.313447 / 0.283200 (0.030247) | 0.100991 / 0.141683 (-0.040692) | 1.472182 / 1.452155 (0.020027) | 1.553010 / 1.492716 (0.060294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214222 / 0.018006 (0.196216) | 0.441579 / 0.000490 (0.441090) | 0.001030 / 0.000200 (0.000830) | 0.000194 / 0.000054 (0.000140) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026149 / 0.037411 (-0.011262) | 0.107324 / 0.014526 (0.092798) | 0.113390 / 0.176557 (-0.063167) | 0.170282 / 0.737135 (-0.566854) | 0.120601 / 0.296338 (-0.175737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411795 / 0.215209 (0.196585) | 4.091412 / 2.077655 (2.013757) | 1.819597 / 1.504120 (0.315477) | 1.623413 / 1.541195 (0.082218) | 1.658959 / 1.468490 
(0.190469) | 0.697671 / 4.584777 (-3.887106) | 3.868855 / 3.745712 (0.123143) | 3.220448 / 5.269862 (-2.049414) | 1.796472 / 4.565676 (-2.769204) | 0.085817 / 0.424275 (-0.338458) | 0.012422 / 0.007607 (0.004815) | 0.520302 / 0.226044 (0.294258) | 5.062477 / 2.268929 (2.793548) | 2.275065 / 55.444624 (-53.169560) | 1.936717 / 6.876477 (-4.939759) | 2.069924 / 2.142072 (-0.072148) | 0.838964 / 4.805227 (-3.966264) | 0.170632 / 6.500664 (-6.330032) | 0.066011 / 0.075469 (-0.009458) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190673 / 1.841788 (-0.651114) | 14.679478 / 8.074308 (6.605169) | 14.099743 / 10.191392 (3.908351) | 0.142556 / 0.680424 (-0.537868) | 0.017601 / 0.534201 (-0.516600) | 0.421301 / 0.579283 (-0.157982) | 0.418035 / 0.434364 (-0.016329) | 0.503799 / 0.540337 (-0.036539) | 0.588809 / 1.386936 (-0.798127) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007556 / 0.011353 (-0.003797) | 0.005283 / 0.011008 (-0.005725) | 0.075616 / 0.038508 (0.037107) | 0.034127 / 0.023109 (0.011018) | 0.345145 / 0.275898 (0.069247) | 0.377490 / 0.323480 (0.054010) | 0.006532 / 0.007986 (-0.001454) | 0.004145 / 0.004328 (-0.000183) | 0.074724 / 0.004250 (0.070473) | 0.048658 / 0.037052 (0.011605) | 0.339989 / 0.258489 (0.081500) | 0.398240 / 0.293841 (0.104399) | 0.037433 / 0.128546 (-0.091114) | 0.012410 / 0.075646 (-0.063237) | 0.088110 / 0.419271 (-0.331162) | 0.050635 / 0.043533 (0.007103) | 0.351878 / 0.255139 (0.096739) | 0.365707 / 0.283200 (0.082508) | 0.104342 / 0.141683 (-0.037341) | 1.438009 / 1.452155 (-0.014145) | 1.533616 / 1.492716 (0.040900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225570 / 0.018006 (0.207563) | 0.442482 / 0.000490 (0.441992) | 0.000402 / 0.000200 (0.000202) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030348 / 0.037411 (-0.007063) | 0.111402 / 0.014526 (0.096877) | 0.123365 / 0.176557 (-0.053192) | 0.175604 / 0.737135 (-0.561531) | 0.128458 / 0.296338 (-0.167881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426054 / 0.215209 (0.210845) | 4.255050 / 2.077655 (2.177395) | 2.039568 / 1.504120 (0.535448) | 1.856842 / 1.541195 (0.315647) | 1.923792 / 1.468490 (0.455301) | 0.701023 / 4.584777 (-3.883754) | 3.746632 / 3.745712 (0.000920) | 2.055563 / 5.269862 (-3.214298) | 1.308068 / 4.565676 (-3.257608) | 0.085524 / 0.424275 (-0.338751) | 0.012103 / 0.007607 (0.004496) | 0.522929 / 0.226044 (0.296885) | 5.258133 / 2.268929 (2.989205) | 2.458440 / 55.444624 (-52.986185) | 2.141681 / 6.876477 (-4.734796) | 2.258667 / 2.142072 (0.116595) | 0.842533 / 4.805227 (-3.962694) | 0.168089 / 6.500664 (-6.332575) | 0.063707 / 0.075469 (-0.011762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312252 / 1.841788 (-0.529536) | 14.939185 / 8.074308 (6.864877) | 14.479845 / 10.191392 (4.288453) | 0.162557 / 0.680424 (-0.517867) | 0.017660 / 0.534201 (-0.516541) | 0.423261 / 0.579283 (-0.156023) | 0.417693 / 0.434364 (-0.016671) | 0.495440 / 0.540337 (-0.044897) | 0.589932 / 1.386936 (-0.797004) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4e3c86574155961097b367d5cddda5bd13c42b09 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008796 / 0.011353 (-0.002557) | 0.005828 / 0.011008 (-0.005180) | 0.118629 / 0.038508 (0.080121) | 0.042435 / 0.023109 (0.019326) | 0.383780 / 0.275898 (0.107882) | 0.420344 / 0.323480 (0.096864) | 0.006855 / 0.007986 (-0.001130) | 0.006290 / 0.004328 (0.001962) | 0.087160 / 0.004250 (0.082910) | 0.057568 / 0.037052 (0.020516) | 0.378761 / 0.258489 (0.120272) | 0.426496 / 0.293841 (0.132655) | 0.041772 / 0.128546 (-0.086774) | 0.014226 / 0.075646 (-0.061420) | 0.400097 / 0.419271 (-0.019174) | 0.060402 / 0.043533 (0.016870) | 0.381955 / 0.255139 (0.126816) | 0.399110 / 0.283200 (0.115911) | 0.124608 / 0.141683 (-0.017075) | 1.737856 / 1.452155 (0.285702) | 1.829034 / 1.492716 (0.336318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219941 / 0.018006 (0.201934) | 0.497156 / 0.000490 (0.496666) | 0.005094 / 0.000200 (0.004894) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032144 / 0.037411 (-0.005268) | 0.131782 / 0.014526 (0.117256) | 0.141543 / 0.176557 (-0.035014) | 0.211419 / 0.737135 (-0.525716) | 0.147338 / 0.296338 (-0.149001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478345 / 0.215209 (0.263136) | 4.749506 / 2.077655 (2.671851) | 2.195794 / 1.504120 (0.691674) | 1.978126 / 1.541195 (0.436932) | 2.059941 / 1.468490 
(0.591451) | 0.821959 / 4.584777 (-3.762818) | 5.737479 / 3.745712 (1.991767) | 2.507125 / 5.269862 (-2.762737) | 2.051772 / 4.565676 (-2.513905) | 0.100619 / 0.424275 (-0.323656) | 0.014437 / 0.007607 (0.006830) | 0.599484 / 0.226044 (0.373440) | 5.977579 / 2.268929 (3.708651) | 2.708143 / 55.444624 (-52.736482) | 2.320279 / 6.876477 (-4.556198) | 2.510172 / 2.142072 (0.368100) | 1.006279 / 4.805227 (-3.798948) | 0.199812 / 6.500664 (-6.300853) | 0.077967 / 0.075469 (0.002498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510171 / 1.841788 (-0.331616) | 21.099446 / 8.074308 (13.025138) | 17.634225 / 10.191392 (7.442833) | 0.223506 / 0.680424 (-0.456918) | 0.023845 / 0.534201 (-0.510356) | 0.613489 / 0.579283 (0.034206) | 0.685735 / 0.434364 (0.251371) | 0.652485 / 0.540337 (0.112148) | 0.734756 / 1.386936 (-0.652180) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008444 / 0.011353 (-0.002909) | 0.005789 / 0.011008 (-0.005220) | 0.088297 / 0.038508 (0.049789) | 0.040847 / 0.023109 (0.017737) | 0.411748 / 0.275898 (0.135850) | 0.452320 / 0.323480 (0.128841) | 0.006689 / 0.007986 (-0.001296) | 0.006029 / 0.004328 (0.001701) | 0.086080 / 0.004250 (0.081830) | 0.053310 / 0.037052 (0.016257) | 0.402568 / 0.258489 (0.144079) | 0.459047 / 0.293841 (0.165206) | 0.041203 / 0.128546 (-0.087343) | 0.014216 / 0.075646 (-0.061431) | 0.102729 / 0.419271 (-0.316543) | 0.057170 / 0.043533 (0.013637) | 0.407137 / 0.255139 (0.151998) | 0.429703 / 0.283200 (0.146503) | 0.123528 / 0.141683 (-0.018155) | 1.690026 / 1.452155 (0.237872) | 1.797793 / 1.492716 (0.305077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264581 / 0.018006 (0.246575) | 0.498981 / 0.000490 (0.498492) | 0.000462 / 0.000200 (0.000262) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034613 / 0.037411 (-0.002798) | 0.136596 / 0.014526 (0.122070) | 0.142183 / 0.176557 (-0.034374) | 0.201816 / 0.737135 (-0.535320) | 0.148843 / 0.296338 (-0.147496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506708 / 0.215209 (0.291499) | 5.042829 / 2.077655 (2.965175) | 2.448414 / 1.504120 (0.944295) | 2.213251 / 1.541195 (0.672056) | 2.255805 / 1.468490 (0.787315) | 0.829929 / 4.584777 (-3.754848) | 5.145717 / 3.745712 (1.400004) | 2.493947 / 5.269862 (-2.775915) | 1.676171 / 4.565676 (-2.889506) | 0.102097 / 0.424275 (-0.322178) | 0.014545 / 0.007607 (0.006938) | 0.635473 / 0.226044 (0.409429) | 6.306767 / 2.268929 (4.037839) | 3.050284 / 55.444624 (-52.394341) | 2.653175 / 6.876477 (-4.223302) | 2.850569 / 2.142072 (0.708496) | 1.355280 / 4.805227 (-3.449947) | 0.248112 / 6.500664 (-6.252552) | 0.091993 / 0.075469 (0.016524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.837509 / 1.841788 (-0.004279) | 21.268838 / 8.074308 (13.194530) | 17.338053 / 10.191392 (7.146660) | 0.232263 / 0.680424 (-0.448161) | 0.029093 / 0.534201 (-0.505108) | 0.651056 / 0.579283 (0.071773) | 0.617623 / 0.434364 (0.183259) | 0.773921 / 0.540337 (0.233584) | 0.705118 / 1.386936 (-0.681818) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35846fd54fa16aa72ff344d15c98b5e08c5effe0 \"CML watermark\")\n"
] | 2023-04-21T15:04:28 | 2023-04-26T12:20:01 | 2023-04-26T12:11:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5779",
"html_url": "https://github.com/huggingface/datasets/pull/5779",
"diff_url": "https://github.com/huggingface/datasets/pull/5779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5779.patch",
"merged_at": "2023-04-26T12:11:15"
} | We need to call `fs.makedirs` when saving a dataset using `save_to_disk`, because some filesystem implementations have actual directories, while others (e.g. S3) don't.
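A minimal sketch of the kind of call this adds (illustrative only — `fs` and the destination path are hypothetical stand-ins for what `save_to_disk` derives from its arguments):

```python
import fsspec

# LocalFileSystem is used here only so the snippet runs anywhere; with
# storage_options, `fs` would instead be e.g. an SFTP or S3 filesystem.
fs = fsspec.filesystem("file")
dataset_split_path = "/tmp/my_dataset/train"  # hypothetical destination
fs.makedirs(dataset_split_path, exist_ok=True)  # no-op if it already exists
```

On object stores like S3 this is effectively a no-op, which is why the missing call only bites on filesystems with real directories (local, SFTP, ...).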
Close https://github.com/huggingface/datasets/issues/5775 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5779/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5778/comments | https://api.github.com/repos/huggingface/datasets/issues/5778/events | https://github.com/huggingface/datasets/issues/5778 | 1,678,125,951 | I_kwDODunzps5kBit_ | 5,778 | Schrödinger's dataset_dict | {
"login": "liujuncn",
"id": 902005,
"node_id": "MDQ6VXNlcjkwMjAwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/902005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liujuncn",
"html_url": "https://github.com/liujuncn",
"followers_url": "https://api.github.com/users/liujuncn/followers",
"following_url": "https://api.github.com/users/liujuncn/following{/other_user}",
"gists_url": "https://api.github.com/users/liujuncn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liujuncn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liujuncn/subscriptions",
"organizations_url": "https://api.github.com/users/liujuncn/orgs",
"repos_url": "https://api.github.com/users/liujuncn/repos",
"events_url": "https://api.github.com/users/liujuncn/events{/privacy}",
"received_events_url": "https://api.github.com/users/liujuncn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Passing `data_files=\"path/test.json\"` is equivalent to `data_files={\"train\": [\"path/test.json\"]}`, that's why you end up with a train split. If you don't pass `data_files=`, then split names are inferred from the data files names"
] | 2023-04-21T08:38:12 | 2023-07-24T15:15:14 | 2023-07-24T15:15:14 | NONE | null | null | null | ### Describe the bug
If you use load_dataset('json', data_files="path/test.json"), it will return DatasetDict({train:...}).
And if you use load_dataset("path"), it will return DatasetDict({test:...}).
Why can't the output behavior be unified?
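A minimal illustration of the two call patterns (a sketch — it assumes a local directory `path/` containing only `test.json`):

```python
from datasets import load_dataset

# Explicit data_files: files go to the default "train" split, i.e. this is
# equivalent to data_files={"train": ["path/test.json"]}.
ds_a = load_dataset("json", data_files="path/test.json")
print(ds_a)  # DatasetDict({'train': ...})

# Directory loading: split names are inferred from the file names,
# so "test.json" becomes a "test" split.
ds_b = load_dataset("path")
print(ds_b)  # DatasetDict({'test': ...})
```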
### Steps to reproduce the bug
As described above.
### Expected behavior
Consistent, predictable output.
### Environment info
'2.11.0' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5778/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5777/comments | https://api.github.com/repos/huggingface/datasets/issues/5777/events | https://github.com/huggingface/datasets/issues/5777 | 1,677,655,969 | I_kwDODunzps5j_v-h | 5,777 | datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory | {
"login": "jason-brian-anderson",
"id": 34688597,
"node_id": "MDQ6VXNlcjM0Njg4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jason-brian-anderson",
"html_url": "https://github.com/jason-brian-anderson",
"followers_url": "https://api.github.com/users/jason-brian-anderson/followers",
"following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}",
"gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions",
"organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs",
"repos_url": "https://api.github.com/users/jason-brian-anderson/repos",
"events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")",
"Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet](https://github.com/github/CodeSearchNet) repo has been archived (Apr 11, 2023) and their source data files are no longer accessible in their S3: e.g. https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip gives 403 Forbidden error. See:\r\n- https://huggingface.co/datasets/code_search_net/discussions/3\r\n\r\nWe have contacted one of the authors of the dataset to find a solution. I'll keep you informed.\r\n\r\nCC: @hamelsmu",
"cc: @julianeagu",
"This issue is fixed because we are hosting the CodeSearchNet data files in the Hugging Face Hub. See: https://huggingface.co/datasets/code_search_net/discussions/7",
"I learned that @mallamanis has uploaded the dataset [here as well](https://zenodo.org/record/7908468) ",
"Thanks @hamelsmu for the Zenodo link. I am adding it to the dataset card on the Hugging Face Hub, so that the community knows about this \"official\" source data. I guess this link is not well known, because some community members already hosted an \"unofficial\" version of the data on Zenodo as well: https://zenodo.org/record/7857872\r\n\r\n"
] | 2023-04-21T02:08:07 | 2023-06-05T05:49:52 | 2023-05-11T11:51:56 | NONE | null | null | null | ### Describe the bug
While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), I noticed an error while initially downloading the python dataset used in the examples.
The [Colab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb#scrollTo=hGb69Yo3eV8S)
```
from datasets import load_dataset
import os
os.environ["HF_DATASETS_CACHE"] = "/workspace"
# This can take a few minutes to load, so grab a coffee or tea while you wait!
raw_datasets = load_dataset("code_search_net", "python")
```
yields:
```
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:524, in xlistdir(path, use_auth_token)
522 main_hop, *rest_hops = _as_str(path).split("::")
523 if is_local_path(main_hop):
--> 524 return os.listdir(path)
525 else:
526 # globbing inside a zip in a private repo requires authentication
527 if not rest_hops and (main_hop.startswith("http://") or main_hop.startswith("https://")):
NotADirectoryError: [Errno 20] Not a directory: '/workspace/downloads/25ceeb4c25ab737d688bd56ea92bfbb1f199fe572470456cf2d675479f342ac7/python/final/jsonl/train'
```
I was able to reproduce this error both in the Colab and on my own pytorch/pytorch container pulled from the Docker Hub official pytorch image, so I think it may be a server-side thing.
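For anyone blocked by this, the workaround from the comments above — loading a community-hosted mirror — is a one-liner (the mirror is unofficial and may change):

```python
from datasets import load_dataset

# Unofficial community mirror mentioned in the discussion; not the original source.
raw_datasets = load_dataset("espejelomar/code_search_net_python_10000_examples", "python")
```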
### Steps to reproduce the bug
Steps to reproduce the issue:
1. run `raw_datasets = load_dataset("code_search_net", "python")`
### Expected behavior
The code should not raise an exception during the dataset download.
### Environment info
I tried the default HF_DATASETS_CACHE both on Colab and in my local container. I then pointed HF_DATASETS_CACHE to large-capacity local storage, and the problem was consistent across all 3 scenarios. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5777/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5776/comments | https://api.github.com/repos/huggingface/datasets/issues/5776/events | https://github.com/huggingface/datasets/issues/5776 | 1,677,116,100 | I_kwDODunzps5j9sLE | 5,776 | Use Pandas' `read_json` in the JSON builder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-20T17:15:49 | 2023-04-20T17:15:49 | null | CONTRIBUTOR | null | null | null | Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725).
In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid downgrading decoding performance in scenarios where Pandas 2.0 is not installed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5776/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5775/comments | https://api.github.com/repos/huggingface/datasets/issues/5775/events | https://github.com/huggingface/datasets/issues/5775 | 1,677,089,901 | I_kwDODunzps5j9lxt | 5,775 | ArrowDataset.save_to_disk lost some logic of remote | {
"login": "Zoupers",
"id": 29817738,
"node_id": "MDQ6VXNlcjI5ODE3NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/29817738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zoupers",
"html_url": "https://github.com/Zoupers",
"followers_url": "https://api.github.com/users/Zoupers/followers",
"following_url": "https://api.github.com/users/Zoupers/following{/other_user}",
"gists_url": "https://api.github.com/users/Zoupers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zoupers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zoupers/subscriptions",
"organizations_url": "https://api.github.com/users/Zoupers/orgs",
"repos_url": "https://api.github.com/users/Zoupers/repos",
"events_url": "https://api.github.com/users/Zoupers/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zoupers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"We just fixed this on `main` and will do a new release soon :)"
] | 2023-04-20T16:58:01 | 2023-04-26T12:11:36 | 2023-04-26T12:11:17 | NONE | null | null | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371
Here is the bug: when saving a `DatasetDict` whose items look like `[('train', Dataset({features: ..., num_rows: ...}))]`, there is no guarantee that a directory named `train` exists under `dataset_dict_path`.
### Steps to reproduce the bug
1. Mock a `DatasetDict` with items like those described above.
2. Use `save_to_disk` with `storage_options`; you can use a local SFTP server. The code may look like this:
```python
from datasets import load_dataset
dataset = load_dataset(...)
dataset.save_to_disk('sftp:///tmp', storage_options={'host': 'localhost', 'username': 'admin'})
```
I suppose you can reproduce the bug with these steps.
### Expected behavior
It should create the folder if it does not exist, just like we do locally.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-6.2.10-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5775/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5774/comments | https://api.github.com/repos/huggingface/datasets/issues/5774/events | https://github.com/huggingface/datasets/pull/5774 | 1,676,716,662 | PR_kwDODunzps5OxIMe | 5,774 | Fix style | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 
(0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d34c7968ea1a3fe7d4fa7cdf23673e0354f69ac \"CML watermark\")\n"
] | 2023-04-20T13:21:32 | 2023-04-20T13:34:26 | 2023-04-20T13:24:28 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5774",
"html_url": "https://github.com/huggingface/datasets/pull/5774",
"diff_url": "https://github.com/huggingface/datasets/pull/5774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5774.patch",
"merged_at": "2023-04-20T13:24:28"
} | Fix C419 lint issues (unnecessary list comprehension passed to `any()`/`all()`) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5774/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5773/comments | https://api.github.com/repos/huggingface/datasets/issues/5773/events | https://github.com/huggingface/datasets/issues/5773 | 1,675,984,633 | I_kwDODunzps5j5X75 | 5,773 | train_dataset does not implement __len__ | {
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?",
"this is a detail error info from transformersοΌ\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 177, in <module>\r\n whisper_finetune(traindir,devdir,outdir)\r\n File \"finetune.py\", line 161, in whisper_finetune\r\n trainer = Seq2SeqTrainer(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer_seq2seq.py\", line 56, in __init__\r\n super().__init__(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py\", line 567, in __init__\r\n raise ValueError(\r\nValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.\r\n```\r\n",
"How did you create `train_dataset`? The `datasets` library does not appear in your stack trace.\r\n\r\nWe need more information in order to reproduce the issue...",
"```\r\ndef asr_dataset(traindir,devdir):\r\n we_voice = IterableDatasetDict()\r\n #we_voice[\"train\"] = load_from_disk(traindir,streaming=True)\r\n #we_voice[\"test\"]= load_from_disk(devdir,streaming=True)\r\n we_voice[\"train\"] = load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\",streaming=True)\r\n #print(load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\"))\r\n we_voice[\"test\"] = load_dataset(\"csv\",data_files=os.path.join(devdir,\"dev.csv\"), split=\"train\",streaming=True)\r\n we_voice = we_voice.remove_columns([\"id\"])\r\n we_voice = we_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n return we_voice\r\n\r\n```",
"As you are using iterable datasets (`streaming=True`), their length is not defined.\r\n\r\nYou should:\r\n- Either use non-iterable datasets, which have a defined length: use `DatasetDict` and not passing `streaming=True`\r\n- Or pass `args.max_steps` to the `Trainer`",
"I don't know how to give a reasonable args.max_steps...........................",
"Then you should not use streaming.",
"@albertvillanova I think @v-yunbin, myself, and others might be slightly confused about max_steps and how it differs from num_train_epochs.",
"@lkurlandski A **step** is referring to optimizer's update after back propagation, and it's associated with a batch of data. For example, if a dataset contains 64 examples and you have an overall batch size of 4, then an epoch will have 64/4=16 batches. Therefore, in one epoch, you will have 16 optimizer **steps**."
] | 2023-04-20T04:37:05 | 2023-07-19T20:33:13 | null | NONE | null | null | null | When training on data preprocessed with the datasets library, I get the following error, which prevents me from setting the number of epochs:
`ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5773/timeline | null | null | false |
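Distilling the `max_steps` arithmetic from the exchange above (issue 5773) into a helper — a sketch that assumes the total example count is known out of band, since a streaming dataset has no `__len__`:

```python
import math

def compute_max_steps(num_examples: int, batch_size: int, num_epochs: int) -> int:
    # One optimizer step per batch: steps per epoch = ceil(examples / batch size).
    return math.ceil(num_examples / batch_size) * num_epochs

# 64 examples with batch size 4 -> 16 steps per epoch, as in the comment above.
print(compute_max_steps(64, 4, 3))  # 48
```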
https://api.github.com/repos/huggingface/datasets/issues/5772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5772/comments | https://api.github.com/repos/huggingface/datasets/issues/5772/events | https://github.com/huggingface/datasets/pull/5772 | 1,675,033,510 | PR_kwDODunzps5OreXV | 5,772 | Fix JSON builder when missing keys in first row | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009262 / 0.011353 (-0.002091) | 0.006157 / 0.011008 (-0.004851) | 0.125960 / 0.038508 (0.087451) | 0.036213 / 0.023109 (0.013104) | 0.399331 / 0.275898 (0.123433) | 0.453597 / 0.323480 (0.130117) | 0.006990 / 0.007986 (-0.000995) | 0.007320 / 0.004328 (0.002991) | 0.100321 / 0.004250 (0.096070) | 0.048870 / 0.037052 (0.011818) | 0.396284 / 0.258489 (0.137795) | 0.475619 / 0.293841 (0.181778) | 0.052329 / 0.128546 (-0.076217) | 0.019564 / 0.075646 (-0.056083) | 0.430942 / 0.419271 (0.011670) | 0.063224 / 0.043533 (0.019692) | 0.391717 / 0.255139 (0.136578) | 0.448342 / 0.283200 (0.165142) | 0.114055 / 0.141683 (-0.027628) | 1.793204 / 1.452155 (0.341049) | 1.895151 / 1.492716 (0.402435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283699 / 0.018006 (0.265693) | 0.597194 / 0.000490 (0.596704) | 0.007143 / 0.000200 (0.006944) | 0.000602 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034761 / 0.037411 (-0.002651) | 0.124555 / 0.014526 (0.110030) | 0.149126 / 0.176557 (-0.027430) | 0.220335 / 0.737135 (-0.516801) | 0.153109 / 0.296338 (-0.143229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620210 / 0.215209 (0.405001) | 6.229937 / 2.077655 (4.152282) | 2.615203 / 1.504120 (1.111083) | 2.239337 / 1.541195 (0.698143) | 2.262138 / 1.468490 
(0.793648) | 1.196498 / 4.584777 (-3.388279) | 5.609932 / 3.745712 (1.864220) | 3.031347 / 5.269862 (-2.238515) | 2.025530 / 4.565676 (-2.540146) | 0.139828 / 0.424275 (-0.284447) | 0.015476 / 0.007607 (0.007869) | 0.768964 / 0.226044 (0.542920) | 7.728677 / 2.268929 (5.459748) | 3.336407 / 55.444624 (-52.108217) | 2.700055 / 6.876477 (-4.176422) | 2.765223 / 2.142072 (0.623151) | 1.409073 / 4.805227 (-3.396155) | 0.246849 / 6.500664 (-6.253815) | 0.081231 / 0.075469 (0.005762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.593836 / 1.841788 (-0.247952) | 18.020525 / 8.074308 (9.946216) | 21.766822 / 10.191392 (11.575430) | 0.258615 / 0.680424 (-0.421809) | 0.026895 / 0.534201 (-0.507306) | 0.529823 / 0.579283 (-0.049460) | 0.623470 / 0.434364 (0.189106) | 0.628171 / 0.540337 (0.087833) | 0.745249 / 1.386936 (-0.641687) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008624 / 0.011353 (-0.002729) | 0.006317 / 0.011008 (-0.004691) | 0.097315 / 0.038508 (0.058807) | 0.035217 / 0.023109 (0.012108) | 0.440197 / 0.275898 (0.164299) | 0.473863 / 0.323480 (0.150383) | 0.006722 / 0.007986 (-0.001264) | 0.006444 / 0.004328 (0.002116) | 0.102056 / 0.004250 (0.097806) | 0.047142 / 0.037052 (0.010089) | 0.452476 / 0.258489 (0.193986) | 0.487619 / 0.293841 (0.193778) | 0.052456 / 0.128546 (-0.076090) | 0.018735 / 0.075646 (-0.056911) | 0.114656 / 0.419271 (-0.304616) | 0.062577 / 0.043533 (0.019044) | 0.444471 / 0.255139 (0.189332) | 0.494264 / 0.283200 (0.211065) | 0.117112 / 0.141683 (-0.024571) | 1.848965 / 1.452155 (0.396810) | 1.984008 / 1.492716 (0.491292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290494 / 0.018006 (0.272488) | 0.588415 / 0.000490 (0.587925) | 0.000459 / 0.000200 (0.000259) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004538) | 0.131139 / 0.014526 (0.116614) | 0.140268 / 0.176557 (-0.036289) | 0.204561 / 0.737135 (-0.532574) | 0.147443 / 0.296338 (-0.148895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636899 / 0.215209 (0.421690) | 6.236139 / 2.077655 (4.158484) | 2.801468 / 1.504120 (1.297348) | 2.398808 / 1.541195 (0.857613) | 2.493150 / 1.468490 (1.024659) | 1.228845 / 4.584777 (-3.355932) | 5.675874 / 3.745712 (1.930162) | 3.084939 / 5.269862 (-2.184922) | 2.061310 / 4.565676 (-2.504367) | 0.142285 / 0.424275 (-0.281990) | 0.014972 / 0.007607 (0.007365) | 0.786599 / 0.226044 (0.560555) | 7.876036 / 2.268929 (5.607107) | 3.476136 / 55.444624 (-51.968489) | 2.847922 / 6.876477 (-4.028555) | 3.040326 / 2.142072 (0.898253) | 1.448538 / 4.805227 (-3.356690) | 0.257230 / 6.500664 (-6.243434) | 0.085137 / 0.075469 (0.009668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.668173 / 1.841788 (-0.173615) | 18.668520 / 8.074308 (10.594212) | 20.535542 / 10.191392 (10.344150) | 0.244580 / 0.680424 (-0.435844) | 0.026364 / 0.534201 (-0.507837) | 0.531753 / 0.579283 (-0.047530) | 0.616578 / 0.434364 (0.182214) | 0.618906 / 0.540337 (0.078569) | 0.738785 / 1.386936 (-0.648151) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7265cafa3103d77d6d52aa897088faefcd96659 \"CML watermark\")\n"
] | 2023-04-19T14:32:57 | 2023-04-21T06:45:13 | 2023-04-21T06:35:27 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5772",
"html_url": "https://github.com/huggingface/datasets/pull/5772",
"diff_url": "https://github.com/huggingface/datasets/pull/5772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5772.patch",
"merged_at": "2023-04-21T06:35:27"
} | Until now, the JSON builder only considered the keys present in the first element of the list:
- Either explicitly, by indexing the first element: `dataset[0].keys()`
- Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values"
This PR fixes the bug by considering the union of the keys present in all the rows.
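A minimal sketch of the before/after behavior, assuming a toy list of dicts with ragged keys (`rows` is illustrative, not the builder's internal variable):
```
import pyarrow as pa

rows = [{"a": 1}, {"a": 2, "b": 3}]

# Before: the schema is inferred from the first row only, so "b" is silently dropped.
table = pa.Table.from_pylist(rows)
print(table.column_names)  # ['a']

# After: take the union of the keys across all rows so no column is lost.
keys = set().union(*(row.keys() for row in rows))
table = pa.Table.from_pydict({k: [row.get(k) for row in rows] for k in sorted(keys)})
print(table.column_names)  # ['a', 'b']
```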
Fix #5726. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5772/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5771/comments | https://api.github.com/repos/huggingface/datasets/issues/5771/events | https://github.com/huggingface/datasets/issues/5771 | 1,674,828,380 | I_kwDODunzps5j09pc | 5,771 | Support cloud storage for loading datasets | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/5281"
] | 2023-04-19T12:43:53 | 2023-05-07T17:47:41 | 2023-05-07T17:47:41 | CONTRIBUTOR | null | null | null | ### Feature request
It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if similar functionality existed in `load_dataset`.
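A short sketch of the current asymmetry, assuming a recent `datasets` release where `load_from_disk` accepts `storage_options`; the S3 bucket path is hypothetical:
```
from datasets import load_from_disk

# Works today: load_from_disk understands fsspec-style cloud URIs.
ds = load_from_disk("s3://my-bucket/my-dataset", storage_options={"anon": False})

# Requested: an equivalent entry point in load_dataset, e.g. something like
# ds = load_dataset("parquet", data_files="s3://my-bucket/data/*.parquet")
```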
### Motivation
Motivation is pretty clear -- let users work with datasets located in the cloud.
### Your contribution
I can help implementing this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5771/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5770/comments | https://api.github.com/repos/huggingface/datasets/issues/5770/events | https://github.com/huggingface/datasets/pull/5770 | 1,673,581,555 | PR_kwDODunzps5OmntV | 5,770 | Add IterableDataset.from_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...",
"Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it can be more intuitive IMO :)",
"Thanks for reviewing! I moved the streaming behavior to IterableDataset.from_spark",
"Thanks Quentin! I'll flesh out the docs in a follow-up PR",
"Friendly ping @lhoestq ",
"Thanks @lhoestq ! I fixed the partition order thing and added more unit tests.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006165 / 0.011353 (-0.005188) | 0.004497 / 0.011008 (-0.006511) | 0.099142 / 0.038508 (0.060634) | 0.027479 / 0.023109 (0.004369) | 0.352491 / 0.275898 (0.076593) | 0.402993 / 0.323480 (0.079513) | 0.004885 / 0.007986 (-0.003100) | 0.003315 / 0.004328 (-0.001013) | 0.075787 / 0.004250 (0.071537) | 0.035320 / 0.037052 (-0.001732) | 0.368401 / 0.258489 (0.109912) | 0.409090 / 0.293841 (0.115249) | 0.030125 / 0.128546 (-0.098421) | 0.011670 / 0.075646 (-0.063976) | 0.324381 / 0.419271 (-0.094890) | 0.050815 / 0.043533 (0.007283) | 0.352598 / 0.255139 (0.097460) | 0.389189 / 0.283200 (0.105989) | 0.092873 / 0.141683 (-0.048810) | 1.485140 / 1.452155 (0.032986) | 1.545586 / 1.492716 (0.052869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199522 / 0.018006 (0.181516) | 0.404576 / 0.000490 (0.404087) | 0.003322 / 0.000200 (0.003122) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022945 / 0.037411 (-0.014466) | 0.095512 / 0.014526 (0.080987) | 0.103077 / 0.176557 (-0.073480) | 0.163918 / 0.737135 (-0.573217) | 0.105560 / 0.296338 (-0.190779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417360 / 0.215209 (0.202151) | 4.161693 / 2.077655 (2.084039) | 1.851941 / 1.504120 (0.347821) | 1.649872 / 1.541195 (0.108677) | 1.682099 / 1.468490 
(0.213609) | 0.693187 / 4.584777 (-3.891590) | 3.462528 / 3.745712 (-0.283184) | 1.839893 / 5.269862 (-3.429968) | 1.155945 / 4.565676 (-3.409731) | 0.082611 / 0.424275 (-0.341664) | 0.012076 / 0.007607 (0.004469) | 0.514325 / 0.226044 (0.288280) | 5.155052 / 2.268929 (2.886123) | 2.307280 / 55.444624 (-53.137345) | 1.966483 / 6.876477 (-4.909994) | 2.018892 / 2.142072 (-0.123181) | 0.803068 / 4.805227 (-4.002159) | 0.152213 / 6.500664 (-6.348451) | 0.066320 / 0.075469 (-0.009149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218578 / 1.841788 (-0.623209) | 13.563869 / 8.074308 (5.489561) | 13.954596 / 10.191392 (3.763204) | 0.151527 / 0.680424 (-0.528897) | 0.016655 / 0.534201 (-0.517546) | 0.380637 / 0.579283 (-0.198646) | 0.395854 / 0.434364 (-0.038509) | 0.459111 / 0.540337 (-0.081226) | 0.560219 / 1.386936 (-0.826717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006427 / 0.011353 (-0.004926) | 0.004728 / 0.011008 (-0.006280) | 0.080525 / 0.038508 (0.042017) | 0.027294 / 0.023109 (0.004185) | 0.414688 / 0.275898 (0.138790) | 0.449882 / 0.323480 (0.126402) | 0.004771 / 0.007986 (-0.003214) | 0.003402 / 0.004328 (-0.000926) | 0.078748 / 0.004250 (0.074497) | 0.037046 / 0.037052 (-0.000007) | 0.417398 / 0.258489 (0.158909) | 0.462921 / 0.293841 (0.169080) | 0.030364 / 0.128546 (-0.098182) | 0.011810 / 0.075646 (-0.063837) | 0.089787 / 0.419271 (-0.329485) | 0.039806 / 0.043533 (-0.003727) | 0.403401 / 0.255139 (0.148262) | 0.439477 / 0.283200 (0.156278) | 0.088431 / 0.141683 (-0.053252) | 1.534373 / 1.452155 (0.082219) | 1.592316 / 1.492716 (0.099600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217701 / 0.018006 (0.199695) | 0.384770 / 0.000490 (0.384280) | 0.000437 / 0.000200 (0.000237) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024952 / 0.037411 (-0.012459) | 0.098728 / 0.014526 (0.084202) | 0.106324 / 0.176557 (-0.070233) | 0.155484 / 0.737135 (-0.581651) | 0.109503 / 0.296338 (-0.186836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450639 / 0.215209 (0.235430) | 4.523110 / 2.077655 (2.445455) | 2.224810 / 1.504120 (0.720690) | 2.119516 / 1.541195 (0.578321) | 2.225192 / 1.468490 (0.756702) | 0.695397 / 4.584777 (-3.889380) | 3.433559 / 3.745712 (-0.312153) | 2.633127 / 5.269862 (-2.636735) | 1.448471 / 4.565676 (-3.117206) | 0.082262 / 0.424275 (-0.342013) | 0.012246 / 0.007607 (0.004639) | 0.561243 / 0.226044 (0.335199) | 5.652711 / 2.268929 (3.383782) | 2.689771 / 55.444624 (-52.754853) | 2.359512 / 6.876477 (-4.516965) | 2.471098 / 2.142072 (0.329026) | 0.802955 / 4.805227 (-4.002272) | 0.151142 / 6.500664 (-6.349522) | 0.067494 / 0.075469 (-0.007975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306879 / 1.841788 (-0.534909) | 14.030775 / 8.074308 (5.956467) | 12.917790 / 10.191392 (2.726398) | 0.141269 / 0.680424 (-0.539155) | 0.016264 / 0.534201 (-0.517937) | 0.411957 / 0.579283 (-0.167326) | 0.393235 / 0.434364 (-0.041129) | 0.505144 / 0.540337 (-0.035193) | 0.590660 / 1.386936 (-0.796276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7790ebd7072eafff755fb575b392f3efa74069e4 \"CML watermark\")\n"
] | 2023-04-18T17:47:53 | 2023-05-17T14:07:32 | 2023-05-17T14:00:38 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5770",
"html_url": "https://github.com/huggingface/datasets/pull/5770",
"diff_url": "https://github.com/huggingface/datasets/pull/5770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5770.patch",
"merged_at": "2023-05-17T14:00:38"
} | Follow-up from https://github.com/huggingface/datasets/pull/5701
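A hedged usage sketch of the API this PR adds (see also the related issue below); the toy DataFrame and local Spark session are assumptions for illustration:
```
from pyspark.sql import SparkSession
from datasets import IterableDataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("a", 1), ("b", 2)], ["text", "label"])

# Stream examples partition by partition instead of materializing the whole table.
ids = IterableDataset.from_spark(df)
for example in ids:
    print(example)  # {'text': 'a', 'label': 1} ...
```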
Related issue: https://github.com/huggingface/datasets/issues/5678 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5770/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5769/comments | https://api.github.com/repos/huggingface/datasets/issues/5769/events | https://github.com/huggingface/datasets/issues/5769 | 1,673,441,182 | I_kwDODunzps5jvq-e | 5,769 | Tiktoken tokenizers are not picklable | {
"login": "markovalexander",
"id": 22663468,
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markovalexander",
"html_url": "https://github.com/markovalexander",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?"
] | 2023-04-18T16:07:40 | 2023-05-04T18:55:57 | 2023-05-04T18:55:57 | NONE | null | null | null | ### Describe the bug
Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object`.
### Steps to reproduce the bug
```
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")

def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,
)
```
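A possible workaround sketch (not the library's own fix): build the encoder lazily inside the mapped function, so the unpicklable `tiktoken` object never has to cross process boundaries:
```
from datasets import load_dataset
import tiktoken

_enc = None  # one encoder per worker process, created on first use

def process(example):
    global _enc
    if _enc is None:
        _enc = tiktoken.get_encoding("gpt2")
    ids = _enc.encode(example['text'])
    ids.append(_enc.eot_token)
    return {'ids': ids, 'len': len(ids)}

dataset = load_dataset("stas/openwebtext-10k")
tokenized = dataset.map(process, remove_columns=['text'], num_proc=2)
```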
### Expected behavior
Starts processing the dataset.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5769/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5768/comments | https://api.github.com/repos/huggingface/datasets/issues/5768/events | https://github.com/huggingface/datasets/issues/5768 | 1,672,494,561 | I_kwDODunzps5jsD3h | 5,768 | load_dataset("squad") doesn't work in 2.7.1 and 2.10.1 | {
"login": "yaseen157",
"id": 57412770,
"node_id": "MDQ6VXNlcjU3NDEyNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaseen157",
"html_url": "https://github.com/yaseen157",
"followers_url": "https://api.github.com/users/yaseen157/followers",
"following_url": "https://api.github.com/users/yaseen157/following{/other_user}",
"gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions",
"organizations_url": "https://api.github.com/users/yaseen157/orgs",
"repos_url": "https://api.github.com/users/yaseen157/repos",
"events_url": "https://api.github.com/users/yaseen157/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaseen157/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?",
"I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```",
"I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|ββββββββββ| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. 
Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|βββββββββββββββββββββββββββββββββββββββββββ\r\nβββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|βββββββββββββββββββββββββββββββββββββββ\r\nβββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?",
"I'm back on linux machine 1 (login node) now. After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n",
"I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```",
"Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/",
"Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?",
"Thanks for your detailed feedback which for sure will be useful to other community members."
] | 2023-04-18T07:10:56 | 2023-04-20T10:27:23 | 2023-04-20T10:27:22 | NONE | null | null | null | ### Describe the bug
There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly.
This is not a problem with the "squad_v2" dataset, for example.
### Steps to reproduce the bug
cmd line
> $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
OR
Python IDE
> from datasets import load_dataset
> load_dataset("squad")
### Expected behavior
I expected either to see the output described in the installation docs ([https://huggingface.co/docs/datasets/installation]) from running the very same command on the command line, or at least output that does not raise Python's `TypeError`.
There is some funky behaviour in the dataset-builder portion of the codebase that means it is either trying to import the squad dataset from an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching, I did manage to get it to load the dataset once, but then couldn't repeat this.
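As the thread above eventually confirms, the cause was a local directory named after the dataset shadowing the Hub copy; a minimal illustration (the stray folder is hypothetical):
```
import os
from datasets import load_dataset

os.makedirs("squad", exist_ok=True)  # hypothetical stray local directory

# load_dataset gives priority to a local "squad" folder over the Hub dataset of the
# same name, which can surface as FileNotFoundError or the TypeError above:
# load_dataset("squad")

os.rmdir("squad")           # remove or rename the shadowing folder...
ds = load_dataset("squad")  # ...and the Hub dataset loads normally again
```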
### Environment info
datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5768/timeline | null | completed | false |