| Column | Type | Stats |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.27B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–4.51k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 (epoch ms) | 1,587B–1,655B |
| updated_at | int64 (epoch ms) | 1,587B–1,655B |
| closed_at | int64 (epoch ms) | 1,587B–1,655B |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 1 class |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/4507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4507/comments
https://api.github.com/repos/huggingface/datasets/issues/4507/events
https://github.com/huggingface/datasets/issues/4507
1,272,615,932
I_kwDODunzps5L2pP8
4,507
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,655,319,394,000
1,655,319,464,000
null
NONE
null
If the dataset does not need splits, i.e., no training and validation splits (it is more like a single table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair? To paraphrase the question: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a `list[Dataset]`? Many thanks for any help.
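A common workaround (not part of the original question, and assuming the loading script defines a `"train"` split; the script path below is a placeholder) is to request a specific split, which makes `load_dataset` return a `Dataset` directly:

```python
from datasets import Dataset, load_dataset

# Passing `split=` makes `load_dataset` return a single `Dataset`
# instead of a `DatasetDict`.
ds = load_dataset("path/to/loading_script", split="train")
assert isinstance(ds, Dataset)
```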
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4507/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4506/comments
https://api.github.com/repos/huggingface/datasets/issues/4506/events
https://github.com/huggingface/datasets/issues/4506
1,272,516,895
I_kwDODunzps5L2REf
4,506
Failure to hash (and cache) a `.map(...)` (almost always)
{ "login": "DrMatters", "id": 22641583, "node_id": "MDQ6VXNlcjIyNjQxNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrMatters", "html_url": "https://github.com/DrMatters", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "repos_url": "https://api.github.com/users/DrMatters/repos", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`" ]
1,655,313,091,000
1,655,315,434,000
null
NONE
null
## Describe the bug

Sometimes I get messages about not being able to hash a method:

`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`

The function looks like this:

```python
@staticmethod
def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example):
    speaker_id, dialogue = tuple(zip(*(example["dialogue"])))
    example["speaker_id"] = speaker_id
    example["dialogue"] = dialogue
    return example
```

This is the first step in my preprocessing pipeline, but sometimes the message about the hashing failure does not appear on the first step and only appears on a later one. This error sometimes causes a failure to use cached data, so all steps are re-run.

## Steps to reproduce the bug

```python
import copy
import datasets
from datasets import arrow_dataset


def main():
    dataset = datasets.load_dataset("blended_skill_talk")
    res = dataset.map(method)
    print(res)


def method(example: arrow_dataset.Example):
    example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance'])
    return example


if __name__ == '__main__':
    main()
```

Run with:

```
python -m reproduce_error
```

## Expected results

The dataset is mapped and cached correctly.

## Actual results

The code outputs this at some point:

`Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`

## Environment info

- `datasets` version: 2.3.1
- Platform: Ubuntu 20.04.3
- Python version: 3.9.12
- PyArrow version: 8.0.0
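As a quick diagnostic (not from the original report), `datasets` exposes the hasher used for fingerprinting, so one can check whether a transform hashes deterministically before relying on the cache; a minimal sketch, assuming `datasets.fingerprint.Hasher` is importable in the installed version:

```python
from datasets.fingerprint import Hasher

def method(example):
    return example

# If this raises, or changes between runs, `map` falls back to a random
# fingerprint and the cache is never reused for this transform.
print(Hasher.hash(method))
```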
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4506/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4505/comments
https://api.github.com/repos/huggingface/datasets/issues/4505/events
https://github.com/huggingface/datasets/pull/4505
1,272,477,226
PR_kwDODunzps45uH-o
4,505
Fix double dots in data files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)" ]
1,655,310,664,000
1,655,313,358,000
1,655,312,753,000
MEMBER
null
As mentioned in https://github.com/huggingface/transformers/pull/17715, `data_files` can't find a file if the path contains double dots (`/../`). This was introduced in https://github.com/huggingface/datasets/pull/4412 while trying to ignore hidden files and directories (i.e., those whose names start with a dot).

I fixed this and added a test.

cc @sgugger @ydshieh
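For illustration, a hypothetical helper showing the distinction the fix has to make (a sketch of the idea, not the actual patch): a path part starting with a dot is hidden, but the `.` and `..` pseudo-directories are not.

```python
from pathlib import PurePosixPath

def has_hidden_part(path: str) -> bool:
    # "." and ".." are path navigation, not hidden files/directories.
    return any(
        part.startswith(".") and part not in (".", "..")
        for part in PurePosixPath(path).parts
    )

assert has_hidden_part("data/.cache/file.csv")   # truly hidden directory
assert not has_hidden_part("data/../file.csv")   # double dots must be allowed
```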
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4505/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4505/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4505", "html_url": "https://github.com/huggingface/datasets/pull/4505", "diff_url": "https://github.com/huggingface/datasets/pull/4505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4505.patch", "merged_at": 1655312753000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4504/comments
https://api.github.com/repos/huggingface/datasets/issues/4504/events
https://github.com/huggingface/datasets/issues/4504
1,272,418,480
I_kwDODunzps5L15Cw
4,504
Can you please add the Stanford dog dataset?
{ "login": "dgrnd4", "id": 69434832, "node_id": "MDQ6VXNlcjY5NDM0ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/69434832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dgrnd4", "html_url": "https://github.com/dgrnd4", "followers_url": "https://api.github.com/users/dgrnd4/followers", "following_url": "https://api.github.com/users/dgrnd4/following{/other_user}", "gists_url": "https://api.github.com/users/dgrnd4/gists{/gist_id}", "starred_url": "https://api.github.com/users/dgrnd4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dgrnd4/subscriptions", "organizations_url": "https://api.github.com/users/dgrnd4/orgs", "repos_url": "https://api.github.com/users/dgrnd4/repos", "events_url": "https://api.github.com/users/dgrnd4/events{/privacy}", "received_events_url": "https://api.github.com/users/dgrnd4/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)", "@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n" ]
1,655,307,575,000
1,655,314,455,000
null
NONE
null
## Adding a Dataset

- **Name:** *Stanford dog dataset*
- **Description:** *The dataset has about 120 classes for a total of 20,580 images. You can find the dataset here: http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)*
- **Motivation:** *The dataset was built using images and annotations from ImageNet for the task of fine-grained image categorization, so it is useful for fine-grained classification purposes.*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4504/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4503/comments
https://api.github.com/repos/huggingface/datasets/issues/4503/events
https://github.com/huggingface/datasets/pull/4503
1,272,367,055
PR_kwDODunzps45twLR
4,503
Add feverous config to fever dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4503). All of your documentation changes will be reflected on that endpoint." ]
1,655,305,187,000
1,655,305,665,000
null
MEMBER
null
Related to: #4452 and #3792.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4503/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4503", "html_url": "https://github.com/huggingface/datasets/pull/4503", "diff_url": "https://github.com/huggingface/datasets/pull/4503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4503.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4502/comments
https://api.github.com/repos/huggingface/datasets/issues/4502/events
https://github.com/huggingface/datasets/issues/4502
1,272,353,700
I_kwDODunzps5L1pOk
4,502
Logic bug in arrow_writer?
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,655,304,600,000
1,655,304,600,000
null
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488

I got an error, and I found it was caused by `batch_examples` being `{}`. I wonder if the code should be as follows:

```diff
- if batch_examples and len(next(iter(batch_examples.values()))) == 0:
+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
      return
```

@lhoestq
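A minimal sketch of how the two guards differ on an empty batch (illustrative only, not the actual `arrow_writer.py` code path):

```python
def proposed_guard(batch_examples) -> bool:
    # Early-return condition from the suggested fix: true for an empty
    # dict *or* for columns that exist but contain no rows.
    return not batch_examples or len(next(iter(batch_examples.values()))) == 0

assert proposed_guard({})                  # empty batch: return early
assert proposed_guard({"col": []})         # empty column: return early
assert not proposed_guard({"col": [1]})    # real data: keep writing

# The original `if batch_examples and ...` is False for {}, so the writer
# falls through with nothing to write instead of returning early.
```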
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4502/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4501/comments
https://api.github.com/repos/huggingface/datasets/issues/4501/events
https://github.com/huggingface/datasets/pull/4501
1,272,300,646
PR_kwDODunzps45th2M
4,501
Corrected broken links in doc
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,302,337,000
1,655,305,865,000
1,655,305,256,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4501/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4501/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4501", "html_url": "https://github.com/huggingface/datasets/pull/4501", "diff_url": "https://github.com/huggingface/datasets/pull/4501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4501.patch", "merged_at": 1655305256000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4500/comments
https://api.github.com/repos/huggingface/datasets/issues/4500/events
https://github.com/huggingface/datasets/pull/4500
1,272,281,992
PR_kwDODunzps45tdxk
4,500
Add `concatenate_datasets` for iterable datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4500). All of your documentation changes will be reflected on that endpoint." ]
1,655,301,530,000
1,655,302,799,000
null
MEMBER
null
`concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset`, unlike `interleave_datasets`.

Fix https://github.com/huggingface/datasets/issues/2564

I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals, and I moved `concatenate_datasets` from arrow_dataset.py to combine.py to keep it together with `interleave_datasets`.
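A usage sketch of what this enables (assuming a `datasets` version that includes this PR; the repo names below are placeholders):

```python
from itertools import islice
from datasets import concatenate_datasets, load_dataset

# Two streaming (iterable) datasets with compatible features.
ds1 = load_dataset("user/dataset_a", split="train", streaming=True)
ds2 = load_dataset("user/dataset_b", split="train", streaming=True)

# With this PR, concatenation also works on iterable datasets: it yields
# every example of ds1, then every example of ds2.
combined = concatenate_datasets([ds1, ds2])
for example in islice(combined, 3):
    print(example)
```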
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4500/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4500", "html_url": "https://github.com/huggingface/datasets/pull/4500", "diff_url": "https://github.com/huggingface/datasets/pull/4500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4500.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4499/comments
https://api.github.com/repos/huggingface/datasets/issues/4499/events
https://github.com/huggingface/datasets/pull/4499
1,272,118,162
PR_kwDODunzps45s6Jh
4,499
fix ETT m1/m2 test/val dataset
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits", "ah yes!" ]
1,655,293,862,000
1,655,304,956,000
1,655,304,313,000
CONTRIBUTOR
null
https://huggingface.co/datasets/ett/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4499/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4499/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4499", "html_url": "https://github.com/huggingface/datasets/pull/4499", "diff_url": "https://github.com/huggingface/datasets/pull/4499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4499.patch", "merged_at": 1655304312000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4498/comments
https://api.github.com/repos/huggingface/datasets/issues/4498/events
https://github.com/huggingface/datasets/issues/4498
1,272,100,549
I_kwDODunzps5L0rbF
4,498
WER and CER > 1
{ "login": "sadrasabouri", "id": 43045767, "node_id": "MDQ6VXNlcjQzMDQ1NzY3", "avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadrasabouri", "html_url": "https://github.com/sadrasabouri", "followers_url": "https://api.github.com/users/sadrasabouri/followers", "following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}", "gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions", "organizations_url": "https://api.github.com/users/sadrasabouri/orgs", "repos_url": "https://api.github.com/users/sadrasabouri/repos", "events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}", "received_events_url": "https://api.github.com/users/sadrasabouri/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0" ]
1,655,292,912,000
1,655,311,085,000
1,655,311,085,000
NONE
null
## Describe the bug

It seems that in some cases in which the `prediction` is longer than the `reference` we may get a word/character error rate higher than 1, which is a bit odd. If it's a real bug, I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to:

```python
return min(incorrect / total, 1.0)
```

## Steps to reproduce the bug

```python
from datasets import load_metric

wer = load_metric("wer")
wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"])
print(wer_value)
```

## Expected results

```
1.0
```

## Actual results

```
3.0
```

## Environment info

- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
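For context, and consistent with the resolution in the comment above, the standard WER formula divides by the number of reference words, so insertions alone can push it past 1.0. A worked version of this report's example:

```python
# WER = (S + D + I) / N for prediction "Hi World vka" vs reference "Hello".
S, D, I = 1, 0, 2   # one substitution ("Hello" -> "Hi"), two insertions
N = 1               # the reference contains a single word
print((S + D + I) / N)  # 3.0, matching the actual results above
```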
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4498/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4497/comments
https://api.github.com/repos/huggingface/datasets/issues/4497/events
https://github.com/huggingface/datasets/pull/4497
1,271,964,338
PR_kwDODunzps45sYns
4,497
Re-add download_manager module in utils
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.", "It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```", "As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it", "Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later" ]
1,655,286,273,000
1,655,289,208,000
1,655,288,624,000
MEMBER
null
https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`.

This breaks `evaluate`, which imports `DownloadMode` from `datasets.utils.download_manager`.

This PR re-adds `datasets.utils.download_manager` without circular imports. We could also show a message saying that accessing it is deprecated, but I think we can do that in a subsequent PR and just focus on doing a patch release for now.
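A hypothetical sketch of the re-export approach described here (not the PR's exact code): the legacy module simply forwards names from the new location so old import paths keep working.

```python
# datasets/utils/download_manager.py (illustrative compatibility shim)
# Keep the old import path alive for downstream libraries like `evaluate`.
from datasets.download.download_manager import (  # noqa: F401
    DownloadManager,
    DownloadMode,
)
```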
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4497/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4497", "html_url": "https://github.com/huggingface/datasets/pull/4497", "diff_url": "https://github.com/huggingface/datasets/pull/4497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4497.patch", "merged_at": 1655288624000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4496/comments
https://api.github.com/repos/huggingface/datasets/issues/4496/events
https://github.com/huggingface/datasets/pull/4496
1,271,945,704
PR_kwDODunzps45sUnW
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4496). All of your documentation changes will be reflected on that endpoint.", "FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!" ]
1,655,285,356,000
1,655,286,253,000
null
CONTRIBUTOR
null
As detailed in #4419, and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between tuples, in order to make the tests more verbose.
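A minimal illustration of the swap (a generic example, not taken from the repo's test suite):

```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_shape(self):
        shape = (2, 3)
        # Unlike plain assertEqual, assertTupleEqual also fails with a
        # type-specific message if either operand is not a tuple.
        self.assertTupleEqual(shape, (2, 3))

if __name__ == "__main__":
    unittest.main()
```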
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4496/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4496", "html_url": "https://github.com/huggingface/datasets/pull/4496", "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4495/comments
https://api.github.com/repos/huggingface/datasets/issues/4495/events
https://github.com/huggingface/datasets/pull/4495
1,271,851,025
PR_kwDODunzps45sAgO
4,495
Fix patching module that doesn't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,281,070,000
1,655,311,249,000
1,655,283,249,000
MEMBER
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true

When trying to patch `scipy.io.loadmat`:

```python
ModuleNotFoundError: No module named 'scipy'
```

Instead, it shouldn't raise an error and should do nothing.

Bug introduced by #4375

Fix https://github.com/huggingface/datasets/issues/4494
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4495/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4495", "html_url": "https://github.com/huggingface/datasets/pull/4495", "diff_url": "https://github.com/huggingface/datasets/pull/4495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4495.patch", "merged_at": 1655283249000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4494/comments
https://api.github.com/repos/huggingface/datasets/issues/4494/events
https://github.com/huggingface/datasets/issues/4494
1,271,850,599
I_kwDODunzps5LzuZn
4,494
Patching fails for modules that are not installed or don't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,655,281,049,000
1,655,283,249,000
1,655,283,249,000
MEMBER
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true

When trying to patch `scipy.io.loadmat`:

```python
ModuleNotFoundError: No module named 'scipy'
```

Instead, it shouldn't raise an error and should do nothing.

We use patching to extend such functions to support remote URLs and work in streaming mode.
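A sketch of the guarded-patching idea (a hypothetical helper for illustration, not the actual `datasets` patcher): skip silently when the target module is not installed.

```python
import importlib

def patch_attr_if_available(module_name: str, attr: str, replacement) -> None:
    """Replace `module.attr` with `replacement`, doing nothing when the
    module is not installed."""
    try:
        module = importlib.import_module(module_name)
    except ModuleNotFoundError:
        return  # e.g. scipy missing: skip instead of raising
    if hasattr(module, attr):
        setattr(module, attr, replacement)

patch_attr_if_available("scipy.io", "loadmat", lambda *a, **k: None)
```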
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4494/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4493/comments
https://api.github.com/repos/huggingface/datasets/issues/4493/events
https://github.com/huggingface/datasets/pull/4493
1,271,306,385
PR_kwDODunzps45qL7J
4,493
Add `@transmit_format` in `flatten`, `rename_column`, and `rename_columns`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! 🤗 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this! Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this." ]
1,655,237,349,000
1,655,310,206,000
null
CONTRIBUTOR
null
As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should add the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated.
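For intuition, a much-simplified sketch of what a format-transmitting decorator does (hypothetical code, not the actual `@transmit_format` implementation in `arrow_dataset.py`): it captures the caller's format and re-applies it to the dataset returned by the transform. Note this naive version would break on column renames, which matches the caveat in the review comments above.

```python
import functools

def transmit_format_sketch(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # Capture the format state before the transform runs...
        fmt_type, fmt_columns = self._format_type, self._format_columns
        out = method(self, *args, **kwargs)
        # ...and re-apply it on the new dataset the transform returns.
        out.set_format(type=fmt_type, columns=fmt_columns)
        return out
    return wrapper
```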
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4493/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4493", "html_url": "https://github.com/huggingface/datasets/pull/4493", "diff_url": "https://github.com/huggingface/datasets/pull/4493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4493.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4492/comments
https://api.github.com/repos/huggingface/datasets/issues/4492/events
https://github.com/huggingface/datasets/pull/4492
1,271,112,497
PR_kwDODunzps45pktu
4,492
Pin the revision in imagenet download links
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,226,917,000
1,655,228,113,000
1,655,227,545,000
MEMBER
null
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split it into many more shards for better parallelism.

cc @mariosasko
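An illustrative sketch of revision pinning (the sha and file name below are placeholders, not the real imagenet-1k layout): resolving files against a fixed commit instead of a moving branch keeps old script versions working even if the files are later reorganized.

```python
# Placeholder commit sha; a real script would pin the actual revision.
_REVISION = "0123456789abcdef0123456789abcdef01234567"
_BASE_URL = f"https://huggingface.co/datasets/imagenet-1k/resolve/{_REVISION}"

# URLs built this way keep resolving even if the main branch is restructured.
_DATA_URL = f"{_BASE_URL}/data/train_images_0.tar.gz"
```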
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4492/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4492", "html_url": "https://github.com/huggingface/datasets/pull/4492", "diff_url": "https://github.com/huggingface/datasets/pull/4492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4492.patch", "merged_at": 1655227545000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4491/comments
https://api.github.com/repos/huggingface/datasets/issues/4491/events
https://github.com/huggingface/datasets/issues/4491
1,270,803,822
I_kwDODunzps5Lvu1u
4,491
Dataset Viewer issue for Pavithree/test
{ "login": "Pavithree", "id": 23344465, "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pavithree", "html_url": "https://github.com/Pavithree", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "repos_url": "https://api.github.com/users/Pavithree/repos", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset." ]
1,655,212,990,000
1,655,217,441,000
1,655,217,273,000
NONE
null
### Link

https://huggingface.co/datasets/Pavithree/test

### Description

I have extracted a subset of the original eli5 dataset found on the Hugging Face Hub. However, while loading the dataset, it throws an `ArrowNotImplementedError: Unsupported cast from string to null using function cast_null` error. Is there anything missing from my end? Kindly help.

### Owner

_No response_
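Building on the resolution in the comment above (pyarrow infers the schema from the first entries, so a leading null can pin a column to type `null`), one possible workaround is to declare the schema explicitly; the column names below are placeholders:

```python
from datasets import Features, Value, load_dataset

# Explicit features stop pyarrow from inferring a `null` column type when
# the first JSON entries contain nulls.
features = Features({"title": Value("string"), "answer": Value("string")})
ds = load_dataset("json", data_files="subset.json", features=features)
```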
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4491/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4490/comments
https://api.github.com/repos/huggingface/datasets/issues/4490/events
https://github.com/huggingface/datasets/issues/4490
1,270,719,074
I_kwDODunzps5LvaJi
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,655,209,180,000
1,655,209,180,000
null
CONTRIBUTOR
null
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
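For reference, a small sketch of the prototype nested-tensor API (shown here as `torch.nested.nested_tensor`, the entry point in recent PyTorch releases; the exact name may differ across versions since the API is still in flux):

```python
import torch

# Two sequences of different lengths packed together without padding.
nt = torch.nested.nested_tensor([
    torch.tensor([1, 2]),
    torch.tensor([3, 4, 5]),
])
print(nt.is_nested)  # True
```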
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4490/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4489/comments
https://api.github.com/repos/huggingface/datasets/issues/4489/events
https://github.com/huggingface/datasets/pull/4489
1,270,706,195
PR_kwDODunzps45oONF
4,489
Add SV-Ident dataset
{ "login": "e-tornike", "id": 20404466, "node_id": "MDQ6VXNlcjIwNDA0NDY2", "avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-tornike", "html_url": "https://github.com/e-tornike", "followers_url": "https://api.github.com/users/e-tornike/followers", "following_url": "https://api.github.com/users/e-tornike/following{/other_user}", "gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}", "starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions", "organizations_url": "https://api.github.com/users/e-tornike/orgs", "repos_url": "https://api.github.com/users/e-tornike/repos", "events_url": "https://api.github.com/users/e-tornike/events{/privacy}", "received_events_url": "https://api.github.com/users/e-tornike/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,655,208,540,000
1,655,292,918,000
null
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4489/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4489", "html_url": "https://github.com/huggingface/datasets/pull/4489", "diff_url": "https://github.com/huggingface/datasets/pull/4489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4489.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4488/comments
https://api.github.com/repos/huggingface/datasets/issues/4488/events
https://github.com/huggingface/datasets/pull/4488
1,270,613,857
PR_kwDODunzps45n6Ja
4,488
Update PASS dataset version
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,203,634,000
1,655,224,915,000
1,655,224,348,000
CONTRIBUTOR
null
Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4488/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4488", "html_url": "https://github.com/huggingface/datasets/pull/4488", "diff_url": "https://github.com/huggingface/datasets/pull/4488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4488.patch", "merged_at": 1655224348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4487/comments
https://api.github.com/repos/huggingface/datasets/issues/4487/events
https://github.com/huggingface/datasets/pull/4487
1,270,525,163
PR_kwDODunzps45nm5J
4,487
Support streaming UDHR dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,199,213,000
1,655,269,762,000
1,655,269,189,000
MEMBER
null
This PR: - Adds support for streaming the UDHR dataset - Adds the BCP 47 language code as a feature
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4487/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4487", "html_url": "https://github.com/huggingface/datasets/pull/4487", "diff_url": "https://github.com/huggingface/datasets/pull/4487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4487.patch", "merged_at": 1655269189000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4486/comments
https://api.github.com/repos/huggingface/datasets/issues/4486/events
https://github.com/huggingface/datasets/pull/4486
1,269,518,084
PR_kwDODunzps45kP88
4,486
Add CCAgT dataset
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4486). All of your documentation changes will be reflected on that endpoint.", "Hi! Excellent job @johnnv1! There were typos/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?", "I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?", "Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation", "I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)" ]
1,655,130,019,000
1,655,314,747,000
null
NONE
null
As described in #4075, I could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data splits. In summary, to achieve a better distribution, the data in this dataset should be separated based on the number of NORs in each image.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4486/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4486", "html_url": "https://github.com/huggingface/datasets/pull/4486", "diff_url": "https://github.com/huggingface/datasets/pull/4486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4486.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4485/comments
https://api.github.com/repos/huggingface/datasets/issues/4485/events
https://github.com/huggingface/datasets/pull/4485
1,269,463,054
PR_kwDODunzps45kD7A
4,485
Fix cast to null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,127,872,000
1,655,214,234,000
1,655,213,654,000
MEMBER
null
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type. Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type). Fix https://github.com/huggingface/datasets/issues/4483
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4485/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4485", "html_url": "https://github.com/huggingface/datasets/pull/4485", "diff_url": "https://github.com/huggingface/datasets/pull/4485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4485.patch", "merged_at": 1655213654000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4484/comments
https://api.github.com/repos/huggingface/datasets/issues/4484/events
https://github.com/huggingface/datasets/pull/4484
1,269,383,811
PR_kwDODunzps45jywZ
4,484
Better ImportError message when a dataset script dependency is missing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Discussed offline with @mariosasko, merging :)" ]
1,655,124,277,000
1,655,128,831,000
1,655,128,247,000
MEMBER
null
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ``` to ``` ImportError: To be able to use bigbench, you need to install the following dependency: bigbench. Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4484/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4484", "html_url": "https://github.com/huggingface/datasets/pull/4484", "diff_url": "https://github.com/huggingface/datasets/pull/4484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4484.patch", "merged_at": 1655128247000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4483/comments
https://api.github.com/repos/huggingface/datasets/issues/4483/events
https://github.com/huggingface/datasets/issues/4483
1,269,253,840
I_kwDODunzps5Lp0bQ
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. We'll do a new release soon\r\n\r\nIn the meantime you can fix it by specifying in advance that the \"label\" are integers:\r\n```python\r\nimport numpy as np\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\n# explicitly say that the \"label\" type is int64, even though it contains only null values\r\nds = ds.cast_column(\"label\", Sequence(Value(\"int64\")))\r\n\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\n\r\nds_mapped = ds.map(mapper,batched=True)\r\n```" ]
1,655,117,272,000
1,655,213,654,000
1,655,213,654,000
NONE
null
## Describe the bug Dataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'. This appears to be due to the interaction of arrow internals and some assumptions made by datasets. The bug appeared when binarizing some labels, and then adding a dataset which had all these labels absent (to force the model to not label such empty strings with anything). The fact that this only happens in batched mode is particularly strange. ## Steps to reproduce the bug ```python import numpy as np ds = Dataset.from_dict( { "text": ["the lazy dog jumps over the quick fox", "another sentence"], "label": [[], []], } ) def mapper(features): features['label'] = [ [0,0,0] for l in features['label'] ] return features ds_mapped = ds.map(mapper,batched=True) ``` ## Expected results Not crashing ## Actual results ``` ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map return self._map_single( ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper out = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single writer.write_batch(batch) ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch arrays.append(pa.array(typed_sequence)) pyarrow/array.pxi:230: in pyarrow.lib.array ??? pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol ??? ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature casted_values = _c(array.values, feature.feature) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast return array.cast(pa_type) pyarrow/array.pxi:915: in pyarrow.lib.Array.cast ??? ../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast return call_function("cast", [arr], options) pyarrow/_compute.pyx:542: in pyarrow._compute.call_function ??? pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call ??? pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status ??? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > ??? E pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null pyarrow/error.pxi:121: ArrowNotImplementedError ``` ## Workarounds * Not using batched=True * Using an np.array([],dtype=float) or similar instead of [] in the input * Naming the output column differently from the input column ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu - Python version: 3.8 - PyArrow version: 8.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4483/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4482/comments
https://api.github.com/repos/huggingface/datasets/issues/4482/events
https://github.com/huggingface/datasets/pull/4482
1,269,237,447
PR_kwDODunzps45jS_c
4,482
Test that TensorFlow is not imported on startup
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4482). All of your documentation changes will be reflected on that endpoint." ]
1,655,116,429,000
1,655,122,701,000
null
MEMBER
null
TF takes some time to be imported, and also uses some GPU memory. I just added a test to make sure that in the future it's never imported by default when ```python import datasets ``` is called. Right now this fails because `huggingface_hub` does import tensorflow (though this is now fixed on their `main` branch). I'll mark this PR as ready for review once `huggingface_hub` has a new release
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4482/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4482", "html_url": "https://github.com/huggingface/datasets/pull/4482", "diff_url": "https://github.com/huggingface/datasets/pull/4482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4482.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4481/comments
https://api.github.com/repos/huggingface/datasets/issues/4481/events
https://github.com/huggingface/datasets/pull/4481
1,269,187,792
PR_kwDODunzps45jIRi
4,481
Fix iwslt2017
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI fails are just abut missing tags in the dataset card, merging !" ]
1,655,113,881,000
1,655,117,397,000
1,655,116,818,000
MEMBER
null
The files were moved to Google Drive, so I hosted them on the Hub instead (which is OK according to the license). I also updated the `dataset_infos.json`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4481/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4481", "html_url": "https://github.com/huggingface/datasets/pull/4481", "diff_url": "https://github.com/huggingface/datasets/pull/4481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4481.patch", "merged_at": 1655116818000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4480/comments
https://api.github.com/repos/huggingface/datasets/issues/4480/events
https://github.com/huggingface/datasets/issues/4480
1,268,921,567
I_kwDODunzps5LojTf
4,480
Bigbench tensorflow GPU dependency
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https://github.com/google/BIG-bench) as well regarding the `AttributeError`", "I'm on vacation for the next week, so won't be able to do much debugging at the moment. Sorry for the inconvenience.\r\nBut I did quickly take a look:\r\n\r\n**pypi**:\r\nI managed to reproduce the above error with the pypi version begin out of date. \r\nThe version on `https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` should be up to date, but it was my understanding that there was some issue with the pypi upload, so I don't even understand why there is a version [on pypi from April 1](https://pypi.org/project/bigbench/0.0.1/). Perhaps @ethansdyer, who's handling the pypi upload, knows the answer to that?\r\n\r\n**OOM error**:\r\nBut, I'm unable to reproduce the OOM error in a google colab with GPU enabled.\r\nThis is what I ran:\r\n```\r\n!pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n!pip install datasets\r\n\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"bigbench\",\"swedish_to_german_proverbs\")\r\n``` \r\nThe `swedish_to_german_proverbs`task is only 72 examples, so I don't understand what could be causing the OOM error. Loading the task has no effect on the RAM for me. @cceyda Can you confirm that this does not occur in a [colab](https://colab.research.google.com/)?\r\nIf the GPU is somehow causing issues on your system, disabling the GPU from TF might be an option too\r\n```\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "Solved.\r\nYes it works on colab, and somehow magically on my machine too now. hmm not sure what was wrong before I had used a fresh venv both times with just the dataloading code, and tried multiple times. (maybe just a wrong tensorflow version got mixed up somehow) The tensorflow call seems to come from the bigbench side anyway.\r\n\r\nabout bigbench pypi version update, I opened an issue over there https://github.com/google/BIG-bench/issues/846\r\n\r\nanyway closing this now. If anyone else has the same problem can re-open." ]
1,655,097,846,000
1,655,235,924,000
1,655,235,923,000
CONTRIBUTOR
null
## Describe the bug Loading bigbench ```py from datasets import load_dataset dataset = load_dataset("bigbench","swedish_to_german_proverbs") ``` tries to use the GPU and fails with OOM, with the following error ``` Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0... Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400 Aborted (core dumped) ``` I think this is because the bigbench dependency (below) installs tensorflow (GPU version) and data loading tries to use the GPU by default. `pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` while just doing 'pip install bigbench' results in the following error ``` File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module> class Bigbench(datasets.GeneratorBasedBuilder): File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names() AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names' ``` ## Steps to avoid the bug Not ideal, but this can be worked around with (since I don't really use tensorflow elsewhere) `pip uninstall tensorflow` `pip install tensorflow-cpu` ## Environment info - datasets @ master - Python version: 3.7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4480/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4479/comments
https://api.github.com/repos/huggingface/datasets/issues/4479/events
https://github.com/huggingface/datasets/pull/4479
1,268,558,237
PR_kwDODunzps45hHtZ
4,479
Include entity positions as feature in ReCoRD
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4479). All of your documentation changes will be reflected on that endpoint." ]
1,655,034,988,000
1,655,036,946,000
null
CONTRIBUTOR
null
https://huggingface.co/datasets/super_glue/viewer/record/validation TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD. Currently, the loading script ignores the entity positions ("entity_start", "entity_end") and only records the entity text. This might be because the training method of the official baseline is to make n training instances from one datapoint by replacing \"\@ placeholder\" in the query with each entity individually. But this multiplies the already heavy computation several times over. DeBERTa instead uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance from one datapoint. It is far more efficient and has proved effective on the ReCoRD task.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4479/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4479", "html_url": "https://github.com/huggingface/datasets/pull/4479", "diff_url": "https://github.com/huggingface/datasets/pull/4479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4479.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4478/comments
https://api.github.com/repos/huggingface/datasets/issues/4478/events
https://github.com/huggingface/datasets/issues/4478
1,268,358,213
I_kwDODunzps5LmZxF
4,478
Dataset slow during model training
{ "login": "lehrig", "id": 9555494, "node_id": "MDQ6VXNlcjk1NTU0OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9555494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lehrig", "html_url": "https://github.com/lehrig", "followers_url": "https://api.github.com/users/lehrig/followers", "following_url": "https://api.github.com/users/lehrig/following{/other_user}", "gists_url": "https://api.github.com/users/lehrig/gists{/gist_id}", "starred_url": "https://api.github.com/users/lehrig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lehrig/subscriptions", "organizations_url": "https://api.github.com/users/lehrig/orgs", "repos_url": "https://api.github.com/users/lehrig/repos", "events_url": "https://api.github.com/users/lehrig/events{/privacy}", "received_events_url": "https://api.github.com/users/lehrig/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! cc @Rocketknight1 maybe you know better ?\r\n\r\nI'm not too familiar with `tf.data.experimental.save`. Note that `datasets` uses memory mapping, so depending on your hardware and the disk you are using you can expect performance differences with a dataset loaded in RAM", "Hi @lehrig, I suspect what's happening here is that our `to_tf_dataset()` method has some performance issues when streaming samples. This is usually not a problem, but they become apparent when streaming a vision dataset into a very small vision model, which will need a lot of sample throughput to saturate the GPU.\r\n\r\nWhen you save a `tf.data.Dataset` with `tf.data.experimental.save`, all of the samples from the dataset (which are, in this case, batches of images), are saved to disk. When you load this saved dataset, you're effectively bypassing `to_tf_dataset()` entirely, which alleviates this performance bottleneck.\r\n\r\n`to_tf_dataset()` is something we're actively working on overhauling right now - particularly for image datasets, we want to make it possible to access the underlying images with `tf.data` without going through the current layer of indirection with `Arrow`, which should massively improve simplicity and performance. \r\n\r\nHowever, if you just want this to work quickly but without needing your save/load hack, my advice would be to simply load the dataset into memory if it's small enough to fit. Since all your samples have the same dimensions, you can do this simply with:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ndataset = dataset.with_format(\"numpy\")\r\ndata_in_memory = dataset[:]\r\n```\r\n\r\nThen you can simply do something like:\r\n\r\n```\r\nmodel.fit(data_in_memory[\"pixel_values\"], data_in_memory[\"labels\"])\r\n```", "Thanks for the information! 
\r\n\r\nI have now updated the training code like so:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ntrain_dataset = dataset[\"train\"][:]\r\nvalidation_dataset = dataset[\"dev\"][:]\r\n\r\n...\r\n\r\nmodel.fit(\r\n train_dataset[\"pixel_values\"],\r\n train_dataset[\"label\"],\r\n epochs=epochs,\r\n validation_data=(\r\n validation_dataset[\"pixel_values\"],\r\n validation_dataset[\"label\"]\r\n ),\r\n callbacks=[earlyStopping, mcp_save, reduce_lr_loss]\r\n)\r\n```\r\n\r\n- Creating the in-memory dataset is quite quick\r\n- But: There is now a long wait (~4-5 Minutes) before the training starts (why?)\r\n- And: Training times have improved but the very first epoch leaves me wondering why it takes so long (why?)\r\n\r\n**Epoch Breakdown:**\r\n- Epoch 1/10\r\n78s 12s/step - loss: 3.1307 - accuracy: 0.0737 - val_loss: 2.2827 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 2/10\r\n1s 168ms/step - loss: 2.3616 - accuracy: 0.2350 - val_loss: 2.2679 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 3/10\r\n1s 189ms/step - loss: 2.0221 - accuracy: 0.3180 - val_loss: 2.2670 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 4/10\r\n0s 67ms/step - loss: 1.8895 - accuracy: 0.3548 - val_loss: 2.2771 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 5/10\r\n0s 67ms/step - loss: 1.7846 - accuracy: 0.3963 - val_loss: 2.2860 - val_accuracy: 0.1455 - lr: 0.0010\r\n- Epoch 6/10\r\n0s 65ms/step - loss: 1.5946 - accuracy: 0.4516 - val_loss: 2.2938 - val_accuracy: 0.1636 - lr: 0.0010\r\n- Epoch 7/10\r\n0s 63ms/step - loss: 1.4217 - accuracy: 0.5115 - val_loss: 2.2968 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 8/10\r\n0s 67ms/step - loss: 1.3089 - accuracy: 0.5438 - val_loss: 2.2842 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 9/10\r\n1s 184ms/step - loss: 1.2480 - accuracy: 0.5806 - val_loss: 2.2652 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 10/10\r\n0s 65ms/step - loss: 1.2699 - accuracy: 0.5622 - val_loss: 2.2670 - val_accuracy: 0.2000 - lr: 0.0010\r\n\r\n", "Regarding the new long ~5 min. wait introduced by the in-memory dataset update: this might be causing it? https://datascience.stackexchange.com/questions/33364/why-model-fit-generator-in-keras-is-taking-so-much-time-even-before-picking-the\r\n\r\nFor now, my save/load hack is still more performant, even though having more boiler-plate code :/ ", "That 5 minute wait is quite surprising! I don't have a good explanation for why it's happening, but it can't be an issue with `datasets` or `tf.data` because you're just fitting directly on Numpy arrays at this point. All I can suggest is seeing if you can isolate the issue - for example, does fitting on a smaller dataset containing only 10% of the original data reduce the wait? This might indicate the delay is caused by your data being copied or converted somehow. Alternatively, you could try removing things like callbacks and seeing if you could isolate the issue there." ]
1,654,976,419,000
1,655,208,271,000
null
NONE
null
## Describe the bug While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training. First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it. Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with 🤗 Datasets. Any idea what's the reason for this and how to speed-up training with 🤗 Datasets? ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset import os dataset_dir = "./dataset" prep_dataset_dir = "./prepdataset" model_dir = "./model" # Load Data dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized") def read_image_file(example): with open(example["image"].filename, "rb") as f: example["image"] = {"bytes": f.read()} return example dataset = dataset.map(read_image_file) dataset.save_to_disk(dataset_dir) # Preprocess from datasets import ( Array3D, DatasetDict, Features, load_from_disk, Sequence, Value ) import numpy as np from transformers import ImageFeatureExtractionMixin dataset = load_from_disk(dataset_dir) num_classes = dataset["train"].features["label"].num_classes one_hot_matrix = np.eye(num_classes) feature_extractor = ImageFeatureExtractionMixin() def to_pixels(image): image = feature_extractor.resize(image, size=size) image = feature_extractor.to_numpy_array(image, channel_first=False) image = image / 255.0 return image def process(examples): examples["pixel_values"] = [ to_pixels(image) for image in examples["image"] ] examples["label"] = [ one_hot_matrix[label] for label in examples["label"] ] return examples features = Features({ "pixel_values": Array3D(dtype="float32", shape=(size, size, 3)), "label": Sequence(feature=Value(dtype="int32"), length=num_classes) }) prep_dataset = dataset.map( process, remove_columns=["image"], batched=True, batch_size=batch_size, num_proc=2, features=features, ) prep_dataset = prep_dataset.with_format("numpy") # Split train_dev_dataset = prep_dataset['test'].train_test_split( test_size=test_size, shuffle=True, seed=seed ) train_dev_test_dataset = DatasetDict({ 'train': train_dev_dataset['train'], 'dev': train_dev_dataset['test'], 'test': prep_dataset['test'], }) train_dev_test_dataset.save_to_disk(prep_dataset_dir) # Train Model import datetime import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping from transformers import DefaultDataCollator dataset = load_from_disk(prep_data_dir) data_collator = DefaultDataCollator(return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) validation_dataset = dataset["dev"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) print(f'{datetime.datetime.now()} - Saving Data') tf.data.experimental.save(train_dataset, model_dir+"/train") tf.data.experimental.save(validation_dataset, model_dir+"/val") print(f'{datetime.datetime.now()} - Loading Data') train_dataset = tf.data.experimental.load(model_dir+"/train") validation_dataset = tf.data.experimental.load(model_dir+"/val") shape = np.shape(dataset["train"][0]["pixel_values"]) backbone = InceptionV3( include_top=False, weights='imagenet', input_shape=shape ) for layer in backbone.layers: layer.trainable = False model = Sequential() model.add(backbone) model.add(GlobalAveragePooling2D()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(64, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(10, activation='softmax')) model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'] ) print(model.summary()) earlyStopping = EarlyStopping( monitor='val_loss', patience=10, verbose=0, mode='min' ) mcp_save = ModelCheckpoint( f'{model_dir}/best_model.hdf5', save_best_only=True, monitor='val_loss', mode='min' ) reduce_lr_loss = ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=0.0001, mode='min' ) hist = model.fit( train_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=[earlyStopping, mcp_save, reduce_lr_loss] ) ``` ## Expected results Same performance when training without my "save/load hack" or a good explanation/recommendation about the issue. ## Actual results Performance slower without my "save/load hack". **Epoch Breakdown (without my "save/load hack"):** - Epoch 1/10 41s 2s/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010 - Epoch 2/10 32s 2s/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010 - Epoch 3/10 36s 3s/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010 - Epoch 4/10 36s 3s/step - loss: 0.2721 - accuracy: 0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010 - Epoch 5/10 32s 2s/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 6/10 42s 3s/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 7/10 32s 2s/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 8/10 32s 2s/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 9/10 loss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 10/10 32s 2s/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010 **Epoch Breakdown (with my "save/load hack"):** - Epoch 1/10 13s 625ms/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010 - Epoch 2/10 0s 80ms/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 3/10 0s 77ms/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 4/10 1s 98ms/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 5/10 1s 204ms/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 6/10 1s 210ms/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 7/10 1s 205ms/step - loss: 1.4018 - accuracy: 0.5312 - val_loss: 2.2917 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 8/10 1s 207ms/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 9/10 1s 214ms/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 10/10 1s 207ms/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010 ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 - TensorFlow: 2.8.0 - GPU (used during training): Tesla V100-SXM2-32GB
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4478/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4477/comments
https://api.github.com/repos/huggingface/datasets/issues/4477/events
https://github.com/huggingface/datasets/issues/4477
1,268,308,986
I_kwDODunzps5LmNv6
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
{ "login": "AshTayade", "id": 42551754, "node_id": "MDQ6VXNlcjQyNTUxNzU0", "avatar_url": "https://avatars.githubusercontent.com/u/42551754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AshTayade", "html_url": "https://github.com/AshTayade", "followers_url": "https://api.github.com/users/AshTayade/followers", "following_url": "https://api.github.com/users/AshTayade/following{/other_user}", "gists_url": "https://api.github.com/users/AshTayade/gists{/gist_id}", "starred_url": "https://api.github.com/users/AshTayade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AshTayade/subscriptions", "organizations_url": "https://api.github.com/users/AshTayade/orgs", "repos_url": "https://api.github.com/users/AshTayade/repos", "events_url": "https://api.github.com/users/AshTayade/events{/privacy}", "received_events_url": "https://api.github.com/users/AshTayade/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ", "Apparently it finds `scoring-scripts/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc." ]
1,654,962,557,000
1,655,114,366,000
null
NONE
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4477/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4476/comments
https://api.github.com/repos/huggingface/datasets/issues/4476/events
https://github.com/huggingface/datasets/issues/4476
1,267,987,499
I_kwDODunzps5Lk_Qr
4,476
`to_pandas` doesn't take into account format.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`", "Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.", "Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```", "Ahhhh Thank you!\r\n\r\nclosing then :)" ]
1,654,892,731,000
1,655,314,901,000
1,655,314,901,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solution you'd like** ```python from datasets import Dataset ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]}) pandas_df = ds.with_format(columns=['a', 'b']).to_pandas() # I would expect `pandas_df` to only include a, b as columns. ``` **Describe alternatives you've considered** I could remove all the columns that I don't want, but I don't know all of them in advance. **Additional context** I can probably make a PR with some pointers.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4476/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4475/comments
https://api.github.com/repos/huggingface/datasets/issues/4475/events
https://github.com/huggingface/datasets/pull/4475
1,267,798,451
PR_kwDODunzps45eufw
4,475
Improve error message for missing packages from inside dataset script
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer", "Closing in favor of #4484. " ]
1,654,880,376,000
1,655,126,787,000
1,655,126,203,000
CONTRIBUTOR
null
Improve the error message for missing packages from inside a dataset script: With this change, the error message for missing packages for `bigbench` looks as follows: ``` ImportError: To be able to use bigbench, you need to install the following dependencies: - 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' ``` And this is how it looked before: ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4475/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4475/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4475", "html_url": "https://github.com/huggingface/datasets/pull/4475", "diff_url": "https://github.com/huggingface/datasets/pull/4475.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4475.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4474/comments
https://api.github.com/repos/huggingface/datasets/issues/4474/events
https://github.com/huggingface/datasets/pull/4474
1,267,767,541
PR_kwDODunzps45en98
4,474
[Docs] How to use with PyTorch page
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,878,349,000
1,655,217,632,000
1,655,215,473,000
MEMBER
null
Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :) cc @Rocketknight1 we can try to align both documentations contents now I think cc @stevhliu let me know what you think !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4474/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4474/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4474", "html_url": "https://github.com/huggingface/datasets/pull/4474", "diff_url": "https://github.com/huggingface/datasets/pull/4474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4474.patch", "merged_at": 1655215472000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4473/comments
https://api.github.com/repos/huggingface/datasets/issues/4473/events
https://github.com/huggingface/datasets/pull/4473
1,267,555,994
PR_kwDODunzps45d5-R
4,473
Add SST-2 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "on the hub this dataset is referenced as `sst-2` not `sst2` – is there a canonical orthography? If not, could we name it `sst-2`?", "@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2", "Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already – but they're less popular: https://huggingface.co/models?datasets=sst2)", "OK, I'm taking care of the subsequent PRs on models to align with this dataset name." ]
1,654,868,246,000
1,655,129,494,000
1,655,128,869,000
MEMBER
null
Add SST-2 dataset. Currently it is part of GLUE benchmark. This PR adds it as a standalone dataset. CC: @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4473/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4473", "html_url": "https://github.com/huggingface/datasets/pull/4473", "diff_url": "https://github.com/huggingface/datasets/pull/4473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4473.patch", "merged_at": 1655128869000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4472/comments
https://api.github.com/repos/huggingface/datasets/issues/4472/events
https://github.com/huggingface/datasets/pull/4472
1,267,488,523
PR_kwDODunzps45drcb
4,472
Fix 401 error for unauthenticated requests to non-existing repos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,864,691,000
1,654,866,311,000
1,654,865,757,000
MEMBER
null
The Hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos. This PR adds support for the 401 error and fixes the CI failures on `master`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4472/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4472/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4472", "html_url": "https://github.com/huggingface/datasets/pull/4472", "diff_url": "https://github.com/huggingface/datasets/pull/4472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4472.patch", "merged_at": 1654865756000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4471/comments
https://api.github.com/repos/huggingface/datasets/issues/4471/events
https://github.com/huggingface/datasets/issues/4471
1,267,475,268
I_kwDODunzps5LjCNE
4,471
CI error with repo lhoestq/_dummy
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "fixed by https://github.com/huggingface/datasets/pull/4472" ]
1,654,863,966,000
1,654,867,493,000
1,654,867,493,000
MEMBER
null
## Describe the bug CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269 ``` requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true ``` The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy ``` error: "Repository not found" ``` CC: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4471/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4470/comments
https://api.github.com/repos/huggingface/datasets/issues/4470/events
https://github.com/huggingface/datasets/pull/4470
1,267,470,051
PR_kwDODunzps45dnYw
4,470
Reorder returned validation/test splits in script template
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,863,673,000
1,654,884,250,000
1,654,883,690,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4470/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4470", "html_url": "https://github.com/huggingface/datasets/pull/4470", "diff_url": "https://github.com/huggingface/datasets/pull/4470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4470.patch", "merged_at": 1654883690000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4469/comments
https://api.github.com/repos/huggingface/datasets/issues/4469/events
https://github.com/huggingface/datasets/pull/4469
1,267,213,849
PR_kwDODunzps45cweQ
4,469
Replace data URLs in wider_face dataset once hosted on the Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,848,805,000
1,654,879,328,000
1,654,878,766,000
MEMBER
null
This PR replaces the Google Drive URLs of the data files with our Hub ones, now that the data owners have approved hosting their data on the Hub. They also informed us that their dataset is licensed under CC BY-NC-ND.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4469/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4469/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4469", "html_url": "https://github.com/huggingface/datasets/pull/4469", "diff_url": "https://github.com/huggingface/datasets/pull/4469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4469.patch", "merged_at": 1654878766000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4468/comments
https://api.github.com/repos/huggingface/datasets/issues/4468/events
https://github.com/huggingface/datasets/pull/4468
1,266,715,742
PR_kwDODunzps45bERK
4,468
Generalize tutorials for audio and vision
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,812,044,000
1,655,223,722,000
1,655,223,120,000
MEMBER
null
This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset. Other changes include: - Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder. - Separated the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library. - Renamed some tutorials in the TOC to be clearer and more specific. - Added more text to nudge users towards joining the community and asking questions on the forums. - If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4468/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4468", "html_url": "https://github.com/huggingface/datasets/pull/4468", "diff_url": "https://github.com/huggingface/datasets/pull/4468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4468.patch", "merged_at": 1655223120000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4467/comments
https://api.github.com/repos/huggingface/datasets/issues/4467/events
https://github.com/huggingface/datasets/issues/4467
1,266,218,358
I_kwDODunzps5LePV2
4,467
Transcript string 'null' converted to [None] by load_dataset()
{ "login": "mbarnig", "id": 1360633, "node_id": "MDQ6VXNlcjEzNjA2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1360633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbarnig", "html_url": "https://github.com/mbarnig", "followers_url": "https://api.github.com/users/mbarnig/followers", "following_url": "https://api.github.com/users/mbarnig/following{/other_user}", "gists_url": "https://api.github.com/users/mbarnig/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbarnig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbarnig/subscriptions", "organizations_url": "https://api.github.com/users/mbarnig/orgs", "repos_url": "https://api.github.com/users/mbarnig/repos", "events_url": "https://api.github.com/users/mbarnig/events{/privacy}", "received_events_url": "https://api.github.com/users/mbarnig/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\n‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.\r\n```\r\n(see \"null\" in the last position in the above list).\r\n\r\nIn order to prevent `pandas` from performing that automatic conversion from the string \"null\" to a NaN value, you should pass the `pandas` parameter `keep_default_na=False`:\r\n```python\r\nIn [2]: dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}, keep_default_na=False)\r\nIn [3]: dataset[\"train\"][0][\"transcript\"]\r\nOut[3]: 'null'\r\n```", "Thanks for the quick answer." ]
1,654,784,760,000
1,654,797,337,000
1,654,792,142,000
NONE
null
## Issue I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script `ds_train1 = mydataset.map(prepare_dataset)` the following error was issued: ``` ValueError Traceback (most recent call last) <ipython-input-69-1e8f2b37f5bc> in <module>() ----> 1 ds_train = mydataset_train.map(prepare_dataset) 11 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2450 if not _is_valid_text_input(text): 2451 raise ValueError( -> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) " 2453 "or List[List[str]] (batch of pretokenized examples)." 2454 ) ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples). ``` Debugging this problem was not easy: all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. By deleting this row in the dataset the training worked fine. ## Expected result: transcription 'null' interpreted as 'str' instead of 'None'. ## Reproduction Here is the code to reproduce the error with a one-row dataset. ``` with open("null-test.csv") as f: reader = csv.reader(f) for row in reader: print(row) ``` ['wav_filename', 'wav_filesize', 'transcript'] ['wavs/female/NULL1.wav', '17530', 'null'] ``` dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}) ``` Using custom data configuration default-81ac0c0e27af3514 Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... Downloading data files: 100% 1/1 [00:00<00:00, 29.55it/s] Extracting data files: 100% 1/1 [00:00<00:00, 23.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 1/1 [00:00<00:00, 25.84it/s] ``` print(dataset['train']['transcript']) ``` [None] ## Environment info ``` !pip install datasets==2.2.2 !pip install transformers==4.19.2 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4467/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4466/comments
https://api.github.com/repos/huggingface/datasets/issues/4466/events
https://github.com/huggingface/datasets/pull/4466
1,266,159,920
PR_kwDODunzps45ZLsd
4,466
Optimize contiguous shard and select
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n size = len(ds) // num_shards\r\n shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt is 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n # upload_to_gcs(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt was 2min31s. \r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n<img width=\"574\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/172864881-f1db489a-2305-47f2-a07f-7d3df610b1b8.png\">\r\n", "I addressed your comments @albertvillanova , let me know what you think :)" ]
1,654,782,339,000
1,655,222,670,000
1,655,222,085,000
MEMBER
null
Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it's much more efficient to simply slice the Arrow table instead of building an indices mapping. In particular: - the shard/select operation will be much faster - reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping. Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations. Here is an example of speed-up: ```python >>> import io >>> import numpy as np >>> from datasets import Dataset >>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)}) >>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))` >>> buf = io.BytesIO() >>> %time shard.to_json(buf) Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s] CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms Wall time: 266 ms ``` while previously it was ```python Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s] CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s Wall time: 3.4 s ``` In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON. ## Implementation details I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it checks two possibilities: - if the indices are given as a `range`, it checks that start >= 0 and step = 1 - otherwise, in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping. Having to iterate over the indices doesn't cause performance issues IMO because: - either they are contiguous, and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping - or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4466/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4466/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4466", "html_url": "https://github.com/huggingface/datasets/pull/4466", "diff_url": "https://github.com/huggingface/datasets/pull/4466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4466.patch", "merged_at": 1655222085000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4465/comments
https://api.github.com/repos/huggingface/datasets/issues/4465/events
https://github.com/huggingface/datasets/pull/4465
1,265,754,479
PR_kwDODunzps45X0XY
4,465
Fix bigbench config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,761,979,000
1,654,785,516,000
1,654,784,959,000
MEMBER
null
Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4465/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4465", "html_url": "https://github.com/huggingface/datasets/pull/4465", "diff_url": "https://github.com/huggingface/datasets/pull/4465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4465.patch", "merged_at": 1654784958000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4464/comments
https://api.github.com/repos/huggingface/datasets/issues/4464/events
https://github.com/huggingface/datasets/pull/4464
1,265,682,931
PR_kwDODunzps45XlWW
4,464
Extend support for streaming datasets that use xml.dom.minidom.parse
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,757,905,000
1,654,764,204,000
1,654,763,656,000
MEMBER
null
This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function. This PR adds support for streaming datasets like "Yaxin/SemEval2015". Fix #4453.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4464/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4464", "html_url": "https://github.com/huggingface/datasets/pull/4464", "diff_url": "https://github.com/huggingface/datasets/pull/4464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4464.patch", "merged_at": 1654763655000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4463/comments
https://api.github.com/repos/huggingface/datasets/issues/4463/events
https://github.com/huggingface/datasets/pull/4463
1,265,093,211
PR_kwDODunzps45Vnzu
4,463
Use config_id to check split sizes instead of config name
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "closing in favor of https://github.com/huggingface/datasets/pull/4465" ]
1,654,710,324,000
1,654,762,543,000
1,654,761,997,000
MEMBER
null
Fix https://github.com/huggingface/datasets/issues/4462
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4463/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4463", "html_url": "https://github.com/huggingface/datasets/pull/4463", "diff_url": "https://github.com/huggingface/datasets/pull/4463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4463.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4462/comments
https://api.github.com/repos/huggingface/datasets/issues/4462/events
https://github.com/huggingface/datasets/issues/4462
1,265,079,347
I_kwDODunzps5LZ5Qz
4,462
NonMatchingSplitsSizesError when passing a dataset configuration parameter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Why not adding `max_examples` as part of the config name?", "Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463" ]
1,654,709,484,000
1,654,784,958,000
1,654,784,958,000
MEMBER
null
As noticed in https://github.com/huggingface/datasets/pull/4125, when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`. This is because it checks the expected number of examples against the config with the same name, without taking the `max_examples` parameter into account. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name plus an optional suffix that depends on the config parameters.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4462/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4461/comments
https://api.github.com/repos/huggingface/datasets/issues/4461/events
https://github.com/huggingface/datasets/issues/4461
1,264,800,451
I_kwDODunzps5LY1LD
4,461
AttributeError: module 'datasets' has no attribute 'load_dataset'
{ "login": "AlexNLP", "id": 59248970, "node_id": "MDQ6VXNlcjU5MjQ4OTcw", "avatar_url": "https://avatars.githubusercontent.com/u/59248970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexNLP", "html_url": "https://github.com/AlexNLP", "followers_url": "https://api.github.com/users/AlexNLP/followers", "following_url": "https://api.github.com/users/AlexNLP/following{/other_user}", "gists_url": "https://api.github.com/users/AlexNLP/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexNLP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexNLP/subscriptions", "organizations_url": "https://api.github.com/users/AlexNLP/orgs", "repos_url": "https://api.github.com/users/AlexNLP/repos", "events_url": "https://api.github.com/users/AlexNLP/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexNLP/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,654,696,760,000
1,654,699,260,000
1,654,699,260,000
NONE
null
## Describe the bug I have pip-installed datasets, but this package doesn't have these attributes: load_dataset, load_metric. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4461/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4460/comments
https://api.github.com/repos/huggingface/datasets/issues/4460/events
https://github.com/huggingface/datasets/pull/4460
1,264,644,205
PR_kwDODunzps45UHIs
4,460
Drop Python 3.6 support
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4460). All of your documentation changes will be reflected on that endpoint." ]
1,654,690,218,000
1,655,133,773,000
null
CONTRIBUTOR
null
Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4460/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4460", "html_url": "https://github.com/huggingface/datasets/pull/4460", "diff_url": "https://github.com/huggingface/datasets/pull/4460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4460.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4459/comments
https://api.github.com/repos/huggingface/datasets/issues/4459/events
https://github.com/huggingface/datasets/pull/4459
1,264,636,481
PR_kwDODunzps45UFc8
4,459
Add and fix language tags for udhr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,689,822,000
1,654,691,784,000
1,654,691,233,000
MEMBER
null
Related to #4362.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4459/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4459", "html_url": "https://github.com/huggingface/datasets/pull/4459", "diff_url": "https://github.com/huggingface/datasets/pull/4459.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4459.patch", "merged_at": 1654691233000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4457/comments
https://api.github.com/repos/huggingface/datasets/issues/4457/events
https://github.com/huggingface/datasets/pull/4457
1,263,531,911
PR_kwDODunzps45QZCU
4,457
First draft of the docs for TF + Datasets
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Some links are still missing I think :)", "This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestions feel free.", "Thanks ! We plan to make a new release later today for `to_tf_dataset` FYI, so I think we can merge it soon and include this documentation in the new release" ]
1,654,618,008,000
1,655,222,921,000
1,655,222,348,000
MEMBER
null
I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4457/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4457/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4457", "html_url": "https://github.com/huggingface/datasets/pull/4457", "diff_url": "https://github.com/huggingface/datasets/pull/4457.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4457.patch", "merged_at": 1655222348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4456/comments
https://api.github.com/repos/huggingface/datasets/issues/4456/events
https://github.com/huggingface/datasets/issues/4456
1,263,241,449
I_kwDODunzps5LS4jp
4,456
Workflow for Tabular data
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[]
1,654,606,102,000
1,654,703,156,000
null
MEMBER
null
Tabular data are treated very differently from data for NLP, audio, vision, etc., and therefore the workflow for tabular data in `datasets` is not ideal. For example for tabular data, it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model. In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y. Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data: - be able to load the data into X and y - be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.) - support "unsplit" datasets explicitly, instead of putting everything in "train" by default cc @adrinjalali @merveenoyan feel free to complete/correct this :) Feel free to also share ideas of APIs that would be super intuitive in your opinion !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4456/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/4456/timeline
null
null
null
null
false