| Column | Dtype | Stats |
|---|---|---|
| id | int64 | values 1.14B – 2.23B |
| labels_url | string | lengths 75 – 75 |
| body | string | lengths 2 – 33.9k |
| updated_at | string | lengths 20 – 20 |
| number | int64 | values 3.76k – 6.79k |
| milestone | dict | |
| repository_url | string | 1 value |
| draft | bool | 2 classes |
| labels | list | lengths 0 – 4 |
| created_at | string | lengths 20 – 20 |
| comments_url | string | lengths 70 – 70 |
| assignee | dict | |
| timeline_url | string | lengths 70 – 70 |
| title | string | lengths 1 – 290 |
| events_url | string | lengths 68 – 68 |
| active_lock_reason | null | |
| user | dict | |
| assignees | list | lengths 0 – 3 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| author_association | string | 3 values |
| closed_at | string | lengths 20 – 20 |
| pull_request | dict | |
| node_id | string | lengths 18 – 19 |
| comments | sequence | lengths 0 – 30 |
| reactions | dict | |
| state | string | 2 values |
| locked | bool | 1 class |
| url | string | lengths 61 – 61 |
| html_url | string | lengths 49 – 51 |
| is_pull_request | bool | 2 classes |
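The columns above are the `features` of a GitHub-issues dataset built with the `datasets` library. A minimal sketch of loading such a dataset and inspecting those features, assuming it has been pushed to the Hub; the repository id below is a placeholder, not the real one:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual dataset repo.
issues = load_dataset("username/github-issues", split="train")

print(issues.features)               # column names and dtypes, matching the table above
print(issues[0]["title"])            # e.g. "big_patent cased version"
print(issues[0]["is_pull_request"])  # bool distinguishing issues from pull requests
```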
1,162,702,044
https://api.github.com/repos/huggingface/datasets/issues/3861/labels{/name}
Hi! I am interested in working with the big_patent dataset. In TensorFlow, there are a number of versions of the dataset: - 1.0.0: lower-cased tokenized words - 2.0.0: update to use cased raw strings - 2.1.2 (default): fix of the update to cased raw strings. The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version?
2023-04-21T14:32:03Z
3,861
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
2022-03-08T14:08:55Z
https://api.github.com/repos/huggingface/datasets/issues/3861/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3861/timeline
big_patent cased version
https://api.github.com/repos/huggingface/datasets/issues/3861/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slvcsl", "id": 25265140, "login": "slvcsl", "node_id": "MDQ6VXNlcjI1MjY1MTQw", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "repos_url": "https://api.github.com/users/slvcsl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "type": "User", "url": "https://api.github.com/users/slvcsl" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2023-04-21T14:32:03Z
null
I_kwDODunzps5FTWzc
[ "To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.\r\n\r\nSee the paper describing the issue here:\r\nhttps://aclanthology.org/2022.gem-1.34/", "Thanks for proposing the addition of the cased version of this dataset and for pinging again recently.\r\n\r\nI have just merged a PR that adds the cased version: https://huggingface.co/datasets/big_patent/discussions/3\r\n\r\nThe cased version (2.1.2) is the default one:\r\n```python\r\nds = load_dataset(\"big_patent\", \"all\")\r\n```\r\n\r\nTo use the 1.0.0 version (lower cased tokenized words), pass both parameters `codes` and `version`:\r\n```python\r\nds = load_dataset(\"big_patent\", codes=\"all\", version=\"1.0.0\")\r\n```\r\n\r\nClosed by: https://huggingface.co/datasets/big_patent/discussions/3" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3861/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3861
https://github.com/huggingface/datasets/issues/3861
false
1,162,623,329
https://api.github.com/repos/huggingface/datasets/issues/3860/labels{/name}
null
2022-03-08T17:37:13Z
3,860
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T12:55:39Z
https://api.github.com/repos/huggingface/datasets/issues/3860/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3860/timeline
Small doc fixes
https://api.github.com/repos/huggingface/datasets/issues/3860/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
null
null
CONTRIBUTOR
2022-03-08T17:37:13Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3860.diff", "html_url": "https://github.com/huggingface/datasets/pull/3860", "merged_at": "2022-03-08T17:37:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3860.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3860" }
PR_kwDODunzps40GpzZ
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3860). All of your documentation changes will be reflected on that endpoint.", "There are still some `.. code-block:: python` (e.g. see [this](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping)) directives in our codebase, so maybe we can remove those as well as part of this PR." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3860/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3860
https://github.com/huggingface/datasets/pull/3860
true
1,162,559,333
https://api.github.com/repos/huggingface/datasets/issues/3859/labels{/name}
## Describe the bug I am trying to download some splits of the big_patent dataset, using the following code: `ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") ` However, this leads to a FileNotFoundError. FileNotFoundError Traceback (most recent call last) [<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset ----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") 8 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1705 ignore_verifications=ignore_verifications, 1706 try_from_hf_gcs=try_from_hf_gcs, -> 1707 use_auth_token=use_auth_token, 1708 ) 1709 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 593 if not downloaded_from_gcs: 594 self._download_and_prepare( --> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 596 ) 597 # Sync info [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 659 split_dict = SplitDict(dataset_name=self.name) 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 662 663 # Checksums verification [/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 123 split_types = ["train", "val", "test"] 124 extract_paths = dl_manager.extract( --> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types} 126 ) 127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types} [/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc) 282 download_config.extract_compressed_file = True 283 extracted_paths = map_nested( --> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False 285 ) 286 path_or_paths = NestedDataStructure(path_or_paths) [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm) 260 mapped = [ 261 _single_map_nested((function, obj, types, None, True)) --> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm) 263 ] 264 else: [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0) 260 mapped = [ 261 _single_map_nested((function, obj, types, None, True)) --> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm) 263 ] 264 else: [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args) 194 # Singleton first to spare some computation 195 if not isinstance(data_struct, dict) and not 
isinstance(data_struct, types): --> 196 return function(data_struct) 197 198 # Reduce logging to keep things readable in multiprocessing with tqdm [/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs) 314 elif is_local_path(url_or_filename): 315 # File, but it doesn't exist. --> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 317 else: 318 # Something unknown FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist I have tried this in a number of machines, including on Colab, so I think this is not environment dependent. How do I load the bigPatent dataset?
2022-03-08T13:04:09Z
3,859
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
2022-03-08T11:47:12Z
https://api.github.com/repos/huggingface/datasets/issues/3859/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3859/timeline
Unable to download big_patent (FileNotFoundError)
https://api.github.com/repos/huggingface/datasets/issues/3859/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slvcsl", "id": 25265140, "login": "slvcsl", "node_id": "MDQ6VXNlcjI1MjY1MTQw", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "repos_url": "https://api.github.com/users/slvcsl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "type": "User", "url": "https://api.github.com/users/slvcsl" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2022-03-08T13:04:04Z
null
I_kwDODunzps5FSz9l
[ "Hi @slvcsl, thanks for reporting.\r\n\r\nYesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.\r\nhttps://pypi.org/project/datasets/#history\r\n\r\nPlease, feel free to update `datasets` library to the latest version: \r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then you should force redownload of the data file to update your local cache: \r\n```python\r\nds = load_dataset(\"big_patent\", \"g\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\n- Note that before the fix, you just downloaded and cached the Google Drive virus scan warning page, instead of the data file\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe already fixed it. See:\r\n- #3787 \r\n" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3859/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3859
https://github.com/huggingface/datasets/issues/3859
false
1,162,526,688
https://api.github.com/repos/huggingface/datasets/issues/3858/labels{/name}
null
2022-03-08T12:57:57Z
3,858
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T11:11:52Z
https://api.github.com/repos/huggingface/datasets/issues/3858/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3858/timeline
Update index.mdx margins
https://api.github.com/repos/huggingface/datasets/issues/3858/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4", "events_url": "https://api.github.com/users/gary149/events{/privacy}", "followers_url": "https://api.github.com/users/gary149/followers", "following_url": "https://api.github.com/users/gary149/following{/other_user}", "gists_url": "https://api.github.com/users/gary149/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gary149", "id": 3841370, "login": "gary149", "node_id": "MDQ6VXNlcjM4NDEzNzA=", "organizations_url": "https://api.github.com/users/gary149/orgs", "received_events_url": "https://api.github.com/users/gary149/received_events", "repos_url": "https://api.github.com/users/gary149/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gary149/subscriptions", "type": "User", "url": "https://api.github.com/users/gary149" }
[]
null
null
CONTRIBUTOR
2022-03-08T12:57:56Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3858.diff", "html_url": "https://github.com/huggingface/datasets/pull/3858", "merged_at": "2022-03-08T12:57:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3858.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3858" }
PR_kwDODunzps40GVSq
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3858). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3858/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3858
https://github.com/huggingface/datasets/pull/3858
true
1,162,525,353
https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name}
## Describe the bug After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system. There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, even the streaming download manager (if I'm not mistaken): https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
2022-03-14T11:08:22Z
3,857
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
2022-03-08T11:10:30Z
https://api.github.com/repos/huggingface/datasets/issues/3857/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3857/timeline
Order of dataset changes due to glob.glob.
https://api.github.com/repos/huggingface/datasets/issues/3857/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
null
null
CONTRIBUTOR
null
null
I_kwDODunzps5FSrqp
[ "I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3857
https://github.com/huggingface/datasets/issues/3857
false
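Issue #3857 above recommends always wrapping `glob.glob(...)` in `sorted(...)` so that the file order does not depend on the operating system or filesystem. A minimal sketch of the pattern; the path pattern is a made-up example:

```python
import glob

# glob.glob() returns matches in arbitrary, filesystem-dependent order,
# so two machines may list the same files differently.
unordered_files = glob.glob("data/shard-*.jsonl")  # hypothetical file pattern

# Wrapping the call in sorted() makes the order deterministic and reproducible.
data_files = sorted(glob.glob("data/shard-*.jsonl"))
```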
1,162,522,034
https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name}
This code currently raises an error because of the null image: ```python import datasets dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] } features = datasets.Features({ 'name': datasets.Value('string'), 'image': datasets.Image(), }) dataset = datasets.Dataset.from_dict(dataset_dict, features) dataset.push_to_hub("username/dataset") # this line produces an error: 'NoneType' object is not subscriptable ``` I fixed this in this PR TODO: - [x] add a test
2022-03-08T15:22:17Z
3,856
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T11:07:09Z
https://api.github.com/repos/huggingface/datasets/issues/3856/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3856/timeline
Fix push_to_hub with null images
https://api.github.com/repos/huggingface/datasets/issues/3856/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-08T15:22:16Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3856.diff", "html_url": "https://github.com/huggingface/datasets/pull/3856", "merged_at": "2022-03-08T15:22:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3856.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3856" }
PR_kwDODunzps40GUSf
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3856
https://github.com/huggingface/datasets/pull/3856
true
1,162,448,589
https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name}
## Describe the bug A pretty common behavior of an interaction between the Hub and datasets is the following. An organization adds a dataset in private mode and wants to load it afterward. ```python from transformers import load_dataset ds = load_dataset("NewT5/dummy_data", "dummy") ``` This command then fails with: ```bash FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub ``` **even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org. We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO. ## Steps to reproduce the bug E.g. execute the following code to see the different error messages between `transformes` and `datasets`. 1. Transformers ```python from transformers import BertModel BertModel.from_pretrained("NewT5/dummy_model") ``` The error message is clearer here - it gives: ``` OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ``` Let's maybe do the same for datasets? The PR was introduced to `transformers` here: https://github.com/huggingface/transformers/pull/15261 ## Expected results Better error message ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4.dev0 - Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.1
2022-07-11T15:06:40Z
3,855
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-08T09:55:17Z
https://api.github.com/repos/huggingface/datasets/issues/3855/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3855/timeline
Bad error message when loading private dataset
https://api.github.com/repos/huggingface/datasets/issues/3855/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }, { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
completed
CONTRIBUTOR
2022-07-11T15:06:40Z
null
I_kwDODunzps5FSY7N
[ "We raise the error “ FileNotFoundError: can’t find the dataset” mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !", "Resolved via https://github.com/huggingface/datasets/pull/4536" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3855
https://github.com/huggingface/datasets/issues/3855
false
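For context on issue #3855 above: the requested error message is meant to point users to authenticated loading of private repositories. A sketch of that pattern, using the issue's placeholder repo id; `use_auth_token` is the parameter visible in the `load_dataset` signature quoted in the tracebacks above (later `datasets` releases renamed it to `token`):

```python
from datasets import load_dataset

# Log in first (e.g. `huggingface-cli login`) or pass a token string directly.
# "NewT5/dummy_data" is the placeholder private repository from the issue.
ds = load_dataset("NewT5/dummy_data", "dummy", use_auth_token=True)
```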
1,162,434,199
https://api.github.com/repos/huggingface/datasets/issues/3854/labels{/name}
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]') testing_data = load_dataset("common_voice", "en", split="test[:200]") I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this? **Typical Voice Accent Proportions:** - 24% United States English - 8% England English - 5% India and South Asia (India, Pakistan, Sri Lanka) - 3% Australian English - 3% Canadian English - 2% Scottish English - 1% Irish English - 1% Southern African (South Africa, Zimbabwe, Namibia) - 1% New Zealand English Can we replicate this for Age as well? **Age proportions of the common voice:-** - 24% 19 - 29 - 14% 30 - 39 - 10% 40 - 49 - 6% < 19 - 4% 50 - 59 - 4% 60 - 69 - 1% 70 – 79
2024-03-23T12:40:58Z
3,854
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
2022-03-08T09:40:52Z
https://api.github.com/repos/huggingface/datasets/issues/3854/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3854/timeline
load only England English dataset from common voice english dataset
https://api.github.com/repos/huggingface/datasets/issues/3854/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/36677001?v=4", "events_url": "https://api.github.com/users/amanjaiswal777/events{/privacy}", "followers_url": "https://api.github.com/users/amanjaiswal777/followers", "following_url": "https://api.github.com/users/amanjaiswal777/following{/other_user}", "gists_url": "https://api.github.com/users/amanjaiswal777/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amanjaiswal777", "id": 36677001, "login": "amanjaiswal777", "node_id": "MDQ6VXNlcjM2Njc3MDAx", "organizations_url": "https://api.github.com/users/amanjaiswal777/orgs", "received_events_url": "https://api.github.com/users/amanjaiswal777/received_events", "repos_url": "https://api.github.com/users/amanjaiswal777/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amanjaiswal777/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amanjaiswal777/subscriptions", "type": "User", "url": "https://api.github.com/users/amanjaiswal777" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2022-03-09T08:13:33Z
null
I_kwDODunzps5FSVaX
[ "Hi @amanjaiswal777,\r\n\r\nFirst note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.\r\n\r\nCurrently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation\r\n\r\nFor example, to get their latest Common Voice relase (8.0):\r\n- Go to the dataset page and request access permission (Mozilla Foundation requires this for people willing to use their datasets): https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0\r\n- Looking at the dataset card, you can check that data instances have, among other fields, the ones you are interested in: \"accent\", \"age\",... \r\n- Then you can load their \"en\" language dataset as usual, besides passing your authentication token (more info on auth token here: https://huggingface.co/docs/hub/security)\r\n ```python\r\n from datasets import load_dataset\r\n ds_en = load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True)\r\n ```\r\n- Finally, you can filter only the data instances you are interested in (more info on `filter` here: https://huggingface.co/docs/datasets/process#select-and-filter):\r\n ```python\r\n ds_england_en = ds_en.filter(lambda item: item[\"accent\"] == \"England English\")\r\n ```\r\n\r\nFeel free to reopen this issue if you need further assistance.", "Hey @albertvillanova trying the same approach as you with the common_voice_16_1 dataset. What I'm trying to do is to filter the valencian accent in the catalan subset. Gave me this error and I have everything it asks for decoding mp3:\r\n![image](https://github.com/huggingface/datasets/assets/96977715/7ec02483-e728-4358-9372-ba74ec1b7fd4)\r\n\r\n![image](https://github.com/huggingface/datasets/assets/96977715/c10fcf23-a141-4dba-a88d-89e293acfe67)\r\n\r\n" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3854/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3854
https://github.com/huggingface/datasets/issues/3854
false
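The accent filter shown in the reply to #3854 above applies just as well to the age field mentioned by the asker. A sketch, assuming the English split has already been loaded as `ds_en` as in that reply; the exact label strings stored in the `age` column (e.g. "twenties") are an assumption and should be checked against the dataset card:

```python
# Same filter pattern as for accent, applied to the age field.
# The label string "twenties" is assumed; verify it on the dataset card.
ds_en_twenties = ds_en.filter(lambda item: item["age"] == "twenties")

# Conditions can be combined, e.g. England English speakers in their twenties.
ds_subset = ds_en.filter(
    lambda item: item["accent"] == "England English" and item["age"] == "twenties"
)
```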
1,162,386,592
https://api.github.com/repos/huggingface/datasets/issues/3853/labels{/name}
# Introduction of the dataset OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task; it includes the v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected v12 train/dev/test data (English only). This dataset is widely used in named entity recognition, coreference resolution, and semantic role labeling. In the dataset loading script, I adapt the code of [AllenNLP/Ontonotes](https://docs.allennlp.org/models/main/models/common/ontonotes/#ontonotes) to read the special CoNLL files without adding an extra package dependency. # Some workarounds I did 1. task ids I add tasks that I can't find anywhere (`semantic-role-labeling`, `lemmatization`, and `word-sense-disambiguation`) to the task category `structure-prediction`, because they are related to "syntax". I feel there may be a better name for the task category, since some of the tasks mentioned aren't related to structure, but I have no good idea. 2. `dl_manager.extract` Since we get another zip after unzipping the downloaded zip data, I have to call `dl_manager.extract` directly inside `_generate_examples`. But when testing dummy data, `dl_manager.extract` does nothing, so I add a conditional that manually extracts the data when testing dummy data. # Help I don't know how to fix the doc building error.
2022-03-15T10:48:02Z
3,853
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T08:53:42Z
https://api.github.com/repos/huggingface/datasets/issues/3853/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3853/timeline
add ontonotes_conll dataset
https://api.github.com/repos/huggingface/datasets/issues/3853/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
null
null
CONTRIBUTOR
2022-03-15T10:48:02Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3853.diff", "html_url": "https://github.com/huggingface/datasets/pull/3853", "merged_at": "2022-03-15T10:48:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3853.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3853" }
PR_kwDODunzps40F3uN
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3853). All of your documentation changes will be reflected on that endpoint.", "The CI fail is unrelated to this dataset, merging :)" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3853/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3853
https://github.com/huggingface/datasets/pull/3853
true
1,162,252,337
https://api.github.com/repos/huggingface/datasets/issues/3852/labels{/name}
> Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation. The "add a dataset" link gives a 404 error, and the share_dataset link has changed. I feel this information is redundant/deprecated now since we have a more detailed guide for "How to add a dataset?".
2022-03-08T16:54:36Z
3,852
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T05:57:05Z
https://api.github.com/repos/huggingface/datasets/issues/3852/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3852/timeline
Redundant add dataset information and dead link.
https://api.github.com/repos/huggingface/datasets/issues/3852/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
null
null
CONTRIBUTOR
2022-03-08T16:54:36Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3852.diff", "html_url": "https://github.com/huggingface/datasets/pull/3852", "merged_at": "2022-03-08T16:54:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3852.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3852" }
PR_kwDODunzps40Fb26
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3852). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3852/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3852
https://github.com/huggingface/datasets/pull/3852
true
1,162,137,998
https://api.github.com/repos/huggingface/datasets/issues/3851/labels{/name}
## Load audio dataset error Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb, ``` from datasets import load_dataset, load_metric, Audio raw_datasets = load_dataset("superb", "ks", split="train") print(raw_datasets[0]["audio"]) ``` following errors occur ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-169-3f8253239fa0> in <module> ----> 1 raw_datasets[0]["audio"] /usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1925 return self._getitem( -> 1926 key, 1927 ) 1928 /usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1910 formatted_output = format_table( -> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1912 ) 1913 return formatted_output /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row 314 /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row) 219 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row 222 223 def decode_column(self, column: list, column_name: str) -> list: /usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example) 1320 else value 1321 for column_name, (feature, value) in utils.zip_dict( -> 1322 {key: value for key, value in self.items() if key in example}, example 1323 ) 1324 } /usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0) 1319 if self._column_requires_decoding[column_name] 1320 else value -> 1321 for column_name, (feature, value) in utils.zip_dict( 1322 {key: value for key, value in self.items() if key in example}, example 1323 ) /usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj) 1053 # Object with special decoding: 1054 elif isinstance(schema, (Audio, Image)): -> 1055 return schema.decode_example(obj) if obj is not None else None 1056 return obj 1057 /usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 100 array, sampling_rate = self._decode_non_mp3_file_like(file) 101 else: --> 102 array, sampling_rate = self._decode_non_mp3_path_like(path) 103 return 
{"path": path, "array": array, "sampling_rate": sampling_rate} 104 /usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path) 143 144 with xopen(path, "rb") as f: --> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 146 return array, sampling_rate 147 /usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type) 110 111 y = [] --> 112 with audioread.audio_open(os.path.realpath(path)) as input_file: 113 sr_native = input_file.samplerate 114 n_channels = input_file.channels /usr/lib/python3.6/posixpath.py in realpath(filename) 392 """Return the canonical path of the specified filename, eliminating any 393 symbolic links encountered in the path.""" --> 394 filename = os.fspath(filename) 395 path, ok = _joinrealpath(filename[:0], filename, {}) 396 return abspath(path) TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader ``` ## Expected results ``` >>> raw_datasets[0]["audio"] {'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347, 0.01623535, 0.01724243]), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav', 'sampling_rate': 16000} ```
2022-09-27T12:13:55Z
3,851
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-08T02:16:04Z
https://api.github.com/repos/huggingface/datasets/issues/3851/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3851/timeline
Load audio dataset error
https://api.github.com/repos/huggingface/datasets/issues/3851/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/31890987?v=4", "events_url": "https://api.github.com/users/lemoner20/events{/privacy}", "followers_url": "https://api.github.com/users/lemoner20/followers", "following_url": "https://api.github.com/users/lemoner20/following{/other_user}", "gists_url": "https://api.github.com/users/lemoner20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lemoner20", "id": 31890987, "login": "lemoner20", "node_id": "MDQ6VXNlcjMxODkwOTg3", "organizations_url": "https://api.github.com/users/lemoner20/orgs", "received_events_url": "https://api.github.com/users/lemoner20/received_events", "repos_url": "https://api.github.com/users/lemoner20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lemoner20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lemoner20/subscriptions", "type": "User", "url": "https://api.github.com/users/lemoner20" }
[]
null
completed
NONE
2022-03-08T11:20:06Z
null
I_kwDODunzps5FRNGO
[ "Hi @lemoner20, thanks for reporting.\r\n\r\nI'm sorry but I cannot reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset, load_metric, Audio\r\n ...: raw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\")\r\n ...: print(raw_datasets[0][\"audio\"])\r\nDownloading builder script: 30.2kB [00:00, 13.0MB/s] \r\nDownloading metadata: 38.0kB [00:00, 16.6MB/s] \r\nDownloading and preparing dataset superb/ks (download: 1.45 GiB, generated: 9.64 MiB, post-processed: Unknown size, total: 1.46 GiB) to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.49G/1.49G [00:37<00:00, 39.3MB/s]\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.3M/71.3M [00:01<00:00, 36.1MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:41<00:00, 20.67s/it]\r\nExtracting data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:28<00:00, 14.24s/it]\r\nDataset superb downloaded and prepared to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9. Subsequent calls will reuse this data.\r\n{'path': '.../.cache/huggingface/datasets/downloads/extracted/8571921d3088b48f58f75b2e514815033e1ffbd06aa63fd4603691ac9f1c119f/_background_noise_/doing_the_dishes.wav', 'array': array([ 0. , 0. , 0. , ..., -0.00592041,\r\n -0.00405884, -0.00253296], dtype=float32), 'sampling_rate': 16000}\r\n``` \r\n\r\nWhich version of `datasets` are you using? Could you please fill in the environment info requested in the bug report template? You can run the command `datasets-cli env` and copy-and-paste its output below\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:", "@albertvillanova Thanks for your reply. The environment info below\r\n\r\n## Environment info\r\n- `datasets` version: 1.18.3\r\n- Platform: Linux-4.19.91-007.ali4000.alios7.x86_64-x86_64-with-debian-buster-sid\r\n- Python version: 3.6.12\r\n- PyArrow version: 6.0.1", "Thanks @lemoner20,\r\n\r\nI cannot reproduce your issue in datasets version 1.18.3 either.\r\n\r\nMaybe redownloading the data file may work if you had already cached this dataset previously. Could you please try passing \"force_redownload\"?\r\n```python\r\nraw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\", download_mode=\"force_redownload\")", "Thanks, @albertvillanova,\r\n\r\nI install the python package of **librosa=0.9.1** again, it works now!\r\n\r\n\r\n", "Cool!", "@albertvillanova, you can actually reproduce the error if you reach the cell `common_voice_train[0][\"path\"]` of this [notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=_0kRndSvqaKk). 
Error gets solved after updating the versions of the libraries used in there.", "@jvel07, thanks for reporting and finding a solution.\r\n\r\nMaybe we could tell @patrickvonplaten about the version pinning issue in his notebook.", "Should I update the version of datasets @albertvillanova ? " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3851/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3851
https://github.com/huggingface/datasets/issues/3851
false
1,162,126,030
https://api.github.com/repos/huggingface/datasets/issues/3850/labels{/name}
In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible.
2022-12-16T05:34:07Z
3,850
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T01:53:25Z
https://api.github.com/repos/huggingface/datasets/issues/3850/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3850/timeline
[feat] Add tqdm arguments
https://api.github.com/repos/huggingface/datasets/issues/3850/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4", "events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}", "followers_url": "https://api.github.com/users/penguinwang96825/followers", "following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}", "gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/penguinwang96825", "id": 28087825, "login": "penguinwang96825", "node_id": "MDQ6VXNlcjI4MDg3ODI1", "organizations_url": "https://api.github.com/users/penguinwang96825/orgs", "received_events_url": "https://api.github.com/users/penguinwang96825/received_events", "repos_url": "https://api.github.com/users/penguinwang96825/repos", "site_admin": false, "starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions", "type": "User", "url": "https://api.github.com/users/penguinwang96825" }
[]
null
null
NONE
2022-12-16T05:34:07Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3850.diff", "html_url": "https://github.com/huggingface/datasets/pull/3850", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3850" }
PR_kwDODunzps40FBx9
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3850/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3850
https://github.com/huggingface/datasets/pull/3850
true
1,162,091,075
https://api.github.com/repos/huggingface/datasets/issues/3849/labels{/name}
Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/ ```python >>> import datasets >>> >>> datasets.load_dataset('adv_glue') Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0ebe815d0cdba5cb4 (last modified on Mon Mar 7 19:19:48 2022) since it couldn't be found locally at adv_glue., or remotely on the Hugging Face Hub. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jxm3/random/datasets/src/datasets/load.py", line 1657, in load_dataset builder_instance = load_dataset_builder( File "/home/jxm3/random/datasets/src/datasets/load.py", line 1510, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/jxm3/random/datasets/src/datasets/builder.py", line 1021, in __init__ super().__init__(*args, **kwargs) File "/home/jxm3/random/datasets/src/datasets/builder.py", line 258, in __init__ self.config, self.config_id = self._create_builder_config( File "/home/jxm3/random/datasets/src/datasets/builder.py", line 337, in _create_builder_config raise ValueError( ValueError: Config name is missing. Please pick one among the available configs: ['adv_sst2', 'adv_qqp', 'adv_mnli', 'adv_mnli_mismatched', 'adv_qnli', 'adv_rte'] Example of usage: `load_dataset('adv_glue', 'adv_sst2')` >>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0] Reusing dataset adv_glue (/home/jxm3/.cache/huggingface/datasets/adv_glue/adv_sst2/1.0.0/3719a903f606f2c96654d87b421bc01114c37084057cdccae65cd7bc24b10933) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 604.11it/s] {'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0} ```
2022-03-28T11:17:14Z
3,849
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-08T00:47:11Z
https://api.github.com/repos/huggingface/datasets/issues/3849/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3849/timeline
Add "Adversarial GLUE" dataset to datasets library
https://api.github.com/repos/huggingface/datasets/issues/3849/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
null
null
CONTRIBUTOR
2022-03-28T11:12:04Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3849.diff", "html_url": "https://github.com/huggingface/datasets/pull/3849", "merged_at": "2022-03-28T11:12:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3849.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3849" }
PR_kwDODunzps40E6sW
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq can you review when you have some time?", "Hi @lhoestq -- thanks so much for your review! I just added the stuff you requested to the README.md, including an example from the dataset, the table of contents, and lots of section headers with \"More Information Needed\" below. Let me know if there's anything else I need to do!", "Feel free to also merge `master` into your branch to get the latest updates for the tests ;)", "thanks @lhoestq - just made all the updates you requested!" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3849/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3849
https://github.com/huggingface/datasets/pull/3849
true
1,162,076,902
https://api.github.com/repos/huggingface/datasets/issues/3848/labels{/name}
I ran into the following error when adding a new dataset: ```bash expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}} recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c6425dd74e5b55f2f325c9', 'num_bytes': 40662}} verification_name = 'dataset source files' def verify_checksums(expected_checksums: Optional[dict], recorded_checksums: dict, verification_name=None): if expected_checksums is None: logger.info("Unable to verify checksums.") return if len(set(expected_checksums) - set(recorded_checksums)) > 0: raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) if len(set(recorded_checksums) - set(expected_checksums)) > 0: raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] for_verification_name = " for " + verification_name if verification_name is not None else "" if len(bad_urls) > 0: error_msg = "Checksums didn't match" + for_verification_name + ":\n" > raise NonMatchingChecksumError(error_msg + str(bad_urls)) E datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: E ['https://adversarialglue.github.io/dataset/dev.zip'] src/datasets/utils/info_utils.py:40: NonMatchingChecksumError ``` ## Expected results The dataset downloads correctly, and there is no error. ## Actual results Datasets library is looking for a checksum of None, and it gets a non-None checksum, and throws an error. This is clearly a bug.
2022-03-15T14:37:26Z
3,848
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-08T00:24:12Z
https://api.github.com/repos/huggingface/datasets/issues/3848/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3848/timeline
NonMatchingChecksumError when checksum is None
https://api.github.com/repos/huggingface/datasets/issues/3848/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
CONTRIBUTOR
2022-03-15T12:28:23Z
null
I_kwDODunzps5FQ-Lm
[ "Hi @jxmorris12, thanks for reporting.\r\n\r\nThe objective of `verify_checksums` is to check that both checksums are equal. Therefore if one is None and the other is non-None, they are not equal, and the function accordingly raises a NonMatchingChecksumError. That behavior is expected.\r\n\r\nThe question is: how did you generate the expected checksum? Normally, it should not be None. To properly generate it (it is contained in the `dataset_infos.json` file), you should have runned: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md\r\n```shell\r\ndatasets-cli test <your-dataset-folder> --save_infos --all_configs\r\n```\r\n\r\nOn the other hand, you should take into account that the generation of this file is NOT mandatory for personal/community datasets (we only require it for \"canonical\" datasets, i.e., datasets added to our library GitHub repository: https://github.com/huggingface/datasets/tree/master/datasets). Therefore, other option would be just to delete the `dataset_infos.json` file. If that file is not present, the function `verify_checksums` is not executed.\r\n\r\nFinally, you can circumvent the `verify_checksums` function by passing `ignore_verifications=True` to `load_dataset`:\r\n```python\r\nload_dataset(..., ignore_verifications=True)\r\n``` ", "Thanks @albertvillanova!\r\n\r\nThat's fine. I did run that command when I was adding a new dataset. Maybe because the command crashed in the middle, the checksum wasn't stored properly. I don't know where the bug is happening. But either (i) `verify_checksums` should properly handle this edge case, where the passed checksum is None or (ii) the `datasets-cli test` shouldn't generate a corrupted dataset_infos.json file.\r\n\r\nJust a more high-level thing, I was trying to follow the instructions for adding a dataset in the CONTRIBUTING.md, so if running that command isn't even necessary, that should probably be mentioned in the document, right? But that's somewhat of a moot point, since something isn't working quite right internally if I was able to get into this corrupted state in the first place, just by following those instructions.", "Hi @jxmorris12,\r\n\r\nDefinitely, your `dataset_infos.json` was corrupted (and wrongly contains expected None checksum). \r\n\r\nWhile we further investigate how this can happen and fix it, feel free to delete your `dataset_infos.json` file and recreate it with:\r\n```shell\r\ndatasets-cli test <your-dataset-folder> --save_infos --all_configs\r\n```\r\n\r\nAlso note that `verify_checksum` is working as expected: if it receives a None and and a non-None checksums as input pair, it must raise an exception: they are not equal. That is not a bug.", "At a higher level, also note that we are preparing the release of `datasets` version 2.0, and some docs are being updated...\r\n\r\nIn order to add a dataset, I think the most updated instructions are in our official documentation pages: https://huggingface.co/docs/datasets/share", "Thanks for the info. Maybe you can update the contributing.md if it's not up-to-date.", "Hi @jxmorris12, we have discovered the bug why `None` checksums wrongly appeared when generating the `dataset_infos.json` file:\r\n- #3892\r\n\r\nThe fix will be accessible once this PR merged. And we are planning to do our 2.0 release today.\r\n\r\nWe are also working on updating all our docs for our release today.", "Thanks @albertvillanova - congrats on the release!" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3848/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3848
https://github.com/huggingface/datasets/issues/3848
false
1,161,856,417
https://api.github.com/repos/huggingface/datasets/issues/3847/labels{/name}
## Describe the bug For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory. ## Steps to reproduce the bug Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example. ```python from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") # tokenizer = AutoTokenizer.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("roberta-base") text_column_name = "text" column_names = raw_datasets["train"].column_names def tokenize_function(examples): return tokenizer(examples[text_column_name], return_special_tokens_mask=True) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on every text in dataset", ) ``` ## Expected results No tokenization would be required after the 1st run. Everything should be loaded from the cache. ## Actual results Tokenization for some subsets are repeated at the 2nd and 3rd run. Starting from the 4th run, everything are loaded from cache. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 18.04.6 LTS - Python version: 3.6.9 - PyArrow version: 6.0.1
2023-11-20T18:14:37Z
3,847
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-07T19:55:15Z
https://api.github.com/repos/huggingface/datasets/issues/3847/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3847/timeline
Datasets' cache not re-used
https://api.github.com/repos/huggingface/datasets/issues/3847/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/15106980?v=4", "events_url": "https://api.github.com/users/gejinchen/events{/privacy}", "followers_url": "https://api.github.com/users/gejinchen/followers", "following_url": "https://api.github.com/users/gejinchen/following{/other_user}", "gists_url": "https://api.github.com/users/gejinchen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gejinchen", "id": 15106980, "login": "gejinchen", "node_id": "MDQ6VXNlcjE1MTA2OTgw", "organizations_url": "https://api.github.com/users/gejinchen/orgs", "received_events_url": "https://api.github.com/users/gejinchen/received_events", "repos_url": "https://api.github.com/users/gejinchen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gejinchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gejinchen/subscriptions", "type": "User", "url": "https://api.github.com/users/gejinchen" }
[]
null
reopened
NONE
null
null
I_kwDODunzps5FQIWh
[ "<s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.\r\n\r\nTo fix this we can try making the order of the splits deterministic for map.</s>", "Actually this is not because of the order of the splits, but most likely because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer).\r\n\r\nThis is a bit trickier to fix, we can explore fixing this next week maybe", "Sorry didn't have the bandwidth to take care of this yet - will re-assign when I'm diving into it again !", "I had this issue with `run_speech_recognition_ctc.py` for wa2vec2.0 fine-tuning. I made a small change and the hash for the function (which includes tokenisation) is now the same before and after pre-porocessing. With the hash being the same, the caching works as intended.\r\n\r\nBefore:\r\n```\r\n def prepare_dataset(batch):\r\n # load audio\r\n sample = batch[audio_column_name]\r\n\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n # encode targets\r\n additional_kwargs = {}\r\n if phoneme_language is not None:\r\n additional_kwargs[\"phonemizer_lang\"] = phoneme_language\r\n\r\n batch[\"labels\"] = tokenizer(batch[\"target_text\"], **additional_kwargs).input_ids\r\n\r\n return batch\r\n\r\n with training_args.main_process_first(desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n prepare_dataset,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n```\r\nAfter:\r\n```\r\n def prepare_dataset(batch, feature_extractor, tokenizer):\r\n # load audio\r\n sample = batch[audio_column_name]\r\n\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n # encode targets\r\n additional_kwargs = {}\r\n if phoneme_language is not None:\r\n additional_kwargs[\"phonemizer_lang\"] = phoneme_language\r\n\r\n batch[\"labels\"] = tokenizer(batch[\"target_text\"], **additional_kwargs).input_ids\r\n\r\n return batch\r\n\r\n pd = lambda batch: prepare_dataset(batch, feature_extractor, tokenizer)\r\n\r\n with training_args.main_process_first(desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n pd,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n```", "Not sure why the second one would work and not the first one - they're basically the same with respect to hashing. In both cases the function is hashed recursively, and therefore the feature_extractor and the tokenizer are hashed the same way.\r\n\r\nWith which tokenizer or feature extractor are you experiencing this behavior ?\r\n\r\nDo you also experience this ?\r\n> Tokenization for some subsets are repeated at the 2nd and 3rd run. Starting from the 4th run, everything are loaded from cache.", "Thanks ! 
Hopefully this can be useful to others, and also to better understand and improve hashing/caching ", "`tokenizer.save_pretrained(training_args.output_dir)` produces a different tokenizer hash when loaded on restart of the script. When I was debugging before I was terminating the script prior to this command, then rerunning. \r\n\r\nI compared the tokenizer items on the first and second runs, there are two different items:\r\n1st:\r\n```\r\n('_additional_special_tokens', [AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True)])\r\n\r\n...\r\n\r\n('tokens_trie', <transformers.tokenization_utils.Trie object at 0x7f4d6d0ddb38>)\r\n```\r\n\r\n2nd:\r\n```\r\n('_additional_special_tokens', [AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 
AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True)])\r\n\r\n...\r\n\r\n('tokens_trie', <transformers.tokenization_utils.Trie object at 0x7efc23dcce80>)\r\n```\r\n\r\n On every run of this the special tokens are being added on, and the hash is different on the `tokens_trie`. The increase in the special tokens category could be cleaned, but not sure about the hash for the `tokens_trie`. What might work is that the call for the tokenizer encoding can be translated into a function that strips any unnecessary information out, but that's a guess.\r\n", "Thanks for investigating ! Does that mean that `save_pretrained`() produces non-deterministic tokenizers on disk ? Or is it `from_pretrained()` which is not deterministic given the same files on disk ?\r\n\r\nI think one way to fix this would be to make save/from_pretrained deterministic, or make the pickling of `transformers.tokenization_utils.Trie` objects deterministic (this could be implemented in `transformers`, but maybe let's discuss in an issue in `transformers` before opening a PR)", "Late to the party but everything should be deterministic (afaik at least).\r\n\r\nBut `Trie` is a simple class object, so afaik it's hash function is linked to its `id(self)` so basically where it's stored in memory, so super highly non deterministic. Could that be the issue ?", "> But Trie is a simple class object, so afaik it's hash function is linked to its id(self) so basically where it's stored in memory, so super highly non deterministic. Could that be the issue ?\r\n\r\nWe're computing the hash of the pickle dump of the class so it should be fine, as long as the pickle dump is deterministic", "I've ported wav2vec2.0 fine-tuning into Optimum-Graphcore which is where I found the issue. The majority of the script was copied from the Transformers version to keep it similar, [here is the tokenizer loading section from the source](https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L531).\r\n\r\nIn the last comment I have two loaded tokenizers, one from run 'N' of the script and one from 'N+1'. I think what's happening is that when you add special tokens (e.g. PAD and UNK) another AddedToken object is appended when tokenizer is saved regardless of whether special tokens are there already. \r\n\r\nIf there is a AddedTokens cleanup at load/save this could solve the issue, but then is Trie going to cause hash to be different? I'm not sure. ", "Which Python version are you using ?\r\n\r\nThe trie is basically a big dict of dics, so deterministic nature depends on python version:\r\nhttps://stackoverflow.com/questions/2053021/is-the-order-of-a-python-dictionary-guaranteed-over-iterations\r\n\r\nMaybe the investigation is actually not finding the right culprit though (the memory id is changed, but `datasets` is not using that to compare, so maybe we need to be looking within `datasets` so see where the comparison fails)", "Similar issue found on `BartTokenizer`. 
You can bypass the bug by loading a fresh new tokenizer everytime.\r\n\r\n```\r\n dataset = dataset.map(lambda x: tokenize_func(x, BartTokenizer.from_pretrained(xxx)),\r\n num_proc=num_proc, desc='Tokenize')\r\n```", "Linking in https://github.com/huggingface/datasets/issues/6179#issuecomment-1701244673 with an explanation.", "I got the same problem while using Wav2Vec2CTCTokenizer in a distributed experiment (many processes), and found that the problem was localized in the serialization (pickle dump) of the field `tokenizer.tokens_trie._tokens` (just a python set). I focussed into the set serialization and found it is not deterministic:\r\n\r\n```\r\nfrom datasets.fingerprint import Hasher\r\nfrom pickle import dumps,loads\r\n\r\n# used just once to get a serialized literal\r\n#print(dumps(set(\"abc\")))\r\nserialized = b'\\x80\\x04\\x95\\x11\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8f\\x94(\\x8c\\x01a\\x94\\x8c\\x01c\\x94\\x8c\\x01b\\x94\\x90.'\r\n\r\nmyset = loads(serialized)\r\nprint(f'{myset=} {Hasher.hash(myset)}')\r\nprint(serialized == dumps(myset))\r\n```\r\n\r\nEvery time you run the python script (different processes) you get a random result. @lhoestq does it make any sense?", "OK, I assume python's set is just a hash table implementation that uses internally the hash() function. The problem is that python's hash() is not deterministic. I believe that setting the environment variable PYTHONHASHSEED to a fixed value, you can force it to be deterministic. I tried it (file `set_pickle_dump.py`):\r\n\r\n```\r\n#!/usr/bin/python3\r\n\r\nfrom datasets.fingerprint import Hasher\r\nfrom pickle import dumps,loads\r\n\r\n# used just once to get a serialized literal (with environment variable PYTHONHASHSEED set to 42)\r\n#print(dumps(set(\"abc\")))\r\nserialized = b'\\x80\\x04\\x95\\x11\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8f\\x94(\\x8c\\x01b\\x94\\x8c\\x01c\\x94\\x8c\\x01a\\x94\\x90.'\r\n\r\nmyset = loads(serialized)\r\nprint(f'{myset=} {Hasher.hash(myset)}')\r\nprint(serialized == dumps(myset))\r\n```\r\n\r\nand now every run (`PYTHONHASHSEED=42 ./set_pickle_dump.py`) gets tthe same result. I tried then to test it with the tokenizer (file `test_tokenizer.py`):\r\n\r\n```\r\n#!/usr/bin/python3\r\nfrom transformers import AutoTokenizer\r\nfrom datasets.fingerprint import Hasher\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('model')\r\nprint(f'{type(tokenizer)=}')\r\nprint(f'{Hasher.hash(tokenizer)=}')\r\n```\r\n\r\nexecuted as `PYTHONHASHSEED=42 ./test_tokenizer.py` and now the tokenizer fingerprint is allways the same!\r\n", "Thanks for reporting. I opened a PR here to propose a fix: https://github.com/huggingface/datasets/pull/6318 and doesn't require setting `PYTHONHASHSEED`\r\n\r\nCan you try to install `datasets` from this branch and tell me if it fixes the issue ?", "I patched (*) the file `datasets/utils/py_utils.py` and cache is working propperly now. 
Thanks!\r\n\r\n(*): I am running my experiments inside a docker container that depends on `huggingface/transformers-pytorch-gpu:latest`, so pattched the file instead of rebuilding the container from scratch", "Fixed by #6318.", "The OP issue hasn't been fixed, re-opening", "I think the Trie()._tokens of PreTrainedTokenizer need to be a sorted set So that the results of `hash_bytes(dumps(tokenizer))` are consistent every time", "I believe the issue may be linked to [tokenization_utils.py#L507](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L507),specifically in the line where self.tokens_trie.add(token.content) is called. The function _update_trie appears to modify an unordered set. Consequently, this line:\r\n`value = hash_bytes(dumps(tokenizer.tokens_trie._tokens))`\r\ncan lead to inconsistencies when rerunning the code.\r\n\r\nThis, in turn, results in inconsistent outputs for both `hash_bytes(dumps(function))` at [arrow_dataset.py#L3053](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L3053) and\r\n`hasher.update(transform_args[key])` at [fingerprint.py#L323](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L323)\r\n\r\n```\r\ndataset_kwargs = {\r\n \"shard\": raw_datasets,\r\n \"function\": tokenize_function,\r\n}\r\ntransform = format_transform_for_fingerprint(Dataset._map_single)\r\nkwargs_for_fingerprint = format_kwargs_for_fingerprint(Dataset._map_single, (), dataset_kwargs)\r\nkwargs_for_fingerprint[\"fingerprint_name\"] = \"new_fingerprint\"\r\nnew_fingerprint = update_fingerprint(raw_datasets._fingerprint, transform, kwargs_for_fingerprint)\r\n```\r\n", "Alternatively, does the \"dumps\" function require separate processing for the set?", "We did a fix that does sorting whenever we hash sets. The fix is available on `main` if you want to try it out. We'll do a new release soon :)", "Is there a documentation chapter that discusses in which cases you should expect your dataset preprocessing to be cached. Including do's and don'ts for the preprocessing functions? I think Datasets team does amazing job at tacking this issue on their side, but it would be great to have some guidelines on the user side as well.\r\n\r\nIn our current project we have two cases (text-to-text classification and summarization) and in one of them the cache is sometimes reused when it's not supposed to be reused while in the other it's never used at all 😅", "You can find some docs here :) \r\nhttps://huggingface.co/docs/datasets/about_cache" ]
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3847/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3847
https://github.com/huggingface/datasets/issues/3847
false
1,161,810,226
https://api.github.com/repos/huggingface/datasets/issues/3846/labels{/name}
Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset`
2022-03-07T19:21:23Z
3,846
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T19:06:59Z
https://api.github.com/repos/huggingface/datasets/issues/3846/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3846/timeline
Update faiss device docstring
https://api.github.com/repos/huggingface/datasets/issues/3846/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-07T19:21:22Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3846.diff", "html_url": "https://github.com/huggingface/datasets/pull/3846", "merged_at": "2022-03-07T19:21:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/3846.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3846" }
PR_kwDODunzps40D-uh
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3846). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3846/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3846
https://github.com/huggingface/datasets/pull/3846
true
1,161,739,483
https://api.github.com/repos/huggingface/datasets/issues/3845/labels{/name}
This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API. Both implementations are based on usage of sciket-learn. Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Please suggest any changes if required. Thank you.
2022-03-09T16:50:03Z
3,845
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T17:53:24Z
https://api.github.com/repos/huggingface/datasets/issues/3845/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3845/timeline
add RMSE and MAE metrics.
https://api.github.com/repos/huggingface/datasets/issues/3845/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
null
null
CONTRIBUTOR
2022-03-09T16:50:03Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3845.diff", "html_url": "https://github.com/huggingface/datasets/pull/3845", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3845" }
PR_kwDODunzps40DvqX
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3845). All of your documentation changes will be reflected on that endpoint.", "@mariosasko I've reopened it here. Please suggest any changes if required. Thank you.", "Thanks for suggestions. :) I have added update the KWARGS_DESCRIPTION for the missing params and also changed RMSE to MSE.\r\nWhile testing, I noticed that when the input is a list of lists, we get an error :\r\n`TypeError: float() argument must be a string or a number, not 'list'`\r\nCould you suggest the datasets.Value() attribute to support both list of floats and list of lists containing floats ?\r\n", "Just add a new config to cover that case. You can do this by replacing the current `features` dict with:\r\n```python\r\nfeatures=datasets.Features(\r\n {\r\n \"predictions\": datasets.Sequence(datasets.Value(\"float\")),\r\n \"references\": datasets.Sequence(datasets.Value(\"float\")),\r\n }\r\n if self.config_name == \"multioutput\"\r\n else {\r\n \"predictions\": datasets.Value(\"float\"),\r\n \"references\": datasets.Value(\"float\"),\r\n }\r\n),\r\n```\r\nFeel free to suggest a better name for the config than `multioutput`", "Also, could you please move the changes to a new branch and open a PR from there (for the 3rd time 😄) because the diff shows changes from unrelated PRs (maybe due to rebasing?).", "Thanks for the input, I have added new config to support multi-dimensional lists and updated the examples as well.\r\n\r\nSure. Will do that and open a new PR for these changes." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3845/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3845
https://github.com/huggingface/datasets/pull/3845
true
1,161,686,754
https://api.github.com/repos/huggingface/datasets/issues/3844/labels{/name}
This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API. Both implementations are based on usage of sciket-learn. Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Any suggestions and changes required will be helpful.
2022-03-07T17:24:32Z
3,844
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T17:06:38Z
https://api.github.com/repos/huggingface/datasets/issues/3844/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3844/timeline
Add rmse and mae metrics.
https://api.github.com/repos/huggingface/datasets/issues/3844/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
null
null
CONTRIBUTOR
2022-03-07T17:15:06Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3844.diff", "html_url": "https://github.com/huggingface/datasets/pull/3844", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3844.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3844" }
PR_kwDODunzps40DkYL
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3844). All of your documentation changes will be reflected on that endpoint.", "@dnaveenr This PR is in pretty good shape, so feel free to reopen it." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3844/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3844
https://github.com/huggingface/datasets/pull/3844
true
1,161,397,812
https://api.github.com/repos/huggingface/datasets/issues/3843/labels{/name}
The streaming version of https://github.com/huggingface/datasets/pull/3787. Fix #3835 CC: @albertvillanova
2022-03-15T12:30:25Z
3,843
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T13:09:19Z
https://api.github.com/repos/huggingface/datasets/issues/3843/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3843/timeline
Fix Google Drive URL to avoid Virus scan warning in streaming mode
https://api.github.com/repos/huggingface/datasets/issues/3843/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-03-15T12:30:23Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3843.diff", "html_url": "https://github.com/huggingface/datasets/pull/3843", "merged_at": "2022-03-15T12:30:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3843.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3843" }
PR_kwDODunzps40Cm0D
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3843). All of your documentation changes will be reflected on that endpoint.", "Cool ! Looks like it breaks `test_streaming_gg_drive_gzipped` for some reason..." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3843/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3843
https://github.com/huggingface/datasets/pull/3843
true
1,161,336,483
https://api.github.com/repos/huggingface/datasets/issues/3842/labels{/name}
From #3444 , Dataset.shuffle can have the same API than IterableDataset.shuffle (i.e. in streaming mode). Currently you can pass an optional seed to both if you want, BUT currently IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe 1000) instead. In this PR, I set the default `buffer_size` value to 1,000, and I reorder the `IterableDataset.shuffle` arguments to match `Dataset.shuffle`, i.e. making `seed` the first argument.
2022-03-07T19:03:43Z
3,842
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T12:10:46Z
https://api.github.com/repos/huggingface/datasets/issues/3842/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3842/timeline
Align IterableDataset.shuffle with Dataset.shuffle
https://api.github.com/repos/huggingface/datasets/issues/3842/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-07T19:03:42Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3842.diff", "html_url": "https://github.com/huggingface/datasets/pull/3842", "merged_at": "2022-03-07T19:03:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/3842.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3842" }
PR_kwDODunzps40CZvE
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3842). All of your documentation changes will be reflected on that endpoint.", "We should also add `generator` as a param to `shuffle` to fully align the APIs, no?", "I added the `generator` argument.\r\n\r\nI had to make a few other adjustments to make it work. In particular when you call `set_epoch()` on a streaming dataset, it updates the underlying random generator by using a new effective seed. The effective seed is generated using the previous generator and the epoch number." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3842/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3842
https://github.com/huggingface/datasets/pull/3842
true
1,161,203,842
https://api.github.com/repos/huggingface/datasets/issues/3841/labels{/name}
## Describe the bug Pyright complains about module not exported. ## Steps to reproduce the bug Use an editor/IDE with Pyright Language server with default configuration: ```python from datasets import load_dataset ``` ## Expected results No complain from Pyright ## Actual results Pyright complain below: ``` `load_dataset` is not exported from module "datasets" Import from "datasets.load" instead [reportPrivateImportUsage] ``` Importing from `datasets.load` does indeed solves the problem but I believe importing directly from top level `datasets` is the intended usage per the documentation. ## Environment info - `datasets` version: 1.18.3 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
2023-02-18T19:14:03Z
3,841
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-07T10:24:04Z
https://api.github.com/repos/huggingface/datasets/issues/3841/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3841/timeline
Pyright reportPrivateImportUsage when `from datasets import load_dataset`
https://api.github.com/repos/huggingface/datasets/issues/3841/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4", "events_url": "https://api.github.com/users/lkhphuc/events{/privacy}", "followers_url": "https://api.github.com/users/lkhphuc/followers", "following_url": "https://api.github.com/users/lkhphuc/following{/other_user}", "gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lkhphuc", "id": 12573521, "login": "lkhphuc", "node_id": "MDQ6VXNlcjEyNTczNTIx", "organizations_url": "https://api.github.com/users/lkhphuc/orgs", "received_events_url": "https://api.github.com/users/lkhphuc/received_events", "repos_url": "https://api.github.com/users/lkhphuc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions", "type": "User", "url": "https://api.github.com/users/lkhphuc" }
[]
null
completed
CONTRIBUTOR
2023-02-13T13:48:41Z
null
I_kwDODunzps5FNpCC
[ "Hi! \r\n\r\nThis issue stems from `datasets` having `py.typed` defined (see https://github.com/microsoft/pyright/discussions/3764#discussioncomment-3282142) - to avoid it, we would either have to remove `py.typed` (added to be compliant with PEP-561) or export the names with `__all__`/`from .submodule import name as name`.\r\n\r\nTransformers is fine as it no longer has `py.typed` (removed in https://github.com/huggingface/transformers/pull/18485)\r\n\r\nWDYT @lhoestq @albertvillanova @polinaeterna \r\n\r\n@sgugger's point makes sense - we should either be \"properly typed\" (have py.typed + mypy tests) or drop `py.typed` as Transformers did (I like this option better).\r\n\r\n(cc @Wauplin since `huggingface_hub` has the same issue.)", "I'm fine with dropping it, but autotrain people won't be happy @SBrandeis ", "> (cc @Wauplin since huggingface_hub has the same issue.)\r\n\r\nHmm maybe we have the same issue but I haven't been able to reproduce something similar to `\"load_dataset\" is not exported from module \"datasets\"` message (using VSCode+Pylance -that is powered by Pyright). `huggingface_hub` contains a `py.typed` file but the package itself is actually typed. We are running `mypy` in our CI tests since ~3 months and so far it seems to be ok. But happy to change if it causes some issues with linters.\r\n\r\nAlso the top-level [`__init__.py`](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py) is quite different in `hfh` than `datasets` (at first glance). We have a section at the bottom to import all high level methods/classes in a `if TYPE_CHECKING` block.", "@Wauplin I only get the error if I use Pyright's CLI tool or the Pyright extension (not sure why, but Pylance also doesn't report this issue on my machine)\r\n\r\n> Also the top-level [`__init__.py`](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py) is quite different in `hfh` than `datasets` (at first glance). We have a section at the bottom to import all high level methods/classes in a `if TYPE_CHECKING` block.\r\n\r\nI tried to fix the issue with `TYPE_CHECKING`, but it still fails if `py.typed` is present.", "@mariosasko thank for the tip. I have been able to reproduce the issue as well. I would be up for including a (huge) static `__all__` variable in the `__init__.py` (since the file is already generated automatically in `hfh`) but honestly I don't think it's worth the hassle. \r\n\r\nI'll delete the `py.typed` file in `huggingface_hub` to be consistent between HF libraries. I opened a PR here: https://github.com/huggingface/huggingface_hub/pull/1329", "I am getting this error in google colab today:\r\n\r\n![image](https://user-images.githubusercontent.com/3464445/219883967-c7193a23-0388-4ba3-b00c-a53883fb6512.png)\r\n\r\nThe code runs just fine too." ]
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3841/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3841
https://github.com/huggingface/datasets/issues/3841
false
1,161,183,773
https://api.github.com/repos/huggingface/datasets/issues/3840/labels{/name}
Temporarily fix CI for Windows by pinning `responses`. See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 Fix: #3839
2022-03-07T10:12:36Z
3,840
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T10:06:53Z
https://api.github.com/repos/huggingface/datasets/issues/3840/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3840/timeline
Pin responses to fix CI for Windows
https://api.github.com/repos/huggingface/datasets/issues/3840/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-07T10:07:24Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3840.diff", "html_url": "https://github.com/huggingface/datasets/pull/3840", "merged_at": "2022-03-07T10:07:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/3840.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3840" }
PR_kwDODunzps40B8eu
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3840). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3840/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3840
https://github.com/huggingface/datasets/pull/3840
true
1,161,183,482
https://api.github.com/repos/huggingface/datasets/issues/3839/labels{/name}
## Describe the bug See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 ``` ___________________ test_datasetdict_from_text_split[test] ____________________ [gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe split = 'test' text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt' tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7') @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_datasetdict_from_text_split(split, text_path, tmp_path): if split: path = {split: text_path} else: split = "train" path = {"train": text_path, "test": text_path} cache_dir = tmp_path / "cache" expected_features = {"text": "string"} > dataset = TextDatasetReader(path, cache_dir=cache_dir).read() tests\io\test_text.py:118: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read use_auth_token=use_auth_token, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare self._download_prepared_from_hf_gcs(dl_manager.download_config) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs reader.download_from_hf_gcs(download_config, relative_data_dir) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/")) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path download_desc=download_config.download_desc, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache headers=headers, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head max_retries=max_retries, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request return session.request(method=method, url=url, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request resp = self.send(prep, **send_kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send r = adapter.send(request, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send return self._on_request(adapter, request, *a, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request match, match_failed_reasons = self._find_match(request) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <responses.RequestsMock object at 0x000002048AD70588> request = <PreparedRequest [HEAD]> def _find_first_match(self, request): match_failed_reasons = [] > for i, match in enumerate(self._matches): E AttributeError: 'RequestsMock' object has no attribute '_matches' C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError ```
2022-05-20T14:13:43Z
3,839
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-07T10:06:42Z
https://api.github.com/repos/huggingface/datasets/issues/3839/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3839/timeline
CI is broken for Windows
https://api.github.com/repos/huggingface/datasets/issues/3839/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
MEMBER
2022-03-07T10:07:24Z
null
I_kwDODunzps5FNkD6
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3839/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3839
https://github.com/huggingface/datasets/issues/3839
false
1,161,137,406
https://api.github.com/repos/huggingface/datasets/issues/3838/labels{/name}
It might be a mix of Image and ClassLabel, and the color palette might be generated automatically. --- ### Example every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class with a color. So we might want to render the image as a colored image instead of a black and white one. <img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png"> See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in Tensorflow
2022-04-10T13:34:59Z
3,838
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
2022-03-07T09:38:15Z
https://api.github.com/repos/huggingface/datasets/issues/3838/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/3838/timeline
Add a data type for labeled images (image segmentation)
https://api.github.com/repos/huggingface/datasets/issues/3838/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
null
CONTRIBUTOR
null
null
I_kwDODunzps5FNYz-
[]
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3838/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3838
https://github.com/huggingface/datasets/issues/3838
false
1,161,109,031
https://api.github.com/repos/huggingface/datasets/issues/3837/labels{/name}
null
2022-03-07T11:07:35Z
3,837
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T09:13:29Z
https://api.github.com/repos/huggingface/datasets/issues/3837/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3837/timeline
Release: 1.18.4
https://api.github.com/repos/huggingface/datasets/issues/3837/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-07T11:07:02Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3837.diff", "html_url": "https://github.com/huggingface/datasets/pull/3837", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3837" }
PR_kwDODunzps40BwE1
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3837/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3837
https://github.com/huggingface/datasets/pull/3837
true
1,161,072,531
https://api.github.com/repos/huggingface/datasets/issues/3836/labels{/name}
<img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
2022-03-07T20:21:11Z
3,836
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-07T08:38:34Z
https://api.github.com/repos/huggingface/datasets/issues/3836/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3836/timeline
Logo float left
https://api.github.com/repos/huggingface/datasets/issues/3836/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
null
null
CONTRIBUTOR
2022-03-07T09:14:11Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3836.diff", "html_url": "https://github.com/huggingface/datasets/pull/3836", "merged_at": "2022-03-07T09:14:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3836.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3836" }
PR_kwDODunzps40Bobr
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3836). All of your documentation changes will be reflected on that endpoint.", "Weird, the logo doesn't seem to be floating on my side (using Chrome) at https://huggingface.co/docs/datasets/master/en/index", "https://huggingface.co/docs/datasets/index\r\n\r\nThe needed css change from moon-landing just got deployed" ]
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3836/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3836
https://github.com/huggingface/datasets/pull/3836
true
1,161,029,205
https://api.github.com/repos/huggingface/datasets/issues/3835/labels{/name}
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
2022-03-15T12:30:23Z
3,835
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-07T07:56:42Z
https://api.github.com/repos/huggingface/datasets/issues/3835/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3835/timeline
The link given on the gigaword does not work
https://api.github.com/repos/huggingface/datasets/issues/3835/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26357784?v=4", "events_url": "https://api.github.com/users/martin6336/events{/privacy}", "followers_url": "https://api.github.com/users/martin6336/followers", "following_url": "https://api.github.com/users/martin6336/following{/other_user}", "gists_url": "https://api.github.com/users/martin6336/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/martin6336", "id": 26357784, "login": "martin6336", "node_id": "MDQ6VXNlcjI2MzU3Nzg0", "organizations_url": "https://api.github.com/users/martin6336/orgs", "received_events_url": "https://api.github.com/users/martin6336/received_events", "repos_url": "https://api.github.com/users/martin6336/repos", "site_admin": false, "starred_url": "https://api.github.com/users/martin6336/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/martin6336/subscriptions", "type": "User", "url": "https://api.github.com/users/martin6336" }
[]
null
completed
NONE
2022-03-15T12:30:23Z
null
I_kwDODunzps5FM-ZV
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3835/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3835
https://github.com/huggingface/datasets/issues/3835
false
1,160,657,937
https://api.github.com/repos/huggingface/datasets/issues/3834/labels{/name}
Previous link gives 404 error. Updated with a new dataset scripts creation link.
2022-03-07T12:12:07Z
3,834
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-06T16:45:48Z
https://api.github.com/repos/huggingface/datasets/issues/3834/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3834/timeline
Fix dead dataset scripts creation link.
https://api.github.com/repos/huggingface/datasets/issues/3834/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
null
null
CONTRIBUTOR
2022-03-07T12:12:07Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3834.diff", "html_url": "https://github.com/huggingface/datasets/pull/3834", "merged_at": "2022-03-07T12:12:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3834" }
PR_kwDODunzps40ATVw
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3834/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3834
https://github.com/huggingface/datasets/pull/3834
true
1,160,543,713
https://api.github.com/repos/huggingface/datasets/issues/3833/labels{/name}
null
2022-03-07T12:35:33Z
3,833
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-06T07:49:49Z
https://api.github.com/repos/huggingface/datasets/issues/3833/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3833/timeline
Small typos in How-to-train tutorial.
https://api.github.com/repos/huggingface/datasets/issues/3833/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4", "events_url": "https://api.github.com/users/lkhphuc/events{/privacy}", "followers_url": "https://api.github.com/users/lkhphuc/followers", "following_url": "https://api.github.com/users/lkhphuc/following{/other_user}", "gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lkhphuc", "id": 12573521, "login": "lkhphuc", "node_id": "MDQ6VXNlcjEyNTczNTIx", "organizations_url": "https://api.github.com/users/lkhphuc/orgs", "received_events_url": "https://api.github.com/users/lkhphuc/received_events", "repos_url": "https://api.github.com/users/lkhphuc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions", "type": "User", "url": "https://api.github.com/users/lkhphuc" }
[]
null
null
CONTRIBUTOR
2022-03-07T12:13:17Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3833.diff", "html_url": "https://github.com/huggingface/datasets/pull/3833", "merged_at": "2022-03-07T12:13:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/3833.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3833" }
PR_kwDODunzps4z_99t
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3833/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3833
https://github.com/huggingface/datasets/pull/3833
true
1,160,503,446
https://api.github.com/repos/huggingface/datasets/issues/3832/labels{/name}
Let's make Hugging Face Datasets the central hub for GNN datasets :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field. What are some datasets worth integrating into the Hugging Face hub? Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Special thanks to @napoles-uach for his collaboration on identifying the first ones: - [ ] [SNAP-Stanford OGB Datasets](https://github.com/snap-stanford/ogb). - [ ] [SNAP-Stanford Pretrained GNNs Chemistry and Biology Datasets](https://github.com/snap-stanford/pretrain-gnns). - [ ] [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression) cc @osanseviero
2022-03-14T07:45:38Z
3,832
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "7AFCAA", "default": false, "description": "Datasets for Graph Neural Networks", "id": 3898693527, "name": "graph", "node_id": "LA_kwDODunzps7oYVeX", "url": "https://api.github.com/repos/huggingface/datasets/labels/graph" } ]
2022-03-06T03:02:58Z
https://api.github.com/repos/huggingface/datasets/issues/3832/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3832/timeline
Making Hugging Face the place to go for Graph NNs datasets
https://api.github.com/repos/huggingface/datasets/issues/3832/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[]
null
null
NONE
null
null
I_kwDODunzps5FK-CW
[ "It will be indeed really great to add support to GNN datasets. Big :+1: for this initiative.", "@napoles-uach identifies the [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression). \r\n\r\nAdded to the Tasks in the initial issue.", "Thanks Omar, that is a great collection!", "Great initiative! Let's keep this issue for these 3 datasets, but moving forward maybe let's create a new issue per dataset :rocket: great work @napoles-uach and @omarespejel!" ]
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3832/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3832
https://github.com/huggingface/datasets/issues/3832
false
1,160,501,000
https://api.github.com/repos/huggingface/datasets/issues/3831/labels{/name}
## Describe the bug when converting a dataset to tf_dataset by using to_tf_dataset with shuffle true, the remainder is not converted to one batch ## Steps to reproduce the bug this is the sample code below https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing ## Expected results regardless of shuffle is true or not, 67 rows dataset should be 5 batches when batch size is 16. ## Actual results 4 batches ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
2022-03-08T15:18:56Z
3,831
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-06T02:43:50Z
https://api.github.com/repos/huggingface/datasets/issues/3831/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3831/timeline
when using to_tf_dataset with shuffle is true, not all completed batches are made
https://api.github.com/repos/huggingface/datasets/issues/3831/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42107709?v=4", "events_url": "https://api.github.com/users/greenned/events{/privacy}", "followers_url": "https://api.github.com/users/greenned/followers", "following_url": "https://api.github.com/users/greenned/following{/other_user}", "gists_url": "https://api.github.com/users/greenned/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/greenned", "id": 42107709, "login": "greenned", "node_id": "MDQ6VXNlcjQyMTA3NzA5", "organizations_url": "https://api.github.com/users/greenned/orgs", "received_events_url": "https://api.github.com/users/greenned/received_events", "repos_url": "https://api.github.com/users/greenned/repos", "site_admin": false, "starred_url": "https://api.github.com/users/greenned/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/greenned/subscriptions", "type": "User", "url": "https://api.github.com/users/greenned" }
[]
null
completed
NONE
2022-03-08T15:18:56Z
null
I_kwDODunzps5FK9cI
[ "Maybe @Rocketknight1 can help here", "Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`.", "@Rocketknight1 Oh, thank you. I didn't get **drop_remainder** Have a nice day!", "No problem!\r\n" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3831/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3831
https://github.com/huggingface/datasets/issues/3831
false
1,160,181,404
https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name}
When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below: - windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories' - google colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' The code is to load dataset: windows os: ``` from datasets import load_dataset dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data") ``` google colab: ``` import datasets train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ```
2022-03-07T06:53:41Z
3,830
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
2022-03-05T01:43:12Z
https://api.github.com/repos/huggingface/datasets/issues/3830/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3830/timeline
Got error when load cnn_dailymail dataset
https://api.github.com/repos/huggingface/datasets/issues/3830/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/78331051?v=4", "events_url": "https://api.github.com/users/wgong0510/events{/privacy}", "followers_url": "https://api.github.com/users/wgong0510/followers", "following_url": "https://api.github.com/users/wgong0510/following{/other_user}", "gists_url": "https://api.github.com/users/wgong0510/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wgong0510", "id": 78331051, "login": "wgong0510", "node_id": "MDQ6VXNlcjc4MzMxMDUx", "organizations_url": "https://api.github.com/users/wgong0510/orgs", "received_events_url": "https://api.github.com/users/wgong0510/received_events", "repos_url": "https://api.github.com/users/wgong0510/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wgong0510/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wgong0510/subscriptions", "type": "User", "url": "https://api.github.com/users/wgong0510" }
[]
null
completed
NONE
2022-03-07T06:53:41Z
null
I_kwDODunzps5FJvac
[ "Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1 import datasets\r\n 2 \r\n----> 3 train_data = datasets.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\n\r\n5 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)\r\n 1705 ignore_verifications=ignore_verifications,\r\n 1706 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1707 use_auth_token=use_auth_token,\r\n 1708 )\r\n 1709 \r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 593 if not downloaded_from_gcs:\r\n 594 self._download_and_prepare(\r\n--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 596 )\r\n 597 # Sync info\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 659 split_dict = SplitDict(dataset_name=self.name)\r\n 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 662 \r\n 663 # Checksums verification\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _split_generators(self, dl_manager)\r\n 253 def _split_generators(self, dl_manager):\r\n 254 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 256 # Generate shared vocabulary\r\n 257 \r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _subset_filenames(dl_paths, split)\r\n 154 else:\r\n 155 logger.fatal(\"Unsupported split: %s\", split)\r\n--> 156 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 157 dm = _find_files(dl_paths, \"dm\", urls)\r\n 158 return cnn + dm\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _find_files(dl_paths, publisher, url_dict)\r\n 133 else:\r\n 134 logger.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 135 files = sorted(os.listdir(top_dir))\r\n 136 \r\n 137 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n```", "Hi @jon-tow, thanks for reporting. And hi @dynamicwebpaige, thanks for your investigation. \r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. 
See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today (indeed, we were planning to do it last Friday).\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nCC: @lhoestq " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3830
https://github.com/huggingface/datasets/issues/3830
false
1,160,154,352
https://api.github.com/repos/huggingface/datasets/issues/3829/labels{/name}
## Brief Overview Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with building deep learning experiments. ## Feature Request Could we create a performance guide for using `datasets`, similar to: * [Better performance with the `tf.data` API](https://github.com/huggingface/datasets/issues/3735) * [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis) This performance guide should detail practical options for improving performance with `datasets`, and enumerate any common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify any performance bottlenecks (example below). ![image](https://user-images.githubusercontent.com/3712347/156859152-a3cb9565-3ec6-4d39-8e77-56d0a75a4954.png) ## Related Issues * [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670) * [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499) * [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004) * [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830) * [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315) * [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911)
2022-03-10T16:24:27Z
3,829
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
2022-03-05T00:28:06Z
https://api.github.com/repos/huggingface/datasets/issues/3829/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3829/timeline
[📄 Docs] Create a `datasets` performance guide.
https://api.github.com/repos/huggingface/datasets/issues/3829/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/3712347?v=4", "events_url": "https://api.github.com/users/dynamicwebpaige/events{/privacy}", "followers_url": "https://api.github.com/users/dynamicwebpaige/followers", "following_url": "https://api.github.com/users/dynamicwebpaige/following{/other_user}", "gists_url": "https://api.github.com/users/dynamicwebpaige/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dynamicwebpaige", "id": 3712347, "login": "dynamicwebpaige", "node_id": "MDQ6VXNlcjM3MTIzNDc=", "organizations_url": "https://api.github.com/users/dynamicwebpaige/orgs", "received_events_url": "https://api.github.com/users/dynamicwebpaige/received_events", "repos_url": "https://api.github.com/users/dynamicwebpaige/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dynamicwebpaige/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dynamicwebpaige/subscriptions", "type": "User", "url": "https://api.github.com/users/dynamicwebpaige" }
[]
null
null
NONE
null
null
I_kwDODunzps5FJozw
[ "Hi ! Yes this is definitely something we'll explore, since optimizing processing pipelines can be challenging and because performance is key here: we want anyone to be able to play with large-scale datasets more easily.\r\n\r\nI think we'll start by documenting the performance of the dataset transforms we provide, and then we can have some tools to help debugging/optimizing them" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3829/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3829
https://github.com/huggingface/datasets/issues/3829
false
1,160,064,029
https://api.github.com/repos/huggingface/datasets/issues/3828/labels{/name}
## Describe the bug If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py: For "all" * the pile_set_name is never set for data * there's actually an id field inside of "meta" For subcorpora pubmed_central and hacker_news: * the meta is specified to be a string, but it's actually a dict with an id field inside. ## Steps to reproduce the bug ## Expected results Feature spec should match the data I'd think? ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
2022-03-08T09:30:49Z
3,828
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-04T21:25:32Z
https://api.github.com/repos/huggingface/datasets/issues/3828/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3828/timeline
The Pile's _FEATURE spec seems to be incorrect
https://api.github.com/repos/huggingface/datasets/issues/3828/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dlwh", "id": 9633, "login": "dlwh", "node_id": "MDQ6VXNlcjk2MzM=", "organizations_url": "https://api.github.com/users/dlwh/orgs", "received_events_url": "https://api.github.com/users/dlwh/received_events", "repos_url": "https://api.github.com/users/dlwh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "type": "User", "url": "https://api.github.com/users/dlwh" }
[]
null
completed
NONE
2022-03-08T09:30:48Z
null
I_kwDODunzps5FJSwd
[ "Hi @dlwh, thanks for reporting.\r\n\r\nPlease note, that the source data files for \"all\" config are different from the other configurations.\r\n\r\nThe \"all\" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/\r\nAll data examples contain a \"meta\" dict with a single \"pile_set_name\" key:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ds = load_dataset(\"the_pile\", \"all\", split=\"train\", streaming=True)\r\n item = next(iter(ds))\r\nDownloading builder script: 9.09kB [00:00, 4.42MB/s]\r\n\r\nIn [3]: item[\"meta\"]\r\nOut[3]: {'pile_set_name': 'Pile-CC'}\r\n```\r\n\r\nOn the other hand, all the other subset configs data files come from the Pile preliminary components directory: https://mystic.the-eye.eu/public/AI/pile_preliminary_components/\r\nFor theses components, the \"meta\" field may have different keys depending on the subset: \"id\", \"language\", \"pmid\",... Because of that, if we had kept the `dict` data format for the \"meta\" field, we would have an error when trying to concatenate different subsets, whose \"meta\" keys are not identical. In order to avoid that, the \"meta\" field is cast to `str` in all these cases, so that there is no incompatibility in their \"meta\" data type when concatenating.\r\n\r\nYou can check, for example, that for \"pubmed_central\" the \"meta\" field is cast to `str`:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ds = load_dataset(\"the_pile\", \"pubmed_central\", split=\"train\", streaming=True)\r\n item = next(iter(ds))\r\n\r\nIn [5]: item[\"meta\"]\r\nOut[5]: \"{'id': 'PMC6071596'}\"\r\n```\r\n\r\nFeel free to reopen this issue if you have further questions. " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3828/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3828
https://github.com/huggingface/datasets/issues/3828
false
1,159,878,436
https://api.github.com/repos/huggingface/datasets/issues/3827/labels{/name}
A leftover from #3803.
2022-03-07T12:37:52Z
3,827
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-04T17:23:26Z
https://api.github.com/repos/huggingface/datasets/issues/3827/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3827/timeline
Remove deprecated `remove_columns` param in `filter`
https://api.github.com/repos/huggingface/datasets/issues/3827/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-03-07T12:37:51Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3827.diff", "html_url": "https://github.com/huggingface/datasets/pull/3827", "merged_at": "2022-03-07T12:37:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/3827.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3827" }
PR_kwDODunzps4z95dj
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3827). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3827/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3827
https://github.com/huggingface/datasets/pull/3827
true
1,159,851,110
https://api.github.com/repos/huggingface/datasets/issues/3826/labels{/name}
_Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_ I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`: ```python def filter(self, function, batched=False, batch_size=1000, with_indices=false, input_columns=None): ``` TODO: - [x] tests - [x] docs related to https://github.com/huggingface/datasets/issues/3444 and https://github.com/huggingface/datasets/issues/3753
2022-03-09T17:23:13Z
3,826
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-04T16:57:23Z
https://api.github.com/repos/huggingface/datasets/issues/3826/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3826/timeline
Add IterableDataset.filter
https://api.github.com/repos/huggingface/datasets/issues/3826/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-09T17:23:11Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3826.diff", "html_url": "https://github.com/huggingface/datasets/pull/3826", "merged_at": "2022-03-09T17:23:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3826" }
PR_kwDODunzps4z90JU
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3826). All of your documentation changes will be reflected on that endpoint.", "Indeed ! If `batch_size` is `None` or `<=0` then the full dataset should be passed. It's been mentioned in the docs for a while but never actually implemented. We can fix that later" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3826/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3826
https://github.com/huggingface/datasets/pull/3826
true
1,159,802,345
https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name}
CC: @geohci
2022-03-04T17:24:37Z
3,825
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-04T16:05:27Z
https://api.github.com/repos/huggingface/datasets/issues/3825/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3825/timeline
Update version and date in Wikipedia dataset
https://api.github.com/repos/huggingface/datasets/issues/3825/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-04T17:24:36Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3825.diff", "html_url": "https://github.com/huggingface/datasets/pull/3825", "merged_at": "2022-03-04T17:24:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3825.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3825" }
PR_kwDODunzps4z9p4b
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3825
https://github.com/huggingface/datasets/pull/3825
true
1,159,574,186
https://api.github.com/repos/huggingface/datasets/issues/3824/labels{/name}
Fix #3818
2022-03-04T18:04:22Z
3,824
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-04T12:04:40Z
https://api.github.com/repos/huggingface/datasets/issues/3824/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3824/timeline
Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute`
https://api.github.com/repos/huggingface/datasets/issues/3824/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-03-04T18:04:21Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3824.diff", "html_url": "https://github.com/huggingface/datasets/pull/3824", "merged_at": "2022-03-04T18:04:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3824" }
PR_kwDODunzps4z85SO
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3824). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3824/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3824
https://github.com/huggingface/datasets/pull/3824
true
1,159,497,844
https://api.github.com/repos/huggingface/datasets/issues/3823/labels{/name}
## Describe the bug The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code. The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there fairly recently. The Zarr stores are composed of lots of small files, which I suspect is the problem, as we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) using xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine. In general, we were hoping to use HF datasets to release some more public geospatial datasets as benchmarks, which are commonly stored as Zarr stores since they compress well and handle multi-dimensional data and coordinates fairly easily compared to other formats, but with this error, I'm assuming we should try a different format? For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("openclimatefix/mrms") ``` ## Expected results The dataset should download or open ## Actual results A 500 internal server error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35 - Python version: 3.9.10 - PyArrow version: 7.0.0
2022-03-08T09:47:39Z
3,823
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-04T10:37:14Z
https://api.github.com/repos/huggingface/datasets/issues/3823/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3823/timeline
500 internal server error when trying to open a dataset composed of Zarr stores
https://api.github.com/repos/huggingface/datasets/issues/3823/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4", "events_url": "https://api.github.com/users/jacobbieker/events{/privacy}", "followers_url": "https://api.github.com/users/jacobbieker/followers", "following_url": "https://api.github.com/users/jacobbieker/following{/other_user}", "gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jacobbieker", "id": 7170359, "login": "jacobbieker", "node_id": "MDQ6VXNlcjcxNzAzNTk=", "organizations_url": "https://api.github.com/users/jacobbieker/orgs", "received_events_url": "https://api.github.com/users/jacobbieker/received_events", "repos_url": "https://api.github.com/users/jacobbieker/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions", "type": "User", "url": "https://api.github.com/users/jacobbieker" }
[]
null
completed
NONE
2022-03-08T09:47:39Z
null
I_kwDODunzps5FHIh0
[ "Hi @jacobbieker, thanks for reporting!\r\n\r\nI have transferred this issue to our Hub team and they are investigating it. I keep you informed. ", "Hi @jacobbieker, we are investigating this issue on our side and we'll see if we can fix it, but please note that your repo is considered problematic for git. Here are the results of running https://github.com/github/git-sizer on it:\r\n\r\n```\r\nProcessing blobs: 147448 \r\nProcessing trees: 27 \r\nProcessing commits: 4 \r\nMatching commits to trees: 4 \r\nProcessing annotated tags: 0 \r\nProcessing references: 3 \r\n| Name | Value | Level of concern |\r\n| ---------------------------- | --------- | ------------------------------ |\r\n| Biggest objects | | |\r\n| * Trees | | |\r\n| * Maximum entries [1] | 167 k | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |\r\n| | | |\r\n| Biggest checkouts | | |\r\n| * Number of files [2] | 189 k | *** |\r\n\r\n[1] aa057d2667c34c70c6146efc631f5c9917ff326e (refs/heads/main:2016.zarr/unknown)\r\n[2] 6897b7bf6440fdd16b2c39d08085a669e7eaa59d (refs/heads/main^{tree})\r\n```\r\n\r\nYou can check https://github.com/github/git-sizer for more information on how to avoid such pathological structures.", "Hi, thanks for getting back to me so quick! And yeah, I figured that was probably the problem. I was going to try to delete the repo, but couldn't through the website, so if that's the easiest way to solve it, I can regenerate the dataset in a different format with less tiny files, and you guys can delete the repo as it is. Zarr just saves everything as lots of small files to make chunks easy to load, which is why I was preferring that format, but maybne that just doesn't work well for HF datasets.", "Hi @jacobbieker,\r\n\r\nFor future use cases, our Hub team is still pondering whether to limit the maximum number of files per repo to avoid technical issues...\r\n\r\nOn the meantime, they have made a fix and your dataset is working: https://huggingface.co/datasets/openclimatefix/mrms" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3823/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3823
https://github.com/huggingface/datasets/issues/3823
false
1,159,395,728
https://api.github.com/repos/huggingface/datasets/issues/3822/labels{/name}
## Adding a Dataset - **Name:** Biwi Kinect Head Pose Database - **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and RGB images are provided, together with the ground truth in the form of the 3D location of the head and its rotation angles. - **Data:** [*link to the Github repository or current dataset location*](https://icu.ee.ethz.ch/research/datsets.html) - **Motivation:** Useful pose estimation dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2022-06-01T13:00:47Z
3,822
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
2022-03-04T08:48:39Z
https://api.github.com/repos/huggingface/datasets/issues/3822/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
https://api.github.com/repos/huggingface/datasets/issues/3822/timeline
Add Biwi Kinect Head Pose Database
https://api.github.com/repos/huggingface/datasets/issues/3822/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" } ]
null
completed
MEMBER
2022-06-01T13:00:47Z
null
I_kwDODunzps5FGvmQ
[ "Official dataset location : https://icu.ee.ethz.ch/research/datsets.html\r\nIn the \"Biwi Kinect Head Pose Database\" section, I do not find any information regarding \"Downloading the dataset.\" . Do we mail the authors regarding this ?\r\n\r\nI found the dataset on Kaggle : [Link](https://www.kaggle.com/kmader/biwi-kinect-head-pose-database) , but since 🤗 does not host any of the datasets, this would require the user to provide their Kaggle username and API key to download. \r\n\r\nAny inputs on how we could proceed ? Thank you.\r\n[ Need your inputs here, @lhoestq or @mariosasko ]", "Hi @dnaveenr! Thanks for tackling this issue. This link should work: https://data.vision.ee.ethz.ch/cvl/gfanelli/kinect_head_pose_db.tgz", "#self-assign", "Added in https://github.com/huggingface/datasets/pull/3903, thanks @dnaveenr !" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3822/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3822
https://github.com/huggingface/datasets/issues/3822
false
1,159,371,927
https://api.github.com/repos/huggingface/datasets/issues/3821/labels{/name}
This PR combines all updates to the Wikipedia dataset. Once approved, it will be used to generate the pre-processed Wikipedia datasets. Finally, this PR can be merged into master: - NOT using squash - BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved TODO: - [x] #3435 - [x] #3789 - [x] #3825 - [x] Run to get the pre-processed data for big languages (backward compatibility) - [x] #3958 CC: @geohci
2022-03-21T12:35:23Z
3,821
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-04T08:19:21Z
https://api.github.com/repos/huggingface/datasets/issues/3821/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3821/timeline
Update Wikipedia dataset
https://api.github.com/repos/huggingface/datasets/issues/3821/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-21T12:31:00Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3821.diff", "html_url": "https://github.com/huggingface/datasets/pull/3821", "merged_at": "2022-03-21T12:31:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/3821.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3821" }
PR_kwDODunzps4z8O5J
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm starting to generate the pre-processed data for some of the languages (for backward compatibility).\r\n\r\nOnce this merged, we will create the pre-processed data on the Hub under the Wikimedia namespace.", "All steps have been properly done.\r\n\r\nI'm merging all these commits into master." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3821/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3821
https://github.com/huggingface/datasets/pull/3821
true
1,159,106,603
https://api.github.com/repos/huggingface/datasets/issues/3820/labels{/name}
## Describe the bug Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets try: datasets.load_dataset("pubmed_qa", "pqa_labeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_unlabeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_artificial") except Exception as e: print(e) ``` ## Expected results Successful download. ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: macOS - Python version: 3.8.1 - PyArrow version: 3.0.0
2022-03-04T09:42:32Z
3,820
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
2022-03-04T00:28:08Z
https://api.github.com/repos/huggingface/datasets/issues/3820/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3820/timeline
`pubmed_qa` checksum mismatch
https://api.github.com/repos/huggingface/datasets/issues/3820/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4", "events_url": "https://api.github.com/users/jon-tow/events{/privacy}", "followers_url": "https://api.github.com/users/jon-tow/followers", "following_url": "https://api.github.com/users/jon-tow/following{/other_user}", "gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jon-tow", "id": 41410219, "login": "jon-tow", "node_id": "MDQ6VXNlcjQxNDEwMjE5", "organizations_url": "https://api.github.com/users/jon-tow/orgs", "received_events_url": "https://api.github.com/users/jon-tow/received_events", "repos_url": "https://api.github.com/users/jon-tow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions", "type": "User", "url": "https://api.github.com/users/jon-tow" }
[]
null
completed
CONTRIBUTOR
2022-03-04T09:42:32Z
null
I_kwDODunzps5FFpAr
[ "Hi @jon-tow, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today.\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3820/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3820
https://github.com/huggingface/datasets/issues/3820
false
1,158,848,288
https://api.github.com/repos/huggingface/datasets/issues/3819/labels{/name}
cc: @lhoestq
2022-03-04T13:07:41Z
3,819
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T20:08:44Z
https://api.github.com/repos/huggingface/datasets/issues/3819/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3819/timeline
Fix typo in doc build yml
https://api.github.com/repos/huggingface/datasets/issues/3819/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
null
null
CONTRIBUTOR
2022-03-04T13:07:41Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3819.diff", "html_url": "https://github.com/huggingface/datasets/pull/3819", "merged_at": "2022-03-04T13:07:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/3819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3819" }
PR_kwDODunzps4z6fvn
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3819). All of your documentation changes will be reflected on that endpoint." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3819/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3819
https://github.com/huggingface/datasets/pull/3819
true
1,158,788,545
https://api.github.com/repos/huggingface/datasets/issues/3818/labels{/name}
**Is your feature request related to a problem? Please describe.** The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric not only relies on the predictions and references, but also on the input. For example, when the `add_batch` method is used, then the `compute()` method fails: ``` metric = load_metric("sari") metric.add_batch( predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) metric.compute() > TypeError: _compute() missing 1 required positional argument: 'sources' ``` Therefore, the `compute()` method can only be used standalone: ``` metric = load_metric("sari") result = metric.compute( sources=["About 95 species are currently accepted ."], predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) > {'sari': 26.953601953601954} ``` **Describe the solution you'd like** Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class. ``` add_batch(*, sources=None, predictions=None, references=None, **kwargs) add(*, sources=None, predictions=None, references=None, **kwargs) compute() ``` **Describe alternatives you've considered** I've tried to override `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple function that computes the scores of a list of sentences, but then we lose the functionality from the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch method](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch). **Additional context** These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
2022-03-04T18:04:21Z
3,818
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
2022-03-03T18:57:54Z
https://api.github.com/repos/huggingface/datasets/issues/3818/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3818/timeline
Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
https://api.github.com/repos/huggingface/datasets/issues/3818/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/6901031?v=4", "events_url": "https://api.github.com/users/lmvasque/events{/privacy}", "followers_url": "https://api.github.com/users/lmvasque/followers", "following_url": "https://api.github.com/users/lmvasque/following{/other_user}", "gists_url": "https://api.github.com/users/lmvasque/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lmvasque", "id": 6901031, "login": "lmvasque", "node_id": "MDQ6VXNlcjY5MDEwMzE=", "organizations_url": "https://api.github.com/users/lmvasque/orgs", "received_events_url": "https://api.github.com/users/lmvasque/received_events", "repos_url": "https://api.github.com/users/lmvasque/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lmvasque/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lmvasque/subscriptions", "type": "User", "url": "https://api.github.com/users/lmvasque" }
[]
null
completed
NONE
2022-03-04T18:04:21Z
null
I_kwDODunzps5FEbXB
[ "Hi, thanks for reporting! We can add a `sources: datasets.Value(\"string\")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR?", "Hi Mario,\r\n\r\nThanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:\r\n```\r\n features=datasets.Features(\r\n {\r\n \"sources\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"predictions\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"references\": datasets.Sequence(datasets.Value(\"string\", id=\"sequence\"), id=\"references\"),\r\n }\r\n ),\r\n```\r\n\r\nBut that only avoids a failure in `encode_batch` in the `add_batch` method:\r\n```\r\n batch = {\"predictions\": predictions, \"references\": references}\r\n batch = self.info.features.encode_batch(batch)\r\n```\r\n\r\nThe real problem is that `add_batch()`, `add()` and `compute()` does not receive a `sources` param:\r\n```\r\ndef add_batch(self, *, predictions=None, references=None):\r\ndef add(self, *, prediction=None, reference=None):\r\ndef compute(self, *, predictions=None, references=None, **kwargs)\r\n```\r\n\r\nAnd then, it fails:\r\n`TypeError: add_batch() got an unexpected keyword argument sources`\r\n\r\nI need this for adding any metric based on SARI or alike, not only for sari.py :)\r\n\r\nLet me know if I understood correctly the proposed solution.\r\n", "The `Metric` class has been modified recently to support this use-case, but the `add_batch` + `compute` pattern still doesn't work correctly. I'll open a PR." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3818/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3818
https://github.com/huggingface/datasets/issues/3818
false
1,158,592,335
https://api.github.com/repos/huggingface/datasets/issues/3817/labels{/name}
In #3736 we introduced a method to generate examples when streaming that is different from the one used when not streaming. In this PR I propose a new, simpler implementation: it has only one function, based on `iter_archive`. You still have access to local audio files when loading the dataset in non-streaming mode. cc @patrickvonplaten @polinaeterna @anton-l @albertvillanova since this will become the template for many audio datasets to come. This change can also trivially be applied to the other audio datasets that already exist. Using this line, you can get access to local files in non-streaming mode: ```python local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None ```
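For illustration, here is a minimal sketch of the single-code-path pattern described above. It is not the actual Common Voice script: the metadata handling and field names are assumptions, and only the fact that `dl_manager.iter_archive` yields `(path, file)` pairs is taken from the description.

```python
import os


def generate_examples(local_extracted_archive, archive_iterator, transcripts):
    """Hypothetical generator that works in both streaming and non-streaming mode.

    - archive_iterator yields (path_inside_archive, file_object) pairs,
      like `dl_manager.iter_archive` does.
    - transcripts maps a path inside the archive to its text (assumed layout).
    - local_extracted_archive is None when streaming, otherwise the folder
      returned by `dl_manager.extract(archive_path)`.
    """
    for path, f in archive_iterator:
        if path not in transcripts:
            continue
        audio = {"path": path, "bytes": f.read()}
        if local_extracted_archive is not None:
            # In non-streaming mode we can also expose the extracted local file path.
            audio["path"] = os.path.join(local_extracted_archive, path)
        yield path, {"audio": audio, "sentence": transcripts[path]}
```

In both modes the same function runs; only the value of `local_extracted_archive` changes, which is the point of the one-liner quoted above.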
2022-03-04T14:51:48Z
3,817
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T16:01:21Z
https://api.github.com/repos/huggingface/datasets/issues/3817/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3817/timeline
Simplify Common Voice code
https://api.github.com/repos/huggingface/datasets/issues/3817/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-04T12:39:23Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3817.diff", "html_url": "https://github.com/huggingface/datasets/pull/3817", "merged_at": "2022-03-04T12:39:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3817.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3817" }
PR_kwDODunzps4z5pQ7
[ "I think the script looks pretty clean and readable now! cool!\r\n" ]
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3817/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3817
https://github.com/huggingface/datasets/pull/3817
true
1,158,589,913
https://api.github.com/repos/huggingface/datasets/issues/3816/labels{/name}
null
2022-10-04T09:35:53Z
3,816
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T15:59:14Z
https://api.github.com/repos/huggingface/datasets/issues/3816/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3816/timeline
Doc new UI test workflows2
https://api.github.com/repos/huggingface/datasets/issues/3816/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
null
null
CONTRIBUTOR
2022-03-03T16:42:15Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3816.diff", "html_url": "https://github.com/huggingface/datasets/pull/3816", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3816.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3816" }
PR_kwDODunzps4z5owP
[ "<img src=\"https://www.bikevillastravel.com/cms/static/images/loading.gif\" alt=\"Girl in a jacket\" width=\"50\" >" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3816/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3816
https://github.com/huggingface/datasets/pull/3816
true
1,158,589,512
https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name}
The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you iterate over it once. This means you can't pass the same archive iterator to several splits. To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator. The `StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired by the one in `streaming_download_manager.py`
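As a general illustration of the iterator-vs-iterable distinction this PR relies on (a sketch, not the actual `datasets` code): a plain generator is exhausted after one pass, while an object that re-opens the archive in `__iter__` can be consumed once per split.

```python
import tarfile


class ArchiveIterable:
    """Sketch of a re-iterable archive: every iter() call re-opens the tar,
    so the same object can be handed to several split generators."""

    def __init__(self, archive_path):
        self.archive_path = archive_path

    def __iter__(self):
        with tarfile.open(self.archive_path) as tar:
            for member in tar:
                if member.isfile():
                    # Yield (path inside archive, readable file object).
                    yield member.name, tar.extractfile(member)


# A bare generator object, by contrast, yields nothing on the second loop:
# the first `for` consumes it and the second `for` sees an empty iterator.
```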
2022-03-03T18:06:37Z
3,815
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T15:58:52Z
https://api.github.com/repos/huggingface/datasets/issues/3815/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3815/timeline
Fix iter_archive getting reset
https://api.github.com/repos/huggingface/datasets/issues/3815/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-03T18:06:13Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3815.diff", "html_url": "https://github.com/huggingface/datasets/pull/3815", "merged_at": "2022-03-03T18:06:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3815.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3815" }
PR_kwDODunzps4z5oq-
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3815
https://github.com/huggingface/datasets/pull/3815
true
1,158,518,995
https://api.github.com/repos/huggingface/datasets/issues/3814/labels{/name}
This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimal PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
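A toy example of what the `mask` argument does (the values are made up; only the public PyArrow API is used): `True` entries in the mask turn the corresponding struct into a proper null instead of it being silently ignored.

```python
import pyarrow as pa

names = pa.array(["a", None])
ages = pa.array([1, None])
mask = pa.array([False, True])  # True marks the whole struct as null

arr = pa.StructArray.from_arrays([names, ages], names=["name", "age"], mask=mask)
print(arr.to_pylist())  # [{'name': 'a', 'age': 1}, None]
```

The `mask` parameter of `pa.StructArray.from_arrays` is the reason the minimal PyArrow version is bumped to 5.0.0, as stated above.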
2022-03-03T16:37:44Z
3,814
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T15:03:35Z
https://api.github.com/repos/huggingface/datasets/issues/3814/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3814/timeline
Handle Nones in PyArrow struct
https://api.github.com/repos/huggingface/datasets/issues/3814/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-03-03T16:37:43Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3814.diff", "html_url": "https://github.com/huggingface/datasets/pull/3814", "merged_at": "2022-03-03T16:37:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3814" }
PR_kwDODunzps4z5Zk4
[ "Looks like I added my comments while you were editing - sorry about that" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3814/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3814
https://github.com/huggingface/datasets/pull/3814
true
1,158,474,859
https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name}
## Adding a Dataset - **Name:** MetaShift - **Description:** Collection of 12,868 sets of natural images across 410 classes - **Paper:** https://arxiv.org/abs/2202.06523v1 - **Data:** https://github.com/weixin-liang/metashift Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2022-04-10T13:39:59Z
3,813
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
2022-03-03T14:26:45Z
https://api.github.com/repos/huggingface/datasets/issues/3813/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
https://api.github.com/repos/huggingface/datasets/issues/3813/timeline
Add MetaShift dataset
https://api.github.com/repos/huggingface/datasets/issues/3813/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" } ]
null
completed
MEMBER
2022-04-10T13:39:59Z
null
I_kwDODunzps5FDOxr
[ "I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.", "#self-assign", "I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_metashift_dataset/datasets/metashift/metashift.py)\r\n1. The dataset does not have a typical - train/test/val split. What do we do for the _split_generators() function ? How do we go about this ?\r\n2. This dataset builds on the Visual Genome dataset, using a metadata file. The dataset is generated using generate_full_MetaShift.py script. By default, the authors choose to generate the dataset only for a SELECTED_CLASSES. The following script is used : \r\nCode : https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/generate_full_MetaShift.py \r\nInfo : https://metashift.readthedocs.io/en/latest/sub_pages/download_MetaShift.html#generate-the-full-metashift-dataset\r\nCan I just copy over the required functions into the metashift.py to generate the dataset ?\r\n3. How do we complete the _generate_examples for this dataset ?\r\n\r\nThe user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nInputs, suggestions would be helpful. Thank you.", "I think @mariosasko and @lhoestq should be able to help here 😄 ", "Hi ! Thanks for adding this dataset :) Let me answer your questions:\r\n\r\n1. in this case you can put everything in the \"train\" split\r\n2. Yes you can copy the script (provided you also include the MIT license of the code in the file header for example). Though we ideally try to not create new directories nor files when generating dataset, so if possible this script should be adapted to not create the file structure they mentioned, but instead yield the images one by one in `_generate_examples`. Let me know if you think this is feasible\r\n3. see point 2 haha\r\n\r\n> The user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nYup ! We can also define a `selected_classes` parameter such that users can do\r\n```python\r\nload_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...])\r\n```", "Great. This is helpful. Thanks @lhoestq .\r\nRegarding Point 2, I'll try using yield instead of creating the directories and see if its feasible. selected_classes config sounds good.", "Closed via #3900 " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3813
https://github.com/huggingface/datasets/issues/3813
false
1,158,369,995
https://api.github.com/repos/huggingface/datasets/issues/3812/labels{/name}
# do not merge ## Hypothesis Packing data into a single zip archive could allow us not to care about splitting data into several tar archives for efficient streaming, which is annoying (since data creators usually host the data in a single tar) ## Data I host it [here](https://huggingface.co/datasets/polinaeterna/benchmark_dataset/) ## I checked three configurations: 1. All data in one zip archive, streaming only those files that exist in the split metadata file (we can access them directly with no need to iterate over the full archive), see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR196) 2. All data in three splits, the standard way to make streaming efficient, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR174) 3. All data in a single tar, iterate over the full archive and take only files existing in the split metadata file, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR150) ## Results 1. one zip ![image](https://user-images.githubusercontent.com/16348744/156567611-e3652087-7147-4cf0-9047-9cbc00ec71f5.png) 2. three tars ![image](https://user-images.githubusercontent.com/16348744/156567688-2a462107-f83e-4722-8ea3-71a13b56c998.png) 3. one tar ![image](https://user-images.githubusercontent.com/16348744/156567772-1bceb5f7-e7d9-4fa3-b31b-17fec5f9a5a7.png) Didn't check on the full data as it's time-consuming, but anyway it's pretty obvious that the one-zip way is not a good idea. Here it's even worse than full iteration over the tar containing all three splits (but that would depend on the case).
2022-03-03T14:55:34Z
3,812
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T12:48:41Z
https://api.github.com/repos/huggingface/datasets/issues/3812/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3812/timeline
benchmark streaming speed with tar vs zip archives
https://api.github.com/repos/huggingface/datasets/issues/3812/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
null
null
CONTRIBUTOR
2022-03-03T14:55:33Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3812.diff", "html_url": "https://github.com/huggingface/datasets/pull/3812", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3812.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3812" }
PR_kwDODunzps4z46C4
[ "I'm closing the PR since we're not going to merge it" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3812/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3812
https://github.com/huggingface/datasets/pull/3812
true
1,158,234,407
https://api.github.com/repos/huggingface/datasets/issues/3811/labels{/name}
Reflect changes from https://github.com/huggingface/transformers/pull/15891
2022-10-04T09:35:54Z
3,811
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T10:29:01Z
https://api.github.com/repos/huggingface/datasets/issues/3811/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3811/timeline
Update dev doc gh workflows
https://api.github.com/repos/huggingface/datasets/issues/3811/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
null
null
CONTRIBUTOR
2022-03-03T10:45:54Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3811.diff", "html_url": "https://github.com/huggingface/datasets/pull/3811", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3811.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3811" }
PR_kwDODunzps4z4dHS
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3811/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3811
https://github.com/huggingface/datasets/pull/3811
true
1,158,202,093
https://api.github.com/repos/huggingface/datasets/issues/3810/labels{/name}
Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases We updated our loading script, but we did not bump its version number: - #3254 This PR updates our loading script version from `1.0.0` to `1.1.0`.
2022-03-03T10:44:30Z
3,810
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-03T09:58:25Z
https://api.github.com/repos/huggingface/datasets/issues/3810/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3810/timeline
Update version of xcopa dataset
https://api.github.com/repos/huggingface/datasets/issues/3810/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-03T10:44:29Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3810.diff", "html_url": "https://github.com/huggingface/datasets/pull/3810", "merged_at": "2022-03-03T10:44:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3810" }
PR_kwDODunzps4z4WUW
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3810/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3810
https://github.com/huggingface/datasets/pull/3810
true
1,158,143,480
https://api.github.com/repos/huggingface/datasets/issues/3809/labels{/name}
## Describe the bug Datasets hosted on Google Drive do not seem to work right now. Loading them fails with a checksum error. ## Steps to reproduce the bug ```python from datasets import load_dataset for dataset in ["head_qa", "yelp_review_full"]: try: load_dataset(dataset) except Exception as exception: print("Error", dataset, exception) ``` Here is a [colab](https://colab.research.google.com/drive/1wOtHBmL8I65NmUYakzPV5zhVCtHhi7uQ#scrollTo=cDzdCLlk-Bo4). ## Expected results The datasets should be loaded. ## Actual results ``` Downloading and preparing dataset head_qa/es (download: 75.69 MiB, generated: 2.86 MiB, post-processed: Unknown size, total: 78.55 MiB) to /root/.cache/huggingface/datasets/head_qa/es/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9... Error head_qa Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t'] Downloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /root/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43... Error yelp_review_full Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0'] ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
2022-03-03T09:24:58Z
3,809
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
2022-03-03T09:01:10Z
https://api.github.com/repos/huggingface/datasets/issues/3809/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3809/timeline
Checksums didn't match for datasets on Google Drive
https://api.github.com/repos/huggingface/datasets/issues/3809/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/11507045?v=4", "events_url": "https://api.github.com/users/muelletm/events{/privacy}", "followers_url": "https://api.github.com/users/muelletm/followers", "following_url": "https://api.github.com/users/muelletm/following{/other_user}", "gists_url": "https://api.github.com/users/muelletm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/muelletm", "id": 11507045, "login": "muelletm", "node_id": "MDQ6VXNlcjExNTA3MDQ1", "organizations_url": "https://api.github.com/users/muelletm/orgs", "received_events_url": "https://api.github.com/users/muelletm/received_events", "repos_url": "https://api.github.com/users/muelletm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/muelletm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muelletm/subscriptions", "type": "User", "url": "https://api.github.com/users/muelletm" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2022-03-03T09:24:05Z
null
I_kwDODunzps5FB934
[ "Hi @muelletm, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nUntil our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3809/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3809
https://github.com/huggingface/datasets/issues/3809
false
1,157,650,043
https://api.github.com/repos/huggingface/datasets/issues/3808/labels{/name}
## Describe the bug If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time. ## Steps to reproduce the bug ```python def preprocess_function_factory(augmentation=None): def preprocess_function(examples): # Tokenize the texts if augmentation: conversions1 = [ augmentation(example) for example in examples[sentence1_key] ] if sentence2_key is None: args = (conversions1,) else: conversions2 = [ augmentation(example) for example in examples[sentence2_key] ] args = (conversions1, conversions2) else: args = ( (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) ) result = tokenizer( *args, padding=padding, max_length=max_seq_length, truncation=True ) # Map labels to IDs (not necessary for GLUE tasks) if label_to_id is not None and "label" in examples: result["label"] = [ (label_to_id[l] if l != -1 else -1) for l in examples["label"] ] return result return preprocess_function capitalize = lambda x: x.capitalize() preprocess_function = preprocess_function_factory(augmentation=capitalize) print(hash(preprocess_function)) # This will change on each run raw_datasets = raw_datasets.map( preprocess_function, batched=True, load_from_cache_file=True, desc="Running transformation and tokenizer on dataset", ) ``` ## Expected results Running the code twice will cause the cache to be re-used. ## Actual results Running the code twice causes the whole dataset to be re-processed
2022-03-10T23:01:47Z
3,808
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-02T20:18:43Z
https://api.github.com/repos/huggingface/datasets/issues/3808/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3808/timeline
Pre-Processing Cache Fails when using a Factory pattern
https://api.github.com/repos/huggingface/datasets/issues/3808/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/9847335?v=4", "events_url": "https://api.github.com/users/Helw150/events{/privacy}", "followers_url": "https://api.github.com/users/Helw150/followers", "following_url": "https://api.github.com/users/Helw150/following{/other_user}", "gists_url": "https://api.github.com/users/Helw150/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Helw150", "id": 9847335, "login": "Helw150", "node_id": "MDQ6VXNlcjk4NDczMzU=", "organizations_url": "https://api.github.com/users/Helw150/orgs", "received_events_url": "https://api.github.com/users/Helw150/received_events", "repos_url": "https://api.github.com/users/Helw150/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Helw150/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Helw150/subscriptions", "type": "User", "url": "https://api.github.com/users/Helw150" }
[]
null
completed
NONE
2022-03-10T23:01:47Z
null
I_kwDODunzps5FAFZ7
[ "Ok - this is still an issue but I believe the root cause is different than I originally thought. I'm now able to get caching to work consistently with the above example as long as I fix the python hash seed `export PYTHONHASHSEED=1234`", "Hi! \r\n\r\nYes, our hasher should work with decorators. For instance, this dummy example:\r\n```python\r\ndef f(arg):\r\n def f1(ex):\r\n return {\"a\": ex[\"col1\"] + arg}\r\n return f1\r\n```\r\ngives the same hash across different Python sessions (`datasets.fingerprint.Hasher.hash(f(\"string1\")` returns `\"408c9059f89dbd6c\"` on my machine).\r\n\r\nCould you please make the example self-contained? This way, we can reproduce the bug. Additionally, you can try to find the problematic object yourself by testing their hash with `datasets.fingerprint.Hasher.hash(obj)`\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/3638.", "#3638 was indeed my issue. Thanks!" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3808/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3808
https://github.com/huggingface/datasets/issues/3808
false
1,157,531,812
https://api.github.com/repos/huggingface/datasets/issues/3807/labels{/name}
## Describe the bug Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("xcopa", "it") ``` ## Expected results The dataset should be loaded correctly. ## Actual results Fails with: ```python in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/cambridgeltl/xcopa/archive/master.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3, and 1.18.4.dev0 - Platform: - Python version: 3.8 - PyArrow version:
2022-05-20T06:00:42Z
3,807
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-03-02T18:10:19Z
https://api.github.com/repos/huggingface/datasets/issues/3807/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3807/timeline
NonMatchingChecksumError in xcopa dataset
https://api.github.com/repos/huggingface/datasets/issues/3807/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/93286455?v=4", "events_url": "https://api.github.com/users/afcruzs-ms/events{/privacy}", "followers_url": "https://api.github.com/users/afcruzs-ms/followers", "following_url": "https://api.github.com/users/afcruzs-ms/following{/other_user}", "gists_url": "https://api.github.com/users/afcruzs-ms/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afcruzs-ms", "id": 93286455, "login": "afcruzs-ms", "node_id": "U_kgDOBY9wNw", "organizations_url": "https://api.github.com/users/afcruzs-ms/orgs", "received_events_url": "https://api.github.com/users/afcruzs-ms/received_events", "repos_url": "https://api.github.com/users/afcruzs-ms/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afcruzs-ms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afcruzs-ms/subscriptions", "type": "User", "url": "https://api.github.com/users/afcruzs-ms" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2022-03-03T17:40:31Z
null
I_kwDODunzps5E_oik
[ "@albertvillanova here's a separate issue for a bug similar to #3792", "Hi @afcruzs-ms, thanks for opening this separate issue for your problem.\r\n\r\nThe root problem in the other issue (#3792) was a change in the service of Google Drive.\r\n\r\nBut in your case, the `xcopa` dataset is not hosted on Google Drive. Therefore, the root cause should be a different one.\r\n\r\nLet me look at it... ", "@afcruzs-ms, I'm not able to reproduce the issue you reported:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"xcopa\", \"it\")\r\nDownloading builder script: 5.21kB [00:00, 2.75MB/s] \r\nDownloading metadata: 28.6kB [00:00, 14.5MB/s] \r\nDownloading and preparing dataset xcopa/it (download: 627.09 KiB, generated: 76.43 KiB, post-processed: Unknown size, total: 703.52 KiB) to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6...\r\nDownloading data: 642kB [00:00, 5.42MB/s]\r\nDataset xcopa downloaded and prepared to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 733.27it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n test: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 500\r\n })\r\n validation: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 100\r\n })\r\n})\r\n```\r\n\r\nMaybe you have some issue with your cached data... Could you please try to force the redownload of the data?\r\n```python\r\ndataset = load_dataset(\"xcopa\", \"it\", download_mode=\"force_redownload\")\r\n```", "It works indeed, thanks! 
", "unfortunately, i am having a similar problem with the irc_disentaglement dataset :/\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\n\r\nhowever, it produces the same error as @afcruzs-ms \r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\n\r\nI attempted to use the `ignore_verifications' as such:\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\n```\r\n```\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗", "Thanks @labouz for reporting: yes, better opening a new GitHub issue as you did. I'm addressing it:\r\n- #4376" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3807/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3807
https://github.com/huggingface/datasets/issues/3807
false
1,157,505,826
https://api.github.com/repos/huggingface/datasets/issues/3806/labels{/name}
This PR fixes the URL for Spanish data file. Previously, Spanish had the same URL as Vietnamese data file.
2022-03-03T08:38:17Z
3,806
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-02T17:43:42Z
https://api.github.com/repos/huggingface/datasets/issues/3806/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3806/timeline
Fix Spanish data file URL in wiki_lingua dataset
https://api.github.com/repos/huggingface/datasets/issues/3806/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-03T08:38:16Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3806.diff", "html_url": "https://github.com/huggingface/datasets/pull/3806", "merged_at": "2022-03-03T08:38:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3806.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3806" }
PR_kwDODunzps4z2FeI
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3806/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3806
https://github.com/huggingface/datasets/pull/3806
true
1,157,454,884
https://api.github.com/repos/huggingface/datasets/issues/3805/labels{/name}
This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it.
2022-03-07T12:13:36Z
3,805
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-02T16:58:34Z
https://api.github.com/repos/huggingface/datasets/issues/3805/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3805/timeline
Remove decode: true for image feature in head_qa
https://api.github.com/repos/huggingface/datasets/issues/3805/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/417568?v=4", "events_url": "https://api.github.com/users/craffel/events{/privacy}", "followers_url": "https://api.github.com/users/craffel/followers", "following_url": "https://api.github.com/users/craffel/following{/other_user}", "gists_url": "https://api.github.com/users/craffel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/craffel", "id": 417568, "login": "craffel", "node_id": "MDQ6VXNlcjQxNzU2OA==", "organizations_url": "https://api.github.com/users/craffel/orgs", "received_events_url": "https://api.github.com/users/craffel/received_events", "repos_url": "https://api.github.com/users/craffel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/craffel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/craffel/subscriptions", "type": "User", "url": "https://api.github.com/users/craffel" }
[]
null
null
CONTRIBUTOR
2022-03-07T12:13:35Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3805.diff", "html_url": "https://github.com/huggingface/datasets/pull/3805", "merged_at": "2022-03-07T12:13:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/3805.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3805" }
PR_kwDODunzps4z16os
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3805/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3805
https://github.com/huggingface/datasets/pull/3805
true
1,157,297,278
https://api.github.com/repos/huggingface/datasets/issues/3804/labels{/name}
**Is your feature request related to a problem? Please describe.** The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line boundaries. Not all of them are always wanted. **Describe the solution you'd like** ```python if self.config.sample_by == "line": batch_idx = 0 while True: batch = f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line if self.config.custom_newline is None: batch = batch.splitlines(keepends=self.config.keep_linebreaks) else: batch = batch.split(self.config.custom_newline)[:-1] pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield (file_idx, batch_idx), pa_table batch_idx += 1 ``` **A clear and concise description of what you want to happen.** Creating the dataset rows with a subset of the `splitlines()` line boundaries.
2022-03-16T15:53:59Z
3,804
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
2022-03-02T14:50:16Z
https://api.github.com/repos/huggingface/datasets/issues/3804/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3804/timeline
Text builder with custom separator line boundaries
https://api.github.com/repos/huggingface/datasets/issues/3804/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cronoik", "id": 18630848, "login": "cronoik", "node_id": "MDQ6VXNlcjE4NjMwODQ4", "organizations_url": "https://api.github.com/users/cronoik/orgs", "received_events_url": "https://api.github.com/users/cronoik/received_events", "repos_url": "https://api.github.com/users/cronoik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "type": "User", "url": "https://api.github.com/users/cronoik" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
null
NONE
null
null
I_kwDODunzps5E-vR-
[ "Gently pinging @lhoestq", "Hi ! Interresting :)\r\n\r\nCould you give more details on what kind of separators you would like to use instead ?", "In my case, I just want to use `\\n` but not `U+2028`.", "Ok I see, maybe there can be a `sep` parameter to allow users to specify what line/paragraph separator they'd like to use", "Related to:\r\n- #3729 \r\n- #3910", "Thanks for requesting this enhancement. We have recently found a somehow related issue with another dataset:\r\n- #3704\r\n\r\nLet me make a PR proposal." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3804/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3804
https://github.com/huggingface/datasets/issues/3804
false
1,157,271,679
https://api.github.com/repos/huggingface/datasets/issues/3803/labels{/name}
This PR removes the following deprecated methos/params: * `Dataset.cast_`/`DatasetDict.cast_` * `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_` * `Dataset.remove_columns_`/`DatasetDict.remove_columns_` * `Dataset.rename_columns_`/`DatasetDict.rename_columns_` * `prepare_module` * param `script_version` in `load_dataset`/`load_metric` * param `version` in `hf_github_url`
2022-03-02T14:53:21Z
3,803
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-02T14:29:12Z
https://api.github.com/repos/huggingface/datasets/issues/3803/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3803/timeline
Remove deprecated methods/params (preparation for v2.0)
https://api.github.com/repos/huggingface/datasets/issues/3803/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-03-02T14:53:21Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3803.diff", "html_url": "https://github.com/huggingface/datasets/pull/3803", "merged_at": "2022-03-02T14:53:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3803" }
PR_kwDODunzps4z1T48
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3803/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3803
https://github.com/huggingface/datasets/pull/3803
true
1,157,009,964
https://api.github.com/repos/huggingface/datasets/issues/3802/labels{/name}
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing** We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. *Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Letizia, Sebastian Felix Schwemer, Anders Søgaard. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. 2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.* Note: Please review this initial commit, and I'll update the publication link, once I'll have the ArXived version. Thanks!
2022-03-02T15:21:10Z
3,802
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-02T10:40:18Z
https://api.github.com/repos/huggingface/datasets/issues/3802/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3802/timeline
Release of FairLex dataset
https://api.github.com/repos/huggingface/datasets/issues/3802/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliaschalkidis", "id": 1626984, "login": "iliaschalkidis", "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "type": "User", "url": "https://api.github.com/users/iliaschalkidis" }
[]
null
null
CONTRIBUTOR
2022-03-02T15:18:54Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3802.diff", "html_url": "https://github.com/huggingface/datasets/pull/3802", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3802" }
PR_kwDODunzps4z0biM
[ "This is awesome ! The dataset card and the dataset script look amazing :)\r\n\r\nI wanted to ask you if you'd be interested to have this dataset under the namespace of you research group at https://huggingface.co/coastalcph ? If yes, then you can actually create a dataset repository under your research group name and upload the files from this PR there", "Hi @lhoestq,\r\n\r\nYeah, I could do that. I see that people do that a lot of models, but not for datasets. \r\n\r\nIs there any good reason to have it under the organization domain instead of the general domain?\r\n\r\n Thanks!", "It's nice to have it under your namespace:\r\n- it will appear on your research group page, along with your models\r\n- you can edit or create datasets at any time - you don't need to open PRs on GitHub\r\n\r\nAll the datasets that are not under a namespace are this way because we started adding datasets from GitHub. Now we encourage users to upload them directly to make things simpler, and aligned with the workflow for models\r\n\r\n(the documentation will be updated in the following days)\r\n\r\nNote that we will keep accepting PRs here though when there is no clear namespace under which a dataset should be, or for users that want a review from us", "Ok, I'll do that. So, I'll just have to upload all the files under the `/fairlex` directory in my PR, right?", "Yes exactly !", "Ok, I uploaded most of them from the UI environment (https://huggingface.co/datasets/coastalcph/fairlex). Can I possibly upload the dummy data as well from the UI environment. I really want to avoid the CLI right now 😄 ", "Yea sure, feel free to use the UI of the website, even for the dummy data ^^", "Did you upload them yourself? Because I see the data preview, and I'm pretty sure, I didn't do that 😄 ", "The preview is computed from the real data ;)\r\n\r\nThe dummy data are used for testing only", "Haha, ok I was shocked! Cool, I close this PR, then. Thanks, again! ", "Thank you 🤗" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3802/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3802
https://github.com/huggingface/datasets/pull/3802
true
1,155,649,279
https://api.github.com/repos/huggingface/datasets/issues/3801/labels{/name}
Currently the datasets in streaming mode and in non-streaming mode have two distinct APIs for `map` processing. In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0** In particular, `Dataset.map` adds new columns (with dict.update) BUT `IterableDataset.map` used to discard previous columns (it overwrites the dict). In this PR I'm changing the `IterableDataset.map` to behave the same way as `Dataset.map`: it will update the examples instead of overwriting them. I'm also adding those missing parameters to streaming `map`: with_indices, input_columns, remove_columns ### TODO - [x] tests - [x] docs Related to https://github.com/huggingface/datasets/issues/3444
2022-03-07T16:30:30Z
3,801
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-01T18:06:43Z
https://api.github.com/repos/huggingface/datasets/issues/3801/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3801/timeline
[Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters
https://api.github.com/repos/huggingface/datasets/issues/3801/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-07T16:30:29Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3801.diff", "html_url": "https://github.com/huggingface/datasets/pull/3801", "merged_at": "2022-03-07T16:30:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3801.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3801" }
PR_kwDODunzps4zvqjN
[ "Right ! Will add it in another PR :)" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3801/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3801
https://github.com/huggingface/datasets/pull/3801
true
1,155,620,761
https://api.github.com/repos/huggingface/datasets/issues/3800/labels{/name}
Previous PR was in my fork so thought it'd be easier if I do it from a branch. Added computer vision task datasets according to HF tasks.
2022-03-04T07:15:55Z
3,800
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-01T17:37:46Z
https://api.github.com/repos/huggingface/datasets/issues/3800/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3800/timeline
Added computer vision tasks
https://api.github.com/repos/huggingface/datasets/issues/3800/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/merveenoyan", "id": 53175384, "login": "merveenoyan", "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "repos_url": "https://api.github.com/users/merveenoyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "type": "User", "url": "https://api.github.com/users/merveenoyan" }
[]
null
null
CONTRIBUTOR
2022-03-04T07:15:55Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3800.diff", "html_url": "https://github.com/huggingface/datasets/pull/3800", "merged_at": "2022-03-04T07:15:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/3800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3800" }
PR_kwDODunzps4zvkjA
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3800/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3800
https://github.com/huggingface/datasets/pull/3800
true
1,155,356,102
https://api.github.com/repos/huggingface/datasets/issues/3799/labels{/name}
**Added datasets (TODO)**: - [x] MLS - [x] Covost2 - [x] Minds-14 - [x] Voxpopuli - [x] FLoRes (need data) **Metrics**: Done
2022-03-16T14:40:29Z
3,799
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-03-01T13:42:28Z
https://api.github.com/repos/huggingface/datasets/issues/3799/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3799/timeline
Xtreme-S Metrics
https://api.github.com/repos/huggingface/datasets/issues/3799/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
null
null
CONTRIBUTOR
2022-03-16T14:40:26Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3799.diff", "html_url": "https://github.com/huggingface/datasets/pull/3799", "merged_at": "2022-03-16T14:40:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/3799.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3799" }
PR_kwDODunzps4zus9R
[ "@lhoestq - if you could take a final review here this would be great (if you have 5min :-) ) ", "Don't think the failures are related but not 100% sure", "Yes the CI fail is unrelated - you can ignore it" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3799/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3799
https://github.com/huggingface/datasets/pull/3799
true
1,154,411,066
https://api.github.com/repos/huggingface/datasets/issues/3798/labels{/name}
Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this: ```python csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f ``` CC: @SBrandeis
2022-02-28T18:51:39Z
3,798
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-28T18:24:10Z
https://api.github.com/repos/huggingface/datasets/issues/3798/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3798/timeline
Fix error message in CSV loader for newer Pandas versions
https://api.github.com/repos/huggingface/datasets/issues/3798/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-02-28T18:51:38Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3798.diff", "html_url": "https://github.com/huggingface/datasets/pull/3798", "merged_at": "2022-02-28T18:51:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/3798.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3798" }
PR_kwDODunzps4zrl5Y
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3798/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3798
https://github.com/huggingface/datasets/pull/3798
true
1,154,383,063
https://api.github.com/repos/huggingface/datasets/issues/3797/labels{/name}
Description tags for webis-tldr-17 added.
2023-03-09T22:08:58Z
3,797
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-28T17:53:18Z
https://api.github.com/repos/huggingface/datasets/issues/3797/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3797/timeline
Reddit dataset card contribution
https://api.github.com/repos/huggingface/datasets/issues/3797/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4", "events_url": "https://api.github.com/users/anna-kay/events{/privacy}", "followers_url": "https://api.github.com/users/anna-kay/followers", "following_url": "https://api.github.com/users/anna-kay/following{/other_user}", "gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anna-kay", "id": 56791604, "login": "anna-kay", "node_id": "MDQ6VXNlcjU2NzkxNjA0", "organizations_url": "https://api.github.com/users/anna-kay/orgs", "received_events_url": "https://api.github.com/users/anna-kay/received_events", "repos_url": "https://api.github.com/users/anna-kay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions", "type": "User", "url": "https://api.github.com/users/anna-kay" }
[]
null
null
CONTRIBUTOR
2022-03-01T12:58:57Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3797.diff", "html_url": "https://github.com/huggingface/datasets/pull/3797", "merged_at": "2022-03-01T12:58:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3797.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3797" }
PR_kwDODunzps4zrgAD
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3797/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3797
https://github.com/huggingface/datasets/pull/3797
true
1,154,298,629
https://api.github.com/repos/huggingface/datasets/issues/3796/labels{/name}
This will speed up the loading of the datasets where the number of data files is large (can easily happen with `imagefolder`, for instance)
2022-02-28T17:03:46Z
3,796
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-28T16:28:45Z
https://api.github.com/repos/huggingface/datasets/issues/3796/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3796/timeline
Skip checksum computation if `ignore_verifications` is `True`
https://api.github.com/repos/huggingface/datasets/issues/3796/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-02-28T17:03:46Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3796.diff", "html_url": "https://github.com/huggingface/datasets/pull/3796", "merged_at": "2022-02-28T17:03:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3796.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3796" }
PR_kwDODunzps4zrOQ4
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3796/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3796
https://github.com/huggingface/datasets/pull/3796
true
1,153,261,281
https://api.github.com/repos/huggingface/datasets/issues/3795/labels{/name}
## Describe the bug after downloading the natural_questions dataset, can not flatten the dataset considering there are `long answer` and `short answer` in `annotations`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir') dataset['train'].flatten() ``` ## Expected results a dataset with `long_answer` as features ## Actual results Traceback (most recent call last): File "temp.py", line 5, in <module> dataset['train'].flatten() File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper out = func(self, *args, **kwargs) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten dataset._data = update_metadata_with_features(dataset._data, dataset.features) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features features = Features({col_name: features[col_name] for col_name in table.column_names}) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp> features = Features({col_name: features[col_name] for col_name in table.column_names}) KeyError: 'annotations.long_answer' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.13 - Platform: MBP - Python version: 3.8 - PyArrow version: 6.0.1
2022-03-21T14:36:12Z
3,795
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-27T13:57:40Z
https://api.github.com/repos/huggingface/datasets/issues/3795/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/3795/timeline
can not flatten natural_questions dataset
https://api.github.com/repos/huggingface/datasets/issues/3795/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hannibal046", "id": 38466901, "login": "Hannibal046", "node_id": "MDQ6VXNlcjM4NDY2OTAx", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "repos_url": "https://api.github.com/users/Hannibal046/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "type": "User", "url": "https://api.github.com/users/Hannibal046" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
completed
NONE
2022-03-21T14:36:12Z
null
I_kwDODunzps5EvV7h
[ "same issue. downgrade it to a lower version.", "Thanks for reporting, I'll take a look tomorrow :)" ]
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3795/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3795
https://github.com/huggingface/datasets/issues/3795
false
1,153,185,343
https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name}
Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P. In this PR I implement the metric in a simple way with the help of numpy only. Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurize model, if that is desirable.
2022-03-02T14:46:15Z
3,794
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-27T10:56:31Z
https://api.github.com/repos/huggingface/datasets/issues/3794/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3794/timeline
Add Mahalanobis distance metric
https://api.github.com/repos/huggingface/datasets/issues/3794/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4", "events_url": "https://api.github.com/users/JoaoLages/events{/privacy}", "followers_url": "https://api.github.com/users/JoaoLages/followers", "following_url": "https://api.github.com/users/JoaoLages/following{/other_user}", "gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoaoLages", "id": 17574157, "login": "JoaoLages", "node_id": "MDQ6VXNlcjE3NTc0MTU3", "organizations_url": "https://api.github.com/users/JoaoLages/orgs", "received_events_url": "https://api.github.com/users/JoaoLages/received_events", "repos_url": "https://api.github.com/users/JoaoLages/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions", "type": "User", "url": "https://api.github.com/users/JoaoLages" }
[]
null
null
CONTRIBUTOR
2022-03-02T14:46:15Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3794.diff", "html_url": "https://github.com/huggingface/datasets/pull/3794", "merged_at": "2022-03-02T14:46:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3794.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3794" }
PR_kwDODunzps4zniT4
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3794
https://github.com/huggingface/datasets/pull/3794
true
1,150,974,950
https://api.github.com/repos/huggingface/datasets/issues/3793/labels{/name}
Removes the need to have a self-hosted runner for the dev documentation
2022-03-01T15:55:29Z
3,793
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-25T23:48:55Z
https://api.github.com/repos/huggingface/datasets/issues/3793/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3793/timeline
Docs new UI actions no self hosted
https://api.github.com/repos/huggingface/datasets/issues/3793/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[]
null
null
MEMBER
2022-03-01T15:55:28Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3793.diff", "html_url": "https://github.com/huggingface/datasets/pull/3793", "merged_at": "2022-03-01T15:55:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/3793.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3793" }
PR_kwDODunzps4zfdL0
[ "It seems like the doc can't be compiled right now because of the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 33, in <module>\r\n sys.exit(load_entry_point('doc-builder', 'console_scripts', 'doc-builder')())\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/commands/doc_builder_cli.py\", line 39, in main\r\n args.func(args)\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/commands/build.py\", line 95, in build_command\r\n build_doc(\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/build_doc.py\", line 361, in build_doc\r\n anchors_mapping = build_mdx_files(package, doc_folder, output_dir, page_info)\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/build_doc.py\", line 200, in build_mdx_files\r\n raise type(e)(f\"There was an error when converting {file} to the MDX format.\\n\" + e.args[0]) from e\r\nTypeError: There was an error when converting datasets/docs/source/package_reference/table_classes.mdx to the MDX format.\r\nexpected string or bytes-like object\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3793). All of your documentation changes will be reflected on that endpoint.", "This is due to the injection of docstrings from PyArrow. I think I can fix that by moving all the docstrings and fix them manually.", "> It seems like the doc can't be compiled right now because of the following:\r\n\r\nit is expected since there is something I need to change on doc-builder side.\r\n\r\n> This is due to the injection of docstrings from PyArrow. I think I can fix that by moving all the docstrings and fix them manually.\r\n\r\n@lhoestq I will let you know if we need to change it manually.\r\n\r\n@LysandreJik thanks a lot for this PR! I only had one question [here](https://github.com/huggingface/datasets/pull/3793#discussion_r816100194)", "> @lhoestq I will let you know if we need to change it manually.\r\n\r\nIt would be simpler to change it manually anyway - I don't want our documentation to break if PyArrow has documentation issues", "For some reason it fails when `Installing node dependencies` when running `npm ci` from the `kit` directory, any idea why @mishig25 ?", "Checking it rn", "It's very likely linked to an OOM error: https://github.com/huggingface/transformers/pull/15710#issuecomment-1051737337" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3793/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3793
https://github.com/huggingface/datasets/pull/3793
true
1,150,812,404
https://api.github.com/repos/huggingface/datasets/issues/3792/labels{/name}
## Dataset viewer issue for 'wiki_lingua*' **Link:** *link to the dataset viewer page* `data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")` *short description of the issue* ``` [NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff']]() ``` Am I the one who added this dataset? No
2024-03-13T12:25:08Z
3,792
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
2022-02-25T19:55:09Z
https://api.github.com/repos/huggingface/datasets/issues/3792/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3792/timeline
Checksums didn't match for dataset source
https://api.github.com/repos/huggingface/datasets/issues/3792/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/13174842?v=4", "events_url": "https://api.github.com/users/rafikg/events{/privacy}", "followers_url": "https://api.github.com/users/rafikg/followers", "following_url": "https://api.github.com/users/rafikg/following{/other_user}", "gists_url": "https://api.github.com/users/rafikg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rafikg", "id": 13174842, "login": "rafikg", "node_id": "MDQ6VXNlcjEzMTc0ODQy", "organizations_url": "https://api.github.com/users/rafikg/orgs", "received_events_url": "https://api.github.com/users/rafikg/received_events", "repos_url": "https://api.github.com/users/rafikg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rafikg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafikg/subscriptions", "type": "User", "url": "https://api.github.com/users/rafikg" }
[]
null
completed
NONE
2022-02-28T08:44:18Z
null
I_kwDODunzps5EmAD0
[ "Same issue with `dataset = load_dataset(\"dbpedia_14\")`\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']", "I think this is a side-effect of #3787. The checksums won't match because the URLs have changed. @rafikg @Y0mingZhang, while this is fixed, maybe you can load the datasets as such:\r\n\r\n`data = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", ignore_verifications=True)`\r\n`dataset = load_dataset(\"dbpedia_14\", ignore_verifications=True)`\r\n\r\nThis will, most probably, skip the verifications and integrity checks listed [here](https://huggingface.co/docs/datasets/loading_datasets.html#integrity-verifications)", "Hi! Installing the `datasets` package from master (`pip install git+https://github.com/huggingface/datasets.git`) and then redownloading the datasets with `download_mode` set to `force_redownload` (e.g. `dataset = load_dataset(\"dbpedia_14\", download_mode=\"force_redownload\")`) should fix the issue.", "Hi @rafikg and @Y0mingZhang, thanks for reporting.\r\n\r\nIndeed it seems that Google Drive changed their way to access their data files. We have recently handled that change:\r\n- #3787\r\n\r\nbut it will be accessible to users only in our next release of the `datasets` version.\r\n- Note that our latest release (version 1.18.3) was made before this fix: https://github.com/huggingface/datasets/releases/tag/1.18.3\r\n\r\nIn the meantime, as @mariosasko explained, you can incorporate this \"fix\" by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, you should force the redownload of the data (before the fix, you are just downloading/caching the virus scan warning page, instead of the data file):\r\n```shell\r\ndata = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", download_mode=\"force_redownload\")", "@albertvillanova by running:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\ndata = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", download_mode=\"force_redownload\", ignore_verifications=True)\r\n```\r\n\r\nI had a pickle error **UnpicklingError: invalid load key, '<'** in this part of code both `locally and on google colab`:\r\n\r\n```\r\n\"\"\"Yields examples.\"\"\"\r\nwith open(filepath, \"rb\") as f:\r\n data = pickle.load(f)\r\nfor id_, row in enumerate(data.items()):\r\n yield id_, {\"url\": row[0], \"article\": self._process_article(row[1])}\r\n```\r\n", "This issue impacts many more datasets than the ones mention in this thread. Can we post # of downloads for each dataset by day (by successes and failures)? 
If so, it should be obvious which ones are failing.", "I can see this problem too in xcopa, unfortunately installing the latest master (1.18.4.dev0) doesn't work, @albertvillanova .\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"xcopa\", \"it\")\r\n```\r\n\r\nThrows\r\n\r\n```\r\nin verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/cambridgeltl/xcopa/archive/master.zip']\r\n```", "Hi @rafikg, I think that is another different issue. Let me check it... \r\n\r\nI guess maybe you are using a different Python version that the one the dataset owner used to create the pickle file...", "@kwchurch the datasets impacted for this specific issue are the ones which are hosted at Google Drive.", "@afcruzs-ms I think your issue is a different one, because that dataset is not hosted at Google Drive. Would you mind open another issue for that other problem, please? Thanks! :)", "@albertvillanova just to let you know that I tried it locally and on colab and it is the same error", "There are many many datasets on HugggingFace that are receiving this checksum error. Some of these datasets are very popular. There must be a way to track these errors, or to do regression testing. We don't want to catch each of these errors on each dataset, one at a time.", "@rafikg I am sorry, but I can't reproduce your issue. For me it works OK for all languages. See: https://colab.research.google.com/drive/1yIcLw1it118-TYE3ZlFmV7gJcsF6UCsH?usp=sharing", "@kwchurch the PR #3787 fixes this issue (generated by a change in Google Drive service) for ALL datasets with this issue. Once we make our next library release (in a couple of days), the fix will be accessible to all users that update our library from PyPI.", "By the way, @rafikg, I discovered the URL for Spanish was wrong. I've created a PR to fix it:\r\n- #3806 ", "I have the same problem with \"wider_face\" dataset. It seems that \"load_dataset\" function can not download the dataset from google drive.\r\n", "still getting this issue with datasets==2.2.2 for \r\ndataset_fever_original_dev = load_dataset('fever', \"v1.0\", split=\"labelled_dev\")\r\n(this one seems to be hosted by aws though)\r\n\r\nupdate: also tried to install from source to get the latest 2.2.3.dev0, but still get the error below (and also force-redownloaded)\r\n\r\nupdate2: Seems like this issues is linked to a change in the links in the specific fever datasets: https://fever.ai/\r\n\"28/04/2022\r\nDataset download URLs have changed\r\nDownload URLs for shared task data for FEVER, FEVER2.0 and FEVEROUS have been updated. New URLS begin with https://fever.ai/download/[task name]/[filename]. All resource pages have been updated with the new URLs. Previous dataset URLs may not work and should be updated if you require these in your scripts. 
\"\r\n\r\n=> I don't know how to update the links for HF datasets - would be great if someone could update them :) \r\n\r\n```\r\n\r\nDownloading and preparing dataset fever/v1.0 (download: 42.78 MiB, generated: 38.39 MiB, post-processed: Unknown size, total: 81.17 MiB) to /root/.cache/huggingface/datasets/fever/v1.0/1.0.0/956b0a9c4b05e126fd956be73e09da5710992b5c85c30f0e5e1c500bc6051d0a...\r\n\r\nDownloading data files: 100%\r\n6/6 [00:07<00:00, 1.21s/it]\r\nDownloading data:\r\n278/? [00:00<00:00, 2.34kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 1.53kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 7.43kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 5.54kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 6.19kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 7.51kB/s]\r\nExtracting data files: 100%\r\n6/6 [00:00<00:00, 108.05it/s]\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n\r\n[<ipython-input-20-92ec5c728ecf>](https://localhost:8080/#) in <module>()\r\n 27 # get labels for fever-nli-dev from original fever - only works for dev\r\n 28 # \"(The labels for both dev and test are hidden but you can retrieve the label for dev using the cid and the original FEVER data.)\"\" https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md\r\n---> 29 dataset_fever_original_dev = load_dataset('fever', \"v1.0\", split=\"labelled_dev\")\r\n 30 df_fever_original_dev = pd.DataFrame(data={\"id\": dataset_fever_original_dev[\"id\"], \"label\": dataset_fever_original_dev[\"label\"], \"claim\": dataset_fever_original_dev[\"claim\"], \"evidence_id\": dataset_fever_original_dev[\"evidence_id\"]})\r\n 31 df_fever_dev = pd.merge(df_fever_dev, df_fever_original_dev, how=\"left\", left_on=\"cid\", right_on=\"id\")\r\n\r\n4 frames\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl']\r\n```\r\n", "I think this has to be fixed on the google drive side, but you also have to delete the bad stuff from your local cache. This is not a great design, but it is what it is.", "We have fixed the issues with the datasets:\r\n- wider_face: by hosting their data files on the HuggingFace Hub (CC: @HosseynGT)\r\n- fever: by updating to their new data URLs (CC: @MoritzLaurer)", "The yelp_review_full datasets has this problem as well and can't be fixed with the suggestion.", "This is a super-common failure mode. We really need to find a better workaround. 
My solution was to wait until the owner of the dataset in question did the right thing, and then I had to delete my cached versions of the datasets with the bad checksums. I don't understand why this happens. Would it be possible to maintain a copy of the most recent version that was known to work, and roll back to that automatically if the checksums fail? And if the checksums fail, couldn't the system automatically flush the cached versions with the bad checksums? It feels like we are blaming the provider of the dataset, when in fact, there are things that the system could do to ease the pain. Let's take these error messages seriously. There are too many of them involving too many different datasets.", "the [exams](https://huggingface.co/datasets/exams) dataset also has this issue and the provided fix above doesn't work", "Same for [DART dataset](https://huggingface.co/datasets/dart):\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-train.json', 'https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-dev.json', 'https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-test.json']\r\n```", "same for multi_news dataset", "- @thesofakillers the issue with `exams` was fixed on 16 Aug by this PR:\r\n - #4853\r\n- @Aktsvigun the issue with `dart` has been transferred to the Hub: https://huggingface.co/datasets/dart/discussions/1\r\n - and fixed by PR: https://huggingface.co/datasets/dart/discussions/2\r\n- @Carol-gutianle the issue with `multi_news` have been transferred to the Hub as well: https://huggingface.co/datasets/multi_news/discussions/1\r\n - not reproducible: maybe you should try to update `datasets`\r\n\r\nFor information to everybody, we are removing the checksum verifications (that were creating a bad user experience). This will be in place in the following weeks.", "auto_gptq is required for real quantization\r\n['/home/sam/Doctorproject/OmniQuant-main/main.py', '--model', '/home/sam/Doctorproject/OmniQuant-main/PATH/TO/LLaMA/llama-7b/', '--epochs', '20', '--output_dir', '/home/sam/Doctorproject/OmniQuant-main/outdir/llama-7b-w3a16/', '--eval_ppl', '--wbits', '3', '--abits', '16', '--lwc', '--net', 'llama-7b', '--aug_loss']\r\n[2024-03-13 17:58:48 root](main.py 262): INFO Namespace(model='/home/sam/Doctorproject/OmniQuant-main/PATH/TO/LLaMA/llama-7b/', cache_dir='./cache', output_dir='/home/sam/Doctorproject/OmniQuant-main/outdir/llama-7b-w3a16/', save_dir=None, resume=None, real_quant=False, calib_dataset='wikitext2', nsamples=128, batch_size=1, seed=2, tasks='', eval_ppl=True, num_fewshot=0, wbits=3, abits=16, group_size=None, alpha=0.5, let_lr=0.005, lwc_lr=0.01, wd=0, epochs=20, let=False, lwc=True, aug_loss=True, symmetric=False, disable_zero_point=False, a_dynamic_method='per_token', w_dynamic_method='per_channel', limit=-1, multigpu=False, deactive_amp=False, attn_implementation='eager', net='llama-7b', act_scales=None, act_shifts=None)\r\nLoading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s]/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\nLoading checkpoint shards: 100%|██████████| 33/33 [00:11<00:00, 2.98it/s]\r\nvocab size: 32000\r\n[2024-03-13 17:58:59 root](main.py 331): INFO === start quantization ===\r\nget_wikitext2\r\n[2024-03-13 18:02:20 datasets.load](load.py 1586): WARNING Using the latest cached version of the module from /home/sam/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 (last modified on Wed Mar 13 16:54:26 2024) since it couldn't be found locally at wikitext, or remotely on the Hugging Face Hub.\r\nUsing the latest cached version of the module from /home/sam/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 (last modified on Wed Mar 13 16:54:26 2024) since it couldn't be found locally at wikitext, or remotely on the Hugging Face Hub.\r\nDownloading data: 243B [00:00, 877kB/s]\r\nGenerating test split: 0%| | 0/4358 [00:00<?, ? examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1742, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/sam/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/wikitext.py\", line 187, in _generate_examples\r\n with open(data_file, encoding=\"utf-8\") as f:\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/streaming.py\", line 75, in wrapper\r\n return function(*args, download_config=download_config, **kwargs)\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 507, in xopen\r\n return open(main_hop, mode, *args, **kwargs)\r\nNotADirectoryError: [Errno 20] Not a directory: '/home/sam/.cache/huggingface/datasets/downloads/94be2a7b3fff32ae7379658c8d3821035b666baddad3a06d29b55ab3a4ab3115/wikitext-2-raw/wiki.test.raw'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/sam/Doctorproject/OmniQuant-main/main.py\", line 382, in <module>\r\n main()\r\n File \"/home/sam/Doctorproject/OmniQuant-main/main.py\", line 339, in main\r\n dataloader, _ = get_loaders(\r\n File \"/home/sam/Doctorproject/OmniQuant-main/datautils.py\", line 178, in get_loaders\r\n return get_wikitext2(nsamples, seed, seqlen, model)\r\n File \"/home/sam/Doctorproject/OmniQuant-main/datautils.py\", line 37, in get_wikitext2\r\n traindata = load_dataset(path='wikitext', name='wikitext-2-raw-v1', split='train', download_mode=\"force_redownload\")\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/load.py\", line 2598, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1021, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1783, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1116, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 
File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1621, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1778, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n\r\n\r\n@albertvillanova @Y0mingZhang @kwchurch @HosseynGT @rafikg I tried the solutions you provided above, but none of them worked. Could you please give me some guidance\r\n" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3792/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3792
https://github.com/huggingface/datasets/issues/3792
false
1,150,733,475
https://api.github.com/repos/huggingface/datasets/issues/3791/labels{/name}
As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Additionally, it fixes the issue with `HfFileSystem.isdir`, which would previously always return `False`, and aligns the path-handling logic in `HfFileSystem` with `fsspec.GitHubFileSystem`.
2022-03-01T13:10:43Z
3,791
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-25T18:26:35Z
https://api.github.com/repos/huggingface/datasets/issues/3791/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3791/timeline
Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem
https://api.github.com/repos/huggingface/datasets/issues/3791/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-03-01T13:10:42Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3791.diff", "html_url": "https://github.com/huggingface/datasets/pull/3791", "merged_at": "2022-03-01T13:10:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/3791.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3791" }
PR_kwDODunzps4zevU2
[]
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3791/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3791
https://github.com/huggingface/datasets/pull/3791
true
1,150,646,899
https://api.github.com/repos/huggingface/datasets/issues/3790/labels{/name}
I added the three scripts: - build_dev_documentation.yml - build_documentation.yml - delete_dev_documentation.yml I got them from `transformers` and made a few changes: - I removed the `transformers`-specific dependencies - I changed all the paths to be "datasets" instead of "transformers" - I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26) cc @LysandreJik @mishig25
2022-03-01T15:55:42Z
3,790
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-25T16:38:47Z
https://api.github.com/repos/huggingface/datasets/issues/3790/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3790/timeline
Add doc builder scripts
https://api.github.com/repos/huggingface/datasets/issues/3790/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-03-01T15:55:41Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3790.diff", "html_url": "https://github.com/huggingface/datasets/pull/3790", "merged_at": "2022-03-01T15:55:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/3790.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3790" }
PR_kwDODunzps4zedMa
[ "I think we're only missing the hosted runner to be configured for this repository and we should be good", "Regarding the self-hosted runner, I actually encourage using the approach defined here: https://github.com/huggingface/transformers/pull/15710, which doesn't leverage a self-hosted runner. This prevents queuing jobs, which is important when we expect several concurrent jobs.", "Opened a PR for that on your branch here: https://github.com/huggingface/datasets/pull/3793" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3790/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3790
https://github.com/huggingface/datasets/pull/3790
true
1,150,587,404
https://api.github.com/repos/huggingface/datasets/issues/3789/labels{/name}
This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using. About the conversion from title to URL, I found that apart from replacing blanks with underscores, some other special characters must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL Therefore, I have finally used the `urllib.parse.quote` function. This additionally percent-encodes non-ASCII characters, but Wikimedia docs say these are equivalent: > For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed. > [[%C3%80_propos_de_M%C3%A9ta]] > is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL > [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) > while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same. Fix #3398. CC: @geohci
2022-03-04T08:24:24Z
3,789
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-25T15:34:37Z
https://api.github.com/repos/huggingface/datasets/issues/3789/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3789/timeline
Add URL and ID fields to Wikipedia dataset
https://api.github.com/repos/huggingface/datasets/issues/3789/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-03-04T08:24:23Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3789.diff", "html_url": "https://github.com/huggingface/datasets/pull/3789", "merged_at": "2022-03-04T08:24:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3789.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3789" }
PR_kwDODunzps4zeQpx
[ "Do you think we have a dedicated branch for all the changes we want to do to wikipedia ? Then once everything looks good + we have preprocessed the main languages, we can merge it on the `master` branch", "Yes, @lhoestq, I agree with you.\r\n\r\nI have just created the dedicated branch [`update-wikipedia`](https://github.com/huggingface/datasets/tree/update-wikipedia). We can merge every PR (once validated) to that branch; once all changes are merged to that branch, we could create the preprocessed datasets and then merge the branch to master. ", "@lhoestq I guess you approve this PR?" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3789/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3789
https://github.com/huggingface/datasets/pull/3789
true
1,150,375,720
https://api.github.com/repos/huggingface/datasets/issues/3788/labels{/name}
## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as a VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`.
2022-02-28T11:22:22Z
3,788
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-25T12:11:39Z
https://api.github.com/repos/huggingface/datasets/issues/3788/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3788/timeline
Only-data dataset loaded unexpectedly as validation split
https://api.github.com/repos/huggingface/datasets/issues/3788/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
null
null
I_kwDODunzps5EkVco
[ "I see two options:\r\n1. drop the \"dev\" keyword since it can be considered too generic\r\n2. improve the pattern to something more reasonable, e.g. asking for a separator before and after \"dev\"\r\n```python\r\n[\"*[ ._-]dev[ ._-]*\", \"dev[ ._-]*\"]\r\n```\r\n\r\nI think 2. is nice. If we agree on this one we can even decide to require the separation for the other split keywords \"train\", \"test\" etc.", "Yes, I had something like that on mind: \"dev\" not being part of a word.\r\n```\r\n\"[^a-zA-Z]dev[^a-zA-Z]\"", "Is there a reason why we want that regex? It feels like something that'll still be an issue for some weird case. \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?", "The regex is needed as part of our effort to make datasets configurable without code. In particular we define some generic dataset repository structures that users can follow\r\n\r\n> ```\r\n> \"[^a-zA-Z]*dev[^a-zA-Z]*\"\r\n> ```\r\n\r\nunfortunately our glob doesn't support \"^\": \r\n\r\nhttps://github.com/fsspec/filesystem_spec/blob/3e739db7e53f5b408319dcc9d11e92bc1f938902/fsspec/spec.py#L465-L479", "> \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?\r\n\r\nAnd `my_dataset_dev.foo` would match the pattern, and we also have the same pattern but for the \"validation\" keyword so `my_dataset_validation.foo` would work too", "> The regex is needed as part of our effort to make datasets configurable without code\r\n\r\nThis feels like coding with the filename ^^'", "This is still much easier than having to write a full dataset script right ? :p" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3788/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3788
https://github.com/huggingface/datasets/issues/3788
false
1,150,235,569
https://api.github.com/repos/huggingface/datasets/issues/3787/labels{/name}
This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs. Fix #3786, fix #3784.
2022-03-04T20:43:32Z
3,787
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-25T09:35:12Z
https://api.github.com/repos/huggingface/datasets/issues/3787/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3787/timeline
Fix Google Drive URL to avoid Virus scan warning
https://api.github.com/repos/huggingface/datasets/issues/3787/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-02-25T11:56:35Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3787.diff", "html_url": "https://github.com/huggingface/datasets/pull/3787", "merged_at": "2022-02-25T11:56:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/3787.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3787" }
PR_kwDODunzps4zdE7b
[ "Thanks for this @albertvillanova!", "Once this PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```", "Thanks, that solved a bunch of problems we had downstream!\r\ncf. https://github.com/ElementAI/picard/issues/61" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3787/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3787
https://github.com/huggingface/datasets/pull/3787
true
1,150,233,067
https://api.github.com/repos/huggingface/datasets/issues/3786/labels{/name}
## Describe the bug Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself. See: - #3758 - #3773 - #3784
2022-03-03T09:25:59Z
3,786
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-25T09:32:23Z
https://api.github.com/repos/huggingface/datasets/issues/3786/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3786/timeline
Bug downloading Virus scan warning page from Google Drive URLs
https://api.github.com/repos/huggingface/datasets/issues/3786/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
MEMBER
2022-02-25T11:56:35Z
null
I_kwDODunzps5Ejynr
[ "Once the PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3786/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3786
https://github.com/huggingface/datasets/issues/3786
false
1,150,069,801
https://api.github.com/repos/huggingface/datasets/issues/3785/labels{/name}
This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets. So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ The new link now looks like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t Fixes #3784
2022-03-03T16:43:47Z
3,785
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-25T05:48:57Z
https://api.github.com/repos/huggingface/datasets/issues/3785/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3785/timeline
Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset)
https://api.github.com/repos/huggingface/datasets/issues/3785/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" }
[]
null
null
NONE
2022-03-03T14:03:37Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3785.diff", "html_url": "https://github.com/huggingface/datasets/pull/3785", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3785.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3785" }
PR_kwDODunzps4zciES
[ "Thank you, @albertvillanova!", "Got it. Thanks for explaining this, @albertvillanova!\r\n\r\n> On the other hand, the tests are not passing because the dummy data should also be fixed. Once done, this PR will be able to be merged into master.\r\n\r\nWill do this 👍", "Hi ! I think we need to fix the issue for every dataset. This can be done simply by fixing how we handle Google Drive links, see my comment https://github.com/huggingface/datasets/pull/3775#issuecomment-1050970157", "Hi @lhoestq! I think @albertvillanova has already fixed this in #3787", "Cool ! I missed this one :) thanks", "No problem!", "Hi, @AngadSethi, I think that once:\r\n- #3787 \r\n\r\nwas merged, issue:\r\n- #3784 \r\n\r\nwas also fixed.\r\n\r\nTherefore, I think this PR is no longer necessary. I'm closing it. Let me know if you agree.", "Yes, absolutely @albertvillanova! I agree :)" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3785/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3785
https://github.com/huggingface/datasets/pull/3785
true
1,150,057,955
https://api.github.com/repos/huggingface/datasets/issues/3784/labels{/name}
## Describe the bug I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening: - The dataset sits in Google Drive, and both the CNN and DM datasets are large. - Google is unable to scan the folder for viruses, **so the link which would originally download the dataset, now downloads the source code of this web page:** ![image](https://user-images.githubusercontent.com/58678541/155658435-c2f497d7-7601-4332-94b1-18a62dd96422.png) - **This leads to the following error**: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ``` ## Expected results That the dataset is downloaded and processed just like other datasets. ## Actual results Hit with this error: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
2022-03-03T14:05:17Z
3,784
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-25T05:24:47Z
https://api.github.com/repos/huggingface/datasets/issues/3784/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" }
https://api.github.com/repos/huggingface/datasets/issues/3784/timeline
Unable to Download CNN-Dailymail Dataset
https://api.github.com/repos/huggingface/datasets/issues/3784/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" } ]
null
completed
NONE
2022-03-03T14:05:17Z
null
I_kwDODunzps5EjH3j
[ "#self-assign", "@AngadSethi thanks for reporting and thanks for your PR!", "Glad to help @albertvillanova! Just fine-tuning the PR, will comment once I am able to get it up and running 😀", "Fixed by:\r\n- #3787" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3784/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3784
https://github.com/huggingface/datasets/issues/3784
false
1,149,256,744
https://api.github.com/repos/huggingface/datasets/issues/3783/labels{/name}
null
2022-02-24T16:01:40Z
3,783
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-24T12:58:15Z
https://api.github.com/repos/huggingface/datasets/issues/3783/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3783/timeline
Support passing str to iter_files
https://api.github.com/repos/huggingface/datasets/issues/3783/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-02-24T16:01:40Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3783.diff", "html_url": "https://github.com/huggingface/datasets/pull/3783", "merged_at": "2022-02-24T16:01:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3783.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3783" }
PR_kwDODunzps4zZ1jR
[ "@mariosasko it was indeed while reading that PR, that I remembered this change I wanted to do long ago... 😉" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3783/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3783
https://github.com/huggingface/datasets/pull/3783
true
1,148,994,022
https://api.github.com/repos/huggingface/datasets/issues/3782/labels{/name}
## 1. Case ``` dataset.map( batched=True, disable_nullable=True, ) ``` will raise the following error here https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516 `pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema` ## 2. Debugging ### 2.1 Tracing During `_map_single`, the following are called https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523 https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511 ### 2.2. Observation The problem is that, even after `table_cast`, `pa_table.schema != self._schema` `pa_table.schema` (before/after `table_cast`) ``` input_ids: list<item: int32> child 0, item: int32 ``` `self._schema` ``` input_ids: list<item: int32> not null child 0, item: int32 ``` ### 2.3. Reason https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121 Here we lose the nullability stored in `schema`, because it seems that `Features` is always nullable and doesn't store nullability. https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103 So, casting to a schema built from such `Features` loses nullability, and eventually causes the error of writing with a different schema. ## 3. Solution 1. Let `Features` store nullability. 2. Directly cast the table with the original schema instead of the schema from the converted `Features`. (this PR) 3. Don't `cast_table` when `write_table`
2022-03-03T14:54:39Z
3,782
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-24T08:23:07Z
https://api.github.com/repos/huggingface/datasets/issues/3782/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3782/timeline
Error when writing with a different schema, due to non-preservation of nullability
https://api.github.com/repos/huggingface/datasets/issues/3782/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
null
null
CONTRIBUTOR
2022-03-03T14:54:39Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3782.diff", "html_url": "https://github.com/huggingface/datasets/pull/3782", "merged_at": "2022-03-03T14:54:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3782.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3782" }
PR_kwDODunzps4zY-Xb
[ "Hi ! Thanks for reporting, indeed `disable_nullable` doesn't seem to be supported in this case. Maybe at one point we can have `disable_nullable` as a parameter of certain feature types" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3782/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3782
https://github.com/huggingface/datasets/pull/3782
true
1,148,599,680
https://api.github.com/repos/huggingface/datasets/issues/3781/labels{/name}
The changes proposed are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper & https://zenodo.org/record/1043504#.YhaKHpbQC38 It is indeed a Reddit dataset, but the name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps the dataset name should be updated as well. The task at which the dataset is aimed is abstractive summarization.
2022-02-28T18:00:40Z
3,781
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-23T21:29:16Z
https://api.github.com/repos/huggingface/datasets/issues/3781/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3781/timeline
Reddit dataset card additions
https://api.github.com/repos/huggingface/datasets/issues/3781/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4", "events_url": "https://api.github.com/users/anna-kay/events{/privacy}", "followers_url": "https://api.github.com/users/anna-kay/followers", "following_url": "https://api.github.com/users/anna-kay/following{/other_user}", "gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anna-kay", "id": 56791604, "login": "anna-kay", "node_id": "MDQ6VXNlcjU2NzkxNjA0", "organizations_url": "https://api.github.com/users/anna-kay/orgs", "received_events_url": "https://api.github.com/users/anna-kay/received_events", "repos_url": "https://api.github.com/users/anna-kay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions", "type": "User", "url": "https://api.github.com/users/anna-kay" }
[]
null
null
CONTRIBUTOR
2022-02-28T11:21:14Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3781.diff", "html_url": "https://github.com/huggingface/datasets/pull/3781", "merged_at": "2022-02-28T11:21:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3781.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3781" }
PR_kwDODunzps4zXr_O
[ "Hello! I added the tags and created a PR. Just to note, regarding the paperswithcode_id tag, that currently has the value \"reddit\"; the dataset described as reddit in paperswithcode is https://paperswithcode.com/dataset/reddit and it isn't the Webis-tldr-17. I could not find Webis-tldr-17 in paperswithcode neither in the Summarization category nor using the keywords reddit, webis, & tldr. I didn't change this tag." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3781/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3781
https://github.com/huggingface/datasets/pull/3781
true
1,148,186,272
https://api.github.com/repos/huggingface/datasets/issues/3780/labels{/name}
null
2022-03-04T19:04:29Z
3,780
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-23T14:44:17Z
https://api.github.com/repos/huggingface/datasets/issues/3780/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3780/timeline
Add ElkarHizketak v1.0 dataset
https://api.github.com/repos/huggingface/datasets/issues/3780/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/7646055?v=4", "events_url": "https://api.github.com/users/antxa/events{/privacy}", "followers_url": "https://api.github.com/users/antxa/followers", "following_url": "https://api.github.com/users/antxa/following{/other_user}", "gists_url": "https://api.github.com/users/antxa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/antxa", "id": 7646055, "login": "antxa", "node_id": "MDQ6VXNlcjc2NDYwNTU=", "organizations_url": "https://api.github.com/users/antxa/orgs", "received_events_url": "https://api.github.com/users/antxa/received_events", "repos_url": "https://api.github.com/users/antxa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/antxa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antxa/subscriptions", "type": "User", "url": "https://api.github.com/users/antxa" }
[]
null
null
CONTRIBUTOR
2022-03-04T19:04:29Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3780.diff", "html_url": "https://github.com/huggingface/datasets/pull/3780", "merged_at": "2022-03-04T19:04:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3780.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3780" }
PR_kwDODunzps4zWVSM
[ "I also filled some missing sections in the dataset card" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3780/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3780
https://github.com/huggingface/datasets/pull/3780
true
1,148,050,636
https://api.github.com/repos/huggingface/datasets/issues/3779/labels{/name}
Fix #3778.
2022-02-23T13:26:41Z
3,779
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-23T12:49:07Z
https://api.github.com/repos/huggingface/datasets/issues/3779/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3779/timeline
Update manual download URL in newsroom dataset
https://api.github.com/repos/huggingface/datasets/issues/3779/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-02-23T13:26:40Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3779.diff", "html_url": "https://github.com/huggingface/datasets/pull/3779", "merged_at": "2022-02-23T13:26:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/3779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3779" }
PR_kwDODunzps4zV4qr
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3779/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3779
https://github.com/huggingface/datasets/pull/3779
true
1,147,898,946
https://api.github.com/repos/huggingface/datasets/issues/3778/labels{/name}
Hello, I tried to download the **newsroom** dataset but it didn't work for me. It told me to **download it manually**, but the manual download link didn't work either: it shows some ad or something! If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your Google Drive link; it would be a great help! Thanks Darshan Tank
2022-02-23T17:05:04Z
3,778
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
2022-02-23T10:15:50Z
https://api.github.com/repos/huggingface/datasets/issues/3778/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3778/timeline
Not able to download dataset - "Newsroom"
https://api.github.com/repos/huggingface/datasets/issues/3778/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/61326242?v=4", "events_url": "https://api.github.com/users/Darshan2104/events{/privacy}", "followers_url": "https://api.github.com/users/Darshan2104/followers", "following_url": "https://api.github.com/users/Darshan2104/following{/other_user}", "gists_url": "https://api.github.com/users/Darshan2104/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Darshan2104", "id": 61326242, "login": "Darshan2104", "node_id": "MDQ6VXNlcjYxMzI2MjQy", "organizations_url": "https://api.github.com/users/Darshan2104/orgs", "received_events_url": "https://api.github.com/users/Darshan2104/received_events", "repos_url": "https://api.github.com/users/Darshan2104/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Darshan2104/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darshan2104/subscriptions", "type": "User", "url": "https://api.github.com/users/Darshan2104" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2022-02-23T13:26:40Z
null
I_kwDODunzps5Ea4xC
[ "Hi @Darshan2104, thanks for reporting.\r\n\r\nPlease note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.\r\n\r\nApparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.nlp.cornell.edu/newsroom/index.html\r\n- Download page: https://lil.nlp.cornell.edu/newsroom/download/index.html\r\n\r\nI'm fixing the link in our Datasets library.", "@albertvillanova Thanks for the solution and link you made my day!" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3778/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3778
https://github.com/huggingface/datasets/issues/3778
false
1,147,232,875
https://api.github.com/repos/huggingface/datasets/issues/3777/labels{/name}
I updated the source code and the documentation to start removing the "canonical datasets" logic. Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly. ### Changes - the documentation about dataset loading mentions the datasets on the Hub (no difference between canonical and community, since they all have their own repository now) - the documentation about adding a dataset doesn't explain the technical differences between canonical and community anymore, and only presents how to add a community dataset. There is still a small section at the bottom that mentions the datasets that are still on GitHub and redirects to the `ADD_NEW_DATASET.md` guide on GitHub about how to contribute a dataset to the `datasets` library - the code source doesn't mention "canonical" anymore anywhere. There is still a `GitHubDatasetModuleFactory` class that is left, but I updated the docstring to say that it will be eventually removed in favor of the `HubDatasetModuleFactory` classes that already exist Would love to have your feedbacks on this ! cc @julien-c @thomwolf @SBrandeis
2022-02-24T15:04:37Z
3,777
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-22T18:23:30Z
https://api.github.com/repos/huggingface/datasets/issues/3777/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3777/timeline
Start removing canonical datasets logic
https://api.github.com/repos/huggingface/datasets/issues/3777/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-02-24T15:04:36Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3777.diff", "html_url": "https://github.com/huggingface/datasets/pull/3777", "merged_at": "2022-02-24T15:04:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3777.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3777" }
PR_kwDODunzps4zTVrz
[ "I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?", "> I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?\r\n\r\nI added an explanation, let me know if it sounds good to you:\r\n\r\n```\r\nDatasets used to be hosted on our GitHub repository, but all datasets have now been migrated to the Hugging Face Hub.\r\nThe legacy GitHub datasets were added originally on our GitHub repository and therefore don't have a namespace: \"squad\", \"glue\", etc. unlike the other datasets that are named \"username/dataset_name\" or \"org/dataset_name\".\r\n```\r\n", "Thanks for the feedbacks ! Merging this now - if you have some comments I can take care of them in a subsequent PR\r\n\r\nI'll also take care of resolving the conflicts with https://github.com/huggingface/datasets/pull/3690" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3777/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3777
https://github.com/huggingface/datasets/pull/3777
true
1,146,932,871
https://api.github.com/repos/huggingface/datasets/issues/3776/labels{/name}
**Is your feature request related to a problem? Please describe.** The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32 GB). **Describe the solution you'd like** I would like to use the `data_files` argument in the `load_dataset` function to define which files of the wikipedia dataset I would like to download. That way, I can work with the dataset on a smaller machine using the Apache Beam `DirectRunner`. **Describe alternatives you've considered** I've tried to use the `simple` Wikipedia dataset, but it's in English and I would like to use Portuguese texts in my model.
2022-02-22T14:50:02Z
3,776
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
2022-02-22T13:46:41Z
https://api.github.com/repos/huggingface/datasets/issues/3776/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3776/timeline
Allow download only some files from the Wikipedia dataset
https://api.github.com/repos/huggingface/datasets/issues/3776/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4", "events_url": "https://api.github.com/users/jvanz/events{/privacy}", "followers_url": "https://api.github.com/users/jvanz/followers", "following_url": "https://api.github.com/users/jvanz/following{/other_user}", "gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jvanz", "id": 1514798, "login": "jvanz", "node_id": "MDQ6VXNlcjE1MTQ3OTg=", "organizations_url": "https://api.github.com/users/jvanz/orgs", "received_events_url": "https://api.github.com/users/jvanz/received_events", "repos_url": "https://api.github.com/users/jvanz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvanz/subscriptions", "type": "User", "url": "https://api.github.com/users/jvanz" }
[]
null
null
NONE
null
null
I_kwDODunzps5EXM6H
[ "Hi @jvanz, thank you for your proposal.\r\n\r\nIn fact, we are aware that it is very common the problem you mention. Because of that, we are currently working in implementing a new version of wikipedia on the Hub, with all data preprocessed (no need to use Apache Beam), from where you will be able to use `data_files` to load only a specific subset of the data files.\r\n\r\nSee:\r\n- #3401 " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3776/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3776
https://github.com/huggingface/datasets/issues/3776
false
1,146,849,454
https://api.github.com/repos/huggingface/datasets/issues/3775/labels{/name}
Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999
2022-02-28T11:35:24Z
3,775
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-22T12:27:16Z
https://api.github.com/repos/huggingface/datasets/issues/3775/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3775/timeline
Update gigaword card and info
https://api.github.com/repos/huggingface/datasets/issues/3775/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
null
null
CONTRIBUTOR
2022-02-28T11:35:24Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3775.diff", "html_url": "https://github.com/huggingface/datasets/pull/3775", "merged_at": "2022-02-28T11:35:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/3775.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3775" }
PR_kwDODunzps4zSEd4
[ "I think it actually comes from an issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/file_utils.py#L575-L579\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/streaming_download_manager.py#L386-L389\r\n\r\nThis code doesn't seem to work anymore. This can probably be fixed with\r\n\r\n```python\r\nif url.startswith(\"https://drive.google.com/\"): \r\n url += \"&confirm=t\"\r\n cookies = response.cookies \r\n```\r\n\r\nbecause Google Drive doesn't return the `download_warning` cookie anymore.", "Actually it seems that is has been fixed already in https://github.com/huggingface/datasets/pull/3787 :)\r\n\r\nI think it should have fixed the gigaword dataset loading", "@lhoestq The linked PR indeed fixes the issue. This PR is still worth merging IMO to update `gigaword`'s card." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3775/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3775
https://github.com/huggingface/datasets/pull/3775
true
1,146,843,177
https://api.github.com/repos/huggingface/datasets/issues/3774/labels{/name}
Fix #3773.
2022-02-22T12:38:45Z
3,774
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-22T12:21:15Z
https://api.github.com/repos/huggingface/datasets/issues/3774/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3774/timeline
Fix reddit_tifu data URL
https://api.github.com/repos/huggingface/datasets/issues/3774/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-02-22T12:38:44Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3774.diff", "html_url": "https://github.com/huggingface/datasets/pull/3774", "merged_at": "2022-02-22T12:38:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/3774.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3774" }
PR_kwDODunzps4zSDHC
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3774/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3774
https://github.com/huggingface/datasets/pull/3774
true
1,146,758,335
https://api.github.com/repos/huggingface/datasets/issues/3773/labels{/name}
## Describe the bug A checksum mismatch occurs when downloading the reddit_tifu data (both long & short). ## Steps to reproduce the bug reddit_tifu_dataset = load_dataset('reddit_tifu', 'long') ## Expected results The expected result is for the dataset to be downloaded and cached locally. ## Actual results File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
2022-02-25T19:27:49Z
3,773
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-22T10:57:07Z
https://api.github.com/repos/huggingface/datasets/issues/3773/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3773/timeline
Checksum mismatch for the reddit_tifu dataset
https://api.github.com/repos/huggingface/datasets/issues/3773/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4", "events_url": "https://api.github.com/users/anna-kay/events{/privacy}", "followers_url": "https://api.github.com/users/anna-kay/followers", "following_url": "https://api.github.com/users/anna-kay/following{/other_user}", "gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anna-kay", "id": 56791604, "login": "anna-kay", "node_id": "MDQ6VXNlcjU2NzkxNjA0", "organizations_url": "https://api.github.com/users/anna-kay/orgs", "received_events_url": "https://api.github.com/users/anna-kay/received_events", "repos_url": "https://api.github.com/users/anna-kay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions", "type": "User", "url": "https://api.github.com/users/anna-kay" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
CONTRIBUTOR
2022-02-22T12:38:44Z
null
I_kwDODunzps5EWiS_
[ "Thanks for reporting, @anna-kay. We are fixing it.", "@albertvillanova Thank you for the fast response! However I am still getting the same error:\r\n\r\nDownloading: 2.23kB [00:00, ?B/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Anna\\PycharmProjects\\summarization\\main.py\", line 17, in <module>\r\n dataset = load_dataset('reddit_tifu', 'long')\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\load.py\", line 1702, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\builder.py\", line 594, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n verify_checksums(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\utils\\info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']\r\n\r\nI have cleaned the cache/huggingface/datasets & cache/huggingface/modules files and also tried on another machine with a fresh installation of trasnformers & datasets. \r\nThe reddit_tifu.py that gets downloaded still has the previous url on line 51, _URL = \"https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF\" ", "Hi @anna-kay, I'm sorry I didn't clearly explain the details to you:\r\n- the error has been fixed in our `master` branch on GitHub: https://github.com/huggingface/datasets/commit/8ae21bf6a77175dc803ce2f1b93d18b8fbf45586\r\n- the fix will not be accessible to users in PyPI until our next release of the `datasets` library\r\n - our latest release (version 1.18.3) was made 23 days ago: https://github.com/huggingface/datasets/releases/tag/1.18.3\r\n- in the meantime, you can get the fix if you install datasets from our GitHub `master` branch:\r\n ```\r\n pip install git+https://github.com/huggingface/datasets#egg=datasets\r\n ```", "@albertvillanova Ok great, makes sence. Thank you very much for the explanation!" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3773/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3773
https://github.com/huggingface/datasets/issues/3773
false
1,146,718,630
https://api.github.com/repos/huggingface/datasets/issues/3772/labels{/name}
null
2022-02-22T11:08:34Z
3,772
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-22T10:20:37Z
https://api.github.com/repos/huggingface/datasets/issues/3772/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3772/timeline
Fix: dataset name is stored in keys
https://api.github.com/repos/huggingface/datasets/issues/3772/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[]
null
null
CONTRIBUTOR
2022-02-22T11:08:33Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3772.diff", "html_url": "https://github.com/huggingface/datasets/pull/3772", "merged_at": "2022-02-22T11:08:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3772.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3772" }
PR_kwDODunzps4zRor8
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3772/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3772
https://github.com/huggingface/datasets/pull/3772
true
1,146,561,140
https://api.github.com/repos/huggingface/datasets/issues/3771/labels{/name}
Fix #3770.
2022-02-22T08:12:40Z
3,771
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-22T07:44:24Z
https://api.github.com/repos/huggingface/datasets/issues/3771/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3771/timeline
Fix DuplicatedKeysError on msr_sqa dataset
https://api.github.com/repos/huggingface/datasets/issues/3771/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-02-22T08:12:39Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3771.diff", "html_url": "https://github.com/huggingface/datasets/pull/3771", "merged_at": "2022-02-22T08:12:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3771.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3771" }
PR_kwDODunzps4zRHsd
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3771/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3771
https://github.com/huggingface/datasets/pull/3771
true
1,146,336,667
https://api.github.com/repos/huggingface/datasets/issues/3770/labels{/name}
### Describe the bug Failure to generate dataset msr_sqa because of duplicate keys. ### Steps to reproduce the bug ``` from datasets import load_dataset load_dataset("msr_sqa") ``` ### Expected results The examples keys should be unique. **Actual results** ``` >>> load_dataset("msr_sqa") Downloading: 6.72k/? [00:00<00:00, 148kB/s] Downloading: 2.93k/? [00:00<00:00, 53.8kB/s] Using custom data configuration default Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1... Downloading: 100% 4.80M/4.80M [00:00<00:00, 7.49MB/s] --------------------------------------------------------------------------- DuplicatedKeysError Traceback (most recent call last) [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator) 1080 example = self.info.features.encode_example(record) -> 1081 writer.write(example, key) 1082 finally: 8 frames DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: nt-639 Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: DuplicatedKeysError Traceback (most recent call last) [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in check_duplicate_keys(self) 449 for hash, key in self.hkey_record: 450 if hash in tmp_record: --> 451 raise DuplicatedKeysError(key) 452 else: 453 tmp_record.add(hash) DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: nt-639 Keys should be unique and deterministic in nature ``` ### Environment info datasets version: 1.18.3 Platform: Google colab notebook Python version: 3.7 PyArrow version: 6.0.1
2022-02-22T08:12:39Z
3,770
null
https://api.github.com/repos/huggingface/datasets
null
[]
2022-02-22T00:43:33Z
https://api.github.com/repos/huggingface/datasets/issues/3770/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/3770/timeline
DuplicatedKeysError on msr_sqa dataset
https://api.github.com/repos/huggingface/datasets/issues/3770/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4", "events_url": "https://api.github.com/users/kolk/events{/privacy}", "followers_url": "https://api.github.com/users/kolk/followers", "following_url": "https://api.github.com/users/kolk/following{/other_user}", "gists_url": "https://api.github.com/users/kolk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kolk", "id": 9049591, "login": "kolk", "node_id": "MDQ6VXNlcjkwNDk1OTE=", "organizations_url": "https://api.github.com/users/kolk/orgs", "received_events_url": "https://api.github.com/users/kolk/received_events", "repos_url": "https://api.github.com/users/kolk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kolk/subscriptions", "type": "User", "url": "https://api.github.com/users/kolk" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
completed
NONE
2022-02-22T08:12:39Z
null
I_kwDODunzps5EU7Wb
[ "Thanks for reporting, @kolk.\r\n\r\nWe are fixing it. " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3770/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3770
https://github.com/huggingface/datasets/issues/3770
false
1,146,258,023
https://api.github.com/repos/huggingface/datasets/issues/3769/labels{/name}
## Describe the bug Assigning the resulting dataset to the original dataset causes loss of the faiss index. ## Steps to reproduce the bug `my_dataset` is a regular loaded dataset. It's a part of a custom dataset structure ```python self.dataset.add_faiss_index('embeddings') self.dataset.list_indexes() # ['embeddings'] dataset2 = my_dataset.map( lambda x: self._get_nearest_examples_batch(x['text']), batch=True ) # the unexpected result: dataset2.list_indexes() # [] self.dataset.list_indexes() # ['embeddings'] ``` In case something is wrong with my `_get_nearest_examples_batch()`, it looks like this ```python def _get_nearest_examples_batch(self, examples, k=5): queries = embed(examples) scores_batch, retrievals_batch = self.dataset.get_nearest_examples_batch(self.faiss_column, queries, k) return { 'neighbors': [batch['text'] for batch in retrievals_batch], 'scores': scores_batch } ``` ## Expected results `map` shouldn't drop the indexes; in other words, the indexes should be carried over to the generated dataset. ## Actual results `map` drops the indexes. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.8.12 - PyArrow version: 7.0.0
2022-06-27T14:56:29Z
3,769
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-21T21:59:23Z
https://api.github.com/repos/huggingface/datasets/issues/3769/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3769/timeline
`dataset = dataset.map()` causes faiss index lost
https://api.github.com/repos/huggingface/datasets/issues/3769/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/13076552?v=4", "events_url": "https://api.github.com/users/Oaklight/events{/privacy}", "followers_url": "https://api.github.com/users/Oaklight/followers", "following_url": "https://api.github.com/users/Oaklight/following{/other_user}", "gists_url": "https://api.github.com/users/Oaklight/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Oaklight", "id": 13076552, "login": "Oaklight", "node_id": "MDQ6VXNlcjEzMDc2NTUy", "organizations_url": "https://api.github.com/users/Oaklight/orgs", "received_events_url": "https://api.github.com/users/Oaklight/received_events", "repos_url": "https://api.github.com/users/Oaklight/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Oaklight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oaklight/subscriptions", "type": "User", "url": "https://api.github.com/users/Oaklight" }
[]
null
null
NONE
null
null
I_kwDODunzps5EUoJn
[ "Hi ! Indeed `map` is dropping the index right now, because one can create a dataset with more or fewer rows using `map` (and therefore the index might not be relevant anymore)\r\n\r\nI guess we could check the resulting dataset length, and if the user hasn't changed the dataset size we could keep the index, what do you think ?", "doing `.add_column(\"x\",x_data)` also removes the index. the new column might be irrelevant to the index so I don't think it should drop. \r\n\r\nMinimal example\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nimport numpy as np\r\n\r\ndata=load_dataset(\"ceyda/cats_vs_dogs_sample\") #just a test dataset\r\ndata=data[\"train\"]\r\nembd_data=data.map(lambda x: {\"emb\":np.random.uniform(-1,0,50).astype(np.float32)})\r\nembd_data.add_faiss_index(column=\"emb\")\r\nprint(embd_data.list_indexes())\r\nembd_data=embd_data.add_column(\"x\",[0]*data.num_rows)\r\nprint(embd_data.list_indexes())\r\n```", "I agree `add_column` shouldn't drop the index indeed ! Is it something you'd like to contribute ? I think it's just a matter of copying the `self._indexes` dictionary to the output dataset" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3769/reactions" }
open
false
https://api.github.com/repos/huggingface/datasets/issues/3769
https://github.com/huggingface/datasets/issues/3769
false
1,146,102,442
https://api.github.com/repos/huggingface/datasets/issues/3768/labels{/name}
null
2022-02-22T09:13:03Z
3,768
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-21T18:14:40Z
https://api.github.com/repos/huggingface/datasets/issues/3768/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3768/timeline
Fix HfFileSystem docstring
https://api.github.com/repos/huggingface/datasets/issues/3768/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
null
null
MEMBER
2022-02-22T09:13:02Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3768.diff", "html_url": "https://github.com/huggingface/datasets/pull/3768", "merged_at": "2022-02-22T09:13:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3768.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3768" }
PR_kwDODunzps4zPobl
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3768/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3768
https://github.com/huggingface/datasets/pull/3768
true
1,146,036,648
https://api.github.com/repos/huggingface/datasets/issues/3767/labels{/name}
A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670
2022-02-22T08:35:03Z
3,767
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-21T16:57:47Z
https://api.github.com/repos/huggingface/datasets/issues/3767/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3767/timeline
Expose method and fix param
https://api.github.com/repos/huggingface/datasets/issues/3767/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
null
null
CONTRIBUTOR
2022-02-22T08:35:02Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3767.diff", "html_url": "https://github.com/huggingface/datasets/pull/3767", "merged_at": "2022-02-22T08:35:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3767.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3767" }
PR_kwDODunzps4zPahh
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3767/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3767
https://github.com/huggingface/datasets/pull/3767
true
1,145,829,289
https://api.github.com/repos/huggingface/datasets/issues/3766/labels{/name}
Fix #3758.
2022-02-21T14:39:20Z
3,766
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-21T13:52:50Z
https://api.github.com/repos/huggingface/datasets/issues/3766/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3766/timeline
Fix head_qa data URL
https://api.github.com/repos/huggingface/datasets/issues/3766/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
null
null
MEMBER
2022-02-21T14:39:19Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3766.diff", "html_url": "https://github.com/huggingface/datasets/pull/3766", "merged_at": "2022-02-21T14:39:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3766.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3766" }
PR_kwDODunzps4zOujH
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3766/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3766
https://github.com/huggingface/datasets/pull/3766
true
1,145,126,881
https://api.github.com/repos/huggingface/datasets/issues/3765/labels{/name}
This PR updates the URL for the tagging app to be the one on Spaces.
2022-02-20T20:36:10Z
3,765
null
https://api.github.com/repos/huggingface/datasets
false
[]
2022-02-20T20:34:31Z
https://api.github.com/repos/huggingface/datasets/issues/3765/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3765/timeline
Update URL for tagging app
https://api.github.com/repos/huggingface/datasets/issues/3765/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
null
null
MEMBER
2022-02-20T20:36:06Z
{ "diff_url": "https://github.com/huggingface/datasets/pull/3765.diff", "html_url": "https://github.com/huggingface/datasets/pull/3765", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3765.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3765" }
PR_kwDODunzps4zMdIL
[ "Oh, this URL shouldn't be updated to the tagging app as it's actually used for creating the README - closing this." ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3765/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3765
https://github.com/huggingface/datasets/pull/3765
true
1,145,107,050
https://api.github.com/repos/huggingface/datasets/issues/3764/labels{/name}
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
2022-02-21T08:55:58Z
3,764
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
2022-02-20T19:05:43Z
https://api.github.com/repos/huggingface/datasets/issues/3764/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3764/timeline
!
https://api.github.com/repos/huggingface/datasets/issues/3764/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/77545307?v=4", "events_url": "https://api.github.com/users/LesiaFedorenko/events{/privacy}", "followers_url": "https://api.github.com/users/LesiaFedorenko/followers", "following_url": "https://api.github.com/users/LesiaFedorenko/following{/other_user}", "gists_url": "https://api.github.com/users/LesiaFedorenko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LesiaFedorenko", "id": 77545307, "login": "LesiaFedorenko", "node_id": "MDQ6VXNlcjc3NTQ1MzA3", "organizations_url": "https://api.github.com/users/LesiaFedorenko/orgs", "received_events_url": "https://api.github.com/users/LesiaFedorenko/received_events", "repos_url": "https://api.github.com/users/LesiaFedorenko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LesiaFedorenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LesiaFedorenko/subscriptions", "type": "User", "url": "https://api.github.com/users/LesiaFedorenko" }
[]
null
completed
NONE
2022-02-21T08:55:58Z
null
I_kwDODunzps5EQPJq
[]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3764/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3764
https://github.com/huggingface/datasets/issues/3764
false
1,145,099,878
https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name}
## Describe the bug The dataset `20200501.pt` is broken. The available datasets: https://dumps.wikimedia.org/ptwiki/ ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') ``` ## Expected results I expect to download the dataset locally. ## Actual results ``` >>> from datasets import load_dataset >>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475... /home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features. warnings.warn( 0%| | 0/1 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare super()._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators downloaded_files = dl_manager.download_and_extract({"info": info_url}) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download downloaded_path_or_paths = map_nested( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested mapped = [ File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested return function(data_struct) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json ``` ## Environment info ``` - `datasets` version: 1.18.3 - Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 6.0.1 ```
2022-02-21T12:06:12Z
3,763
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
2022-02-20T18:34:58Z
https://api.github.com/repos/huggingface/datasets/issues/3763/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3763/timeline
It's not possible download `20200501.pt` dataset
https://api.github.com/repos/huggingface/datasets/issues/3763/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4", "events_url": "https://api.github.com/users/jvanz/events{/privacy}", "followers_url": "https://api.github.com/users/jvanz/followers", "following_url": "https://api.github.com/users/jvanz/following{/other_user}", "gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jvanz", "id": 1514798, "login": "jvanz", "node_id": "MDQ6VXNlcjE1MTQ3OTg=", "organizations_url": "https://api.github.com/users/jvanz/orgs", "received_events_url": "https://api.github.com/users/jvanz/received_events", "repos_url": "https://api.github.com/users/jvanz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvanz/subscriptions", "type": "User", "url": "https://api.github.com/users/jvanz" }
[]
null
completed
NONE
2022-02-21T09:25:06Z
null
I_kwDODunzps5EQNZm
[ "Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\ndataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n```", "> ```python\r\n> dataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n> ```\r\n\r\nThank you! I did not know that I can do this. I was following the example in the error message when I do not define which language dataset I'm trying to download.\r\n\r\nI've tried something similar changing the date in the `load_dataset` call that I've shared in the bug description. Obviously, it did not work. I need to read the docs more carefully next time. My bad!\r\n\r\nThanks again and sorry for the noise.\r\n\r\n" ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3763
https://github.com/huggingface/datasets/issues/3763
false
1,144,849,557
https://api.github.com/repos/huggingface/datasets/issues/3762/labels{/name}
I can make a PR, just wanted approval before starting. **Is your feature request related to a problem? Please describe.** It is often the case that classes are not ordered in alphabetical order. Current `class_encode_column` sort the classes before indexing. https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235 **Describe the solution you'd like** I would like to add a **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values. **Describe alternatives you've considered** One can use map instead. I find it harder to read. ```python CLASS_NAMES = ['apple', 'orange', 'potato'] ds = ds.map(lambda item: CLASS_NAMES.index(item[label_column])) # Proposition ds = ds.class_encode_column(label_column, CLASS_NAMES) ``` **Additional context** I can make the PR if this feature is accepted.
2022-02-21T12:16:35Z
3,762
null
https://api.github.com/repos/huggingface/datasets
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
2022-02-19T21:21:45Z
https://api.github.com/repos/huggingface/datasets/issues/3762/comments
null
https://api.github.com/repos/huggingface/datasets/issues/3762/timeline
`Dataset.class_encode` should support custom class names
https://api.github.com/repos/huggingface/datasets/issues/3762/events
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[]
null
completed
CONTRIBUTOR
2022-02-21T12:16:35Z
null
I_kwDODunzps5EPQSV
[ "Hi @Dref360, thanks a lot for your proposal.\r\n\r\nIt totally makes sense to have more flexibility when class encoding, I agree.\r\n\r\nYou could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_encode_column` arguments).\r\n\r\nAnd the latter made me think of `Dataset.cast_column`...\r\n\r\nMaybe better to have some others' opinions @lhoestq @mariosasko ", "Hi @Dref360! You can use [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/package_reference/main_classes.html#datasets.Dataset.align_labels_with_mapping) after `Dataset.class_encode_column` to assign a different mapping of labels to ids.\r\n\r\n@albertvillanova I'd like to avoid adding more complexity to the API where it's not (absolutely) needed, so I don't think introducing a new param in `Dataset.class_encode_column` is a good idea.\r\n\r\n", "I wasn't aware that it existed thank you for the link.\n\nClosing then! " ]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3762/reactions" }
closed
false
https://api.github.com/repos/huggingface/datasets/issues/3762
https://github.com/huggingface/datasets/issues/3762
false