Column schema:

| Column | Type | Values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.28B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–4.53k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,656B |
| updated_at | int64 | 1,587B–1,656B |
| closed_at | null | 1,587B–1,656B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| state_reason | nullclasses | 1 value |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
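The records below follow this schema, one value per column. As an aside, a minimal sketch of how a dump like this is typically explored with the `datasets` library; the repository id used here is a placeholder assumption, not the actual location of this dump:

```python
from datasets import load_dataset

# "user/github-issues" is a hypothetical Hub id; substitute the real one.
ds = load_dataset("user/github-issues", split="train")

# Each row follows the schema above: scalar columns plus nested dicts/lists.
row = ds[0]
print(row["number"], row["state"], row["title"])
print(row["user"]["login"], [label["name"] for label in row["labels"]])
```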
url: https://api.github.com/repos/huggingface/datasets/issues/4527
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4527/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4527/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4527/events
html_url: https://github.com/huggingface/datasets/issues/4527
id: 1,276,583,536
node_id: I_kwDODunzps5MFx5w
number: 4,527
title: Dataset Viewer issue for vadis/sv-ident
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
state: open
locked: false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
milestone: null
comments: []
created_at: 1,655,714,862,000
updated_at: 1,655,714,862,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
### Link

https://huggingface.co/datasets/vadis/sv-ident

### Description

The dataset preview does not work:

```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```

However, the dataset is streamable and works locally:

```python
In [1]: from datasets import load_dataset; ds = load_dataset("sv-ident.py", split="train", streaming=True); item = next(iter(ds)); item
Using custom data configuration default
Out[1]:
{'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.',
 'is_variable': 1,
 'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'],
 'research_data': ['ZA5400'],
 'doc_id': '73106',
 'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10',
 'lang': 'en'}
```

CC: @e-tornike

### Owner

No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4527/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4527/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/4526
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4526/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4526/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4526/events
html_url: https://github.com/huggingface/datasets/issues/4526
id: 1,276,580,185
node_id: I_kwDODunzps5MFxFZ
number: 4,526
title: split cache used when processing different split
{ "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,655,714,698,000
updated_at: 1,655,714,820,000
closed_at: null
author_association: NONE
active_lock_reason: null
## Describe the bug

```
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')

ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)

assert ds1 == ds2
```

This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through

```
class myDataModule:
    def train_dataloader(self):
        ds = load_dataset('squad', split='train')
        ds = ds.map(some_function)
        return [ds]

    def val_dataloader(self):
        ds = load_dataset('squad', split="validation")
        ds = ds.map(some_function)
        return [ds]
```

I don't know if it depends on `pytorch_lightning` or `datasets`, but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue. If this is not enough to replicate, I will try to provide an MWE; I don't have time now, so I thought I would open the issue first!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4526/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
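Aside: a minimal sketch of the cache workaround the reporter of issue 4526 above describes, assuming `some_function` stands in for any per-example transform; this illustrates `load_from_cache_file=False` and is not the reporter's exact code:

```python
from datasets import load_dataset

def some_function(example):
    # stand-in for any per-example transform
    return example

ds1 = load_dataset("squad", split="validation")
ds2 = load_dataset("squad", split="train")

# Bypass the fingerprint cache so each split is re-processed instead of
# silently reusing a cache file computed for the other split.
ds1 = ds1.map(some_function, load_from_cache_file=False)
ds2 = ds2.map(some_function, load_from_cache_file=False)

assert len(ds1) != len(ds2)  # the two splits differ, as expected
```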
url: https://api.github.com/repos/huggingface/datasets/issues/4525
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4525/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4525/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4525/events
html_url: https://github.com/huggingface/datasets/issues/4525
id: 1,276,491,386
node_id: I_kwDODunzps5MFbZ6
number: 4,525
title: Out of memory error on workers while running Beam+Dataflow
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?" ]
created_at: 1,655,710,092,000
updated_at: 1,655,710,565,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
## Describe the bug

While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files). Previously we ran the preprocessing for the "dev" config (only dev files) with success. Train data files are larger than dev ones and apparently workers run out of memory while processing them. Any help/hint is welcome!

Error message:

```
Data channel closed, unable to receive additional data from SDK sdk-0-0
```

Info from the Diagnostics tab:

```
Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900
The worker VM had to shut down one or more processes due to lack of memory.
```

## Additional information

### Stack trace

```
Traceback (most recent call last):
  File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module> sys.exit(main())
  File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run()
  File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run builder.download_and_prepare(
  File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare(
  File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare pipeline_results.wait_until_finish()
  File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Data channel closed, unable to receive additional data from SDK sdk-0-0
```

### Logs

```
Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0

Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors.

The work item was attempted on these workers:
  beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0,
  beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service.,
  beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service.,
  beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service.
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4525/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
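Aside: the naive mitigations floated in the comment thread of issue 4525 (more RAM per worker, more workers) could plausibly be expressed as Beam pipeline options for Dataflow. The machine type and worker counts below are illustrative assumptions, not the reporter's actual configuration:

```python
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    machine_type="n1-highmem-8",  # roomier VMs: more RAM per worker (assumed choice)
    num_workers=16,               # start wider so each worker holds less data
    max_num_workers=64,           # let autoscaling span further under load
)
```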
url: https://api.github.com/repos/huggingface/datasets/issues/4524
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4524/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4524/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4524/events
html_url: https://github.com/huggingface/datasets/issues/4524
id: 1,275,909,186
node_id: I_kwDODunzps5MDNRC
number: 4,524
title: Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException)
{ "login": "dan-the-meme-man", "id": 45244059, "node_id": "MDQ6VXNlcjQ1MjQ0MDU5", "avatar_url": "https://avatars.githubusercontent.com/u/45244059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dan-the-meme-man", "html_url": "https://github.com/dan-the-meme-man", "followers_url": "https://api.github.com/users/dan-the-meme-man/followers", "following_url": "https://api.github.com/users/dan-the-meme-man/following{/other_user}", "gists_url": "https://api.github.com/users/dan-the-meme-man/gists{/gist_id}", "starred_url": "https://api.github.com/users/dan-the-meme-man/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dan-the-meme-man/subscriptions", "organizations_url": "https://api.github.com/users/dan-the-meme-man/orgs", "repos_url": "https://api.github.com/users/dan-the-meme-man/repos", "events_url": "https://api.github.com/users/dan-the-meme-man/events{/privacy}", "received_events_url": "https://api.github.com/users/dan-the-meme-man/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n\r\nIn the case of the issue with Beam+Dataflow, the logs from the workers report an out of memory issue." ]
created_at: 1,655,595,405,000
updated_at: 1,655,710,511,000
closed_at: null
author_association: NONE
active_lock_reason: null
## Describe the bug

When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packages seem to be incompatible in terms of versions (dill and requests, for instance). It should be noted that the following code runs for several hours without issue, executing the `load_dataset()` function, before the exception occurs.

## Steps to reproduce the bug

```python
# bash commands
!pip install datasets
!pip install apache-beam[interactive]
!pip install mwparserfromhell
!pip install dill==0.3.5.1
!pip install requests==2.23.0

# imports
import os
from datasets import load_dataset
import apache_beam as beam
import mwparserfromhell
from google.colab import drive
import dill
import requests

# mount drive
drive_dir = os.path.join(os.getcwd(), 'drive')
drive.mount(drive_dir)

# confirming the versions of these two packages are the ones that are suggested by the outputs from the bash commands
print(dill.__version__)
print(requests.__version__)

lang = 'es'  # or 'ru' or 'ceb' - these are the ones causing the issue
lang_dir = os.path.join(drive_dir, 'path/to/my/folder', lang)

if not os.path.exists(lang_dir):
    x = None
    x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink', split='train')
    x.save_to_disk(lang_dir)
```

## Expected results

Although some warnings are generally produced by this code (run in Colab Notebook), most languages I've tried have been successfully downloaded. It should simply go through without issue, but for these languages, I am continually encountering this error.

## Actual results

Traceback below:

```
Exception in thread run_worker_3-1:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run()
  File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 234, in run for work_request in self._control_stub.Control(get_responses()):
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next()
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
  status = StatusCode.UNAVAILABLE
  details = "Socket closed"
  debug_error_string = "{"created":"@1655593643.871830638","description":"Error received from peer ipv4:127.0.0.1:44441","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Socket closed","grpc_status":14}"
>
Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute response = task()
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda> lambda: self.create_worker().do_instruction(request), request)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction getattr(request, request_type), request.instruction_id)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle bundle_processor.process_bundle(instruction_id))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle element.data)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']

ERROR:apache_beam.runners.worker.sdk_worker:Error processing instruction 26. Original traceback is
Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute response = task()
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda> lambda: self.create_worker().do_instruction(request), request)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction getattr(request, request_type), request.instruction_id)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle bundle_processor.process_bundle(instruction_id))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle element.data)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']

ERROR:root:org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled
ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs for elements in elements_iterator:
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next()
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
  status = StatusCode.CANCELLED
  details = "Multiplexer hanging up"
  debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run()
  File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 651, in <lambda> target=lambda: self._read_inputs(elements_iterator),
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs for elements in elements_iterator:
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next()
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
  status = StatusCode.CANCELLED
  details = "Multiplexer hanging up"
  debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
[/tmp/ipykernel_219/3869142325.py](https://localhost:8080/#) in <module> 18 x = None 19 x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink', ---> 20 split='train') 21 x.save_to_disk(lang_dir)

3 frames
[/usr/local/lib/python3.7/dist-packages/apache_beam/runners/portability/portable_runner.py](https://localhost:8080/#) in wait_until_finish(self, duration) 604 605 if self._runtime_exception: --> 606 raise self._runtime_exception 607 608 return self._state

RuntimeError: Pipeline BeamApp-root-0618220708-b3b59a0e_d8efcf67-9119-4f76-b013-70de7b29b54d failed in state FAILED: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4524/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/4523
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4523/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4523/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4523/events
html_url: https://github.com/huggingface/datasets/pull/4523
id: 1,275,002,639
node_id: PR_kwDODunzps452hgh
number: 4,523
title: Update download url and improve card of `cats_vs_dogs` dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4523). All of your documentation changes will be reflected on that endpoint." ]
created_at: 1,655,470,784,000
updated_at: 1,655,471,214,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4523/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4523", "html_url": "https://github.com/huggingface/datasets/pull/4523", "diff_url": "https://github.com/huggingface/datasets/pull/4523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4523.patch", "merged_at": null }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/4522
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4522/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4522/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4522/events
html_url: https://github.com/huggingface/datasets/issues/4522
id: 1,274,929,328
node_id: I_kwDODunzps5L_eCw
number: 4,522
title: Try to reduce the number of datasets that require manual download
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
milestone: null
comments: []
created_at: 1,655,466,123,000
updated_at: 1,655,466,768,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore

from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4522/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/4521
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4521/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4521/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4521/events
html_url: https://github.com/huggingface/datasets/issues/4521
id: 1,274,919,437
node_id: I_kwDODunzps5L_boN
number: 4,521
title: Datasets method `.map` not hashing
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "Fix posted: https://github.com/huggingface/datasets/issues/4506#issuecomment-1157417219", "Didn't realize it's a bug when I asked the question yesterday! Free free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps://stackoverflow.com/questions/72664827/can-pickle-dill-foo-but-not-lambda-x-foox" ]
created_at: 1,655,465,470,000
updated_at: 1,655,590,230,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
## Describe the bug

Datasets method `.map` not hashing, even with an empty no-op function

## Steps to reproduce the bug

```python
from datasets import load_dataset

# download 9MB dummy dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")

def prepare_dataset(batch):
    return(batch)

ds = ds.map(
    prepare_dataset,
    num_proc=1,
    desc="preprocess train dataset",
)
```

## Expected results

Hashed and cached dataset preprocessing

## Actual results

Does not hash properly:

```
Parameter 'function'=<function prepare_dataset at 0x7fccb68e9280> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2

cc @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4521/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/4520
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4520/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4520/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4520/events
html_url: https://github.com/huggingface/datasets/issues/4520
id: 1,274,879,180
node_id: I_kwDODunzps5L_RzM
number: 4,520
title: Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map`
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,655,462,837,000
updated_at: 1,655,463,423,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
Dataclasses cannot be hashed, so functions that use them cannot be hashed or cached by the `.map` method. Dataclasses are used extensively in Transformers examples scripts (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)).

Since dataclasses cannot be hashed, one has to define separate variables prior to passing dataclass attributes to the `.map` method:

```python
phoneme_language = data_args.phoneme_language
```

in the example https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L603-L630

## Steps to reproduce the bug

```python
from dataclasses import dataclass, field
from datasets.fingerprint import Hasher

@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """
    phoneme_language: str = field(
        default=None, metadata={"help": "The name of the phoneme language to use."}
    )

data_args = DataTrainingArguments(phoneme_language="foo")

Hasher.hash(data_args)

phoneme_language = data_args.phoneme_language

Hasher.hash(phoneme_language)
```

## Expected results

A hash.

## Actual results

<details>
<summary> Traceback </summary>

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Input In [1], in <cell line: 16>() 10 phoneme_language: str = field( 11 default=None, metadata={"help": "The name of the phoneme language to use."} 12 ) 14 data_args = DataTrainingArguments(phoneme_language ="foo") ---> 16 Hasher.hash(data_args) 18 phoneme_language = data_args. phoneme_language 20 Hasher.hash(phoneme_language)
  File ~/datasets/src/datasets/fingerprint.py:237, in Hasher.hash(cls, value) 235 return cls.dispatch[type(value)](cls, value) 236 else: --> 237 return cls.hash_default(value)
  File ~/datasets/src/datasets/fingerprint.py:230, in Hasher.hash_default(cls, value) 228 @classmethod 229 def hash_default(cls, value: Any) -> str: --> 230 return cls.hash_bytes(dumps(value))
  File ~/datasets/src/datasets/utils/py_utils.py:564, in dumps(obj) 562 file = StringIO() 563 with _no_cache_fields(obj): --> 564 dump(obj, file) 565 return file.getvalue()
  File ~/datasets/src/datasets/utils/py_utils.py:539, in dump(obj, file) 537 def dump(obj, file): 538 """pickle an object to a file""" --> 539 Pickler(file, recurse=True).dump(obj) 540 return
  File ~/hf/lib/python3.8/site-packages/dill/_dill.py:620, in Pickler.dump(self, obj) 618 raise PicklingError(msg) 619 else: --> 620 StockPickler.dump(self, obj) 621 return
  File /usr/lib/python3.8/pickle.py:487, in _Pickler.dump(self, obj) 485 if self.proto >= 4: 486 self.framer.start_framing() --> 487 self.save(obj) 488 self.write(STOP) 489 self.framer.end_framing()
  File /usr/lib/python3.8/pickle.py:603, in _Pickler.save(self, obj, save_persistent_id) 599 raise PicklingError("Tuple returned by %s must have " 600 "two to six elements" % reduce) 602 # Save the reduce() output and finally memoize the object --> 603 self.save_reduce(obj=obj, *rv)
  File /usr/lib/python3.8/pickle.py:687, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 684 raise PicklingError( 685 "args[0] from __newobj__ args has the wrong class") 686 args = args[1:] --> 687 save(cls) 688 save(args) 689 write(NEWOBJ)
  File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table
  File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1838, in save_type(pickler, obj, postproc_list) 1836 postproc_list = [] 1837 postproc_list.append((setattr, (obj, '__qualname__', obj_name))) -> 1838 _save_with_postproc(pickler, (_create_type, ( 1839 type(obj), obj.__name__, obj.__bases__, _dict 1840 )), obj=obj, postproc_list=postproc_list) 1841 log.info("# %s" % _t) 1842 else:
  File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1140, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1137 pickler._postproc[id(obj)] = postproc_list 1139 # TODO: Use state_setter in Python 3.8 to allow for faster cPickle implementations -> 1140 pickler.save_reduce(*reduction, obj=obj) 1142 if is_pickler_dill: 1143 # pickler.x -= 1 1144 # print(pickler.x*' ', 'pop', obj, id(obj)) 1145 postproc = pickler._postproc.pop(id(obj))
  File /usr/lib/python3.8/pickle.py:692, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 690 else: 691 save(func) --> 692 save(args) 693 write(REDUCE) 695 if obj is not None: 696 # If the object is already in the memo, this means it is 697 # recursive. In this case, throw away everything we put on the 698 # stack, and fetch the object back from the memo.
  File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table
  File /usr/lib/python3.8/pickle.py:901, in _Pickler.save_tuple(self, obj) 899 write(MARK) 900 for element in obj: --> 901 save(element) 903 if id(obj) in memo: 904 # Subtle. d was not in memo when we entered save_tuple(), so 905 # the process of saving the tuple's elements must have saved (...) 909 # could have been done in the "for element" loop instead, but 910 # recursive tuples are a rare thing. 911 get = self.get(memo[id(obj)][0])
  File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table
  File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1251, in save_module_dict(pickler, obj) 1248 if is_dill(pickler, child=False) and pickler._session: 1249 # we only care about session the first pass thru 1250 pickler._first_pass = False -> 1251 StockPickler.save_dict(pickler, obj) 1252 log.info("# D2") 1253 return
  File /usr/lib/python3.8/pickle.py:971, in _Pickler.save_dict(self, obj) 968 self.write(MARK + DICT) 970 self.memoize(obj) --> 971 self._batch_setitems(obj.items())
  File /usr/lib/python3.8/pickle.py:997, in _Pickler._batch_setitems(self, items) 995 for k, v in tmp: 996 save(k) --> 997 save(v) 998 write(SETITEMS) 999 elif n:
  File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table
  File ~/datasets/src/datasets/utils/py_utils.py:862, in save_function(pickler, obj) 859 if state_dict: 860 state = state, state_dict --> 862 dill._dill._save_with_postproc( 863 pickler, 864 ( 865 dill._dill._create_function, 866 (obj.__code__, globs, obj.__name__, obj.__defaults__, closure), 867 state, 868 ), 869 obj=obj, 870 postproc_list=postproc_list, 871 ) 872 else: 873 closure = obj.func_closure
  File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1153, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1151 dest, source = reduction[1] 1152 if source: -> 1153 pickler.write(pickler.get(pickler.memo[id(dest)][0])) 1154 pickler._batch_setitems(iter(source.items())) 1155 else: 1156 # Updating with an empty dictionary. Same as doing nothing.
KeyError: 140434581781568
```

</details>

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2

cc @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4520/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
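Aside: a self-contained sketch of the workaround issue 4520 above describes (copy the dataclass attribute into a plain variable before `.map`), with a toy dataset standing in for the real one:

```python
from dataclasses import dataclass, field
from datasets import Dataset

@dataclass
class DataTrainingArguments:
    phoneme_language: str = field(default=None)

data_args = DataTrainingArguments(phoneme_language="foo")

# The dataclass instance cannot be hashed, so extract the attribute into a
# plain string, which pickles and hashes fine.
phoneme_language = data_args.phoneme_language

def add_language(example):
    # Reference the plain variable, not `data_args`, so fingerprinting works.
    example["lang"] = phoneme_language
    return example

ds = Dataset.from_dict({"text": ["a", "b"]}).map(add_language)
```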
url: https://api.github.com/repos/huggingface/datasets/issues/4519
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4519/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4519/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4519/events
html_url: https://github.com/huggingface/datasets/pull/4519
id: 1,274,110,623
node_id: PR_kwDODunzps45zhqa
number: 4,519
title: Create new sections for audio and vision in guides
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4519). All of your documentation changes will be reflected on that endpoint." ]
created_at: 1,655,415,504,000
updated_at: 1,655,415,922,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - while keeping our docs information architecture.

Some other changes include:

- Experimented with decorating text with some CSS to highlight guides specific to each modality. Hopefully, it'll be easier for users to find and realize that these different docs exist!
- Added deprecation warning for Metrics and redirect to Evaluate.
- Updated `set_format` section to recommend using the new `to_tf_dataset` function if you need to convert to a TensorFlow dataset.
- Reorganized `toctree` to nest general usage, audio, vision, and text sections under the how-to guides.
- A quick review and edit to the Load and Process docs for clarity.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4519/timeline
performed_via_github_app: null
state_reason: null
draft: true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4519", "html_url": "https://github.com/huggingface/datasets/pull/4519", "diff_url": "https://github.com/huggingface/datasets/pull/4519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4519.patch", "merged_at": null }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/4518
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4518/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4518/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4518/events
html_url: https://github.com/huggingface/datasets/pull/4518
id: 1,274,010,628
node_id: PR_kwDODunzps45zMnB
number: 4,518
title: Patch tests for hfh v0.8.0
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,408,732,000
1,655,482,557,000
null
MEMBER
null
This PR patches testing utilities that would otherwise fail with hfh v0.8.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4518/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4518", "html_url": "https://github.com/huggingface/datasets/pull/4518", "diff_url": "https://github.com/huggingface/datasets/pull/4518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4518.patch", "merged_at": 1655481967000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4517/comments
https://api.github.com/repos/huggingface/datasets/issues/4517/events
https://github.com/huggingface/datasets/pull/4517
1,273,960,476
PR_kwDODunzps45zBl0
4,517
Add tags for task_ids:summarization-* and task_categories:summarization*
{ "login": "hobson", "id": 292855, "node_id": "MDQ6VXNlcjI5Mjg1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/292855?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hobson", "html_url": "https://github.com/hobson", "followers_url": "https://api.github.com/users/hobson/followers", "following_url": "https://api.github.com/users/hobson/following{/other_user}", "gists_url": "https://api.github.com/users/hobson/gists{/gist_id}", "starred_url": "https://api.github.com/users/hobson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hobson/subscriptions", "organizations_url": "https://api.github.com/users/hobson/orgs", "repos_url": "https://api.github.com/users/hobson/repos", "events_url": "https://api.github.com/users/hobson/events{/privacy}", "received_events_url": "https://api.github.com/users/hobson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Associated community discussion is [here](https://huggingface.co/datasets/aeslc/discussions/1).\r\nPaper referenced in the `dataset_infos.json` is [here](https://arxiv.org/pdf/1906.03497.pdf). It mentions the _email-subject-generation_ task, which is not a tag mentioned in any other dataset so it was not added in this pull request. The _summarization_ task is mentioned as a related task.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4517). All of your documentation changes will be reflected on that endpoint." ]
1,655,405,545,000
1,655,477,436,000
null
NONE
null
The YAML header at the top of the README.md file was edited to add task tags because I couldn't find the existing tags in the JSON; a separate Pull Request will modify dataset_infos.json to add these tags. The Enron dataset (dataset id aeslc) is only tagged with: 'arxiv:1906.03497' languages:en pretty_name:AESLC. Using the email subject_line field as a label or target variable, it is possible to create models for the following task_ids (in order of relevance): 'task_ids:summarization' 'task_ids:summarization-other-conversations-summarization' "task_ids:other-other-query-based-multi-document-summarization" 'task_ids:summarization-other-aspect-based-summarization' 'task_ids:summarization--other-headline-generation'. The subject might also be used for the task_category "task_categories:summarization". E-mail chains might be used for the task category "task_categories:dialogue-system".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4517/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4517", "html_url": "https://github.com/huggingface/datasets/pull/4517", "diff_url": "https://github.com/huggingface/datasets/pull/4517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4517.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4516/comments
https://api.github.com/repos/huggingface/datasets/issues/4516/events
https://github.com/huggingface/datasets/pull/4516
1,273,825,640
PR_kwDODunzps45ykYX
4,516
Fix hashing for python 3.9
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4516). All of your documentation changes will be reflected on that endpoint.", "What do you think @albertvillanova ?" ]
1,655,397,751,000
1,655,717,568,000
null
MEMBER
null
In Python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function. Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for Python 3.9. To make hashing deterministic when the globals are not in the same order, we also need to make the order of `glob_ids` deterministic. Right now we don't have a CI to test Python 3.9, but we should definitely have one. For this PR in particular, I ran the tests locally using Python 3.9 and they're passing now. Fix https://github.com/huggingface/datasets/issues/4506
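To make the intent concrete, here is a minimal, hypothetical sketch (not the actual `datasets` fingerprinting code) of why sorting the globals mapping before pickling makes the hash independent of insertion order:

```python
import hashlib
import pickle

def deterministic_hash(glob_ids: dict) -> str:
    # Sort the items by key so the pickled bytes do not depend on the
    # dictionary's insertion order (illustrative helper, not datasets internals).
    payload = pickle.dumps(sorted(glob_ids.items()))
    return hashlib.sha256(payload).hexdigest()

# Shuffled globals now hash identically:
assert deterministic_hash({"a": 1, "b": 2}) == deterministic_hash({"b": 2, "a": 1})
```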
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4516/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4516/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4516", "html_url": "https://github.com/huggingface/datasets/pull/4516", "diff_url": "https://github.com/huggingface/datasets/pull/4516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4516.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4515/comments
https://api.github.com/repos/huggingface/datasets/issues/4515/events
https://github.com/huggingface/datasets/pull/4515
1,273,626,131
PR_kwDODunzps45x5mB
4,515
Add uppercased versions of image file extensions for automatic module inference
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,388,889,000
1,655,400,113,000
null
CONTRIBUTOR
null
Adds the uppercased versions of the image file extensions to the supported extensions. Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO, unless they are commonly used (as they are in the vision domain). Note that there is a slight discrepancy between the image file resolution and `imagefolder`, as the latter calls `.lower()` on file extensions, leading to some image file extensions being ignored by the resolution but not by the loader (e.g. `pNg`). Such extensions should also be discouraged, so I'm ignoring that case too. Fix #4514.
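A rough sketch of the approach, with an abridged, assumed extension list (the real list in `datasets` is much longer): the supported extensions are simply extended with their uppercased variants instead of lowercasing paths during resolution.

```python
# Abridged, assumed list for illustration; the real one covers many more formats.
IMAGE_EXTENSIONS = ["blp", "bmp", "gif", "jpg", "jpeg", "png", "tiff", "webp"]
IMAGE_EXTENSIONS += [ext.upper() for ext in IMAGE_EXTENSIONS]

print("JPEG" in IMAGE_EXTENSIONS)  # True: files like example_img.JPEG now resolve
print("pNg" in IMAGE_EXTENSIONS)   # False: mixed-case variants stay unsupported
```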
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4515/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4515", "html_url": "https://github.com/huggingface/datasets/pull/4515", "diff_url": "https://github.com/huggingface/datasets/pull/4515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4515.patch", "merged_at": 1655399500000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4514/comments
https://api.github.com/repos/huggingface/datasets/issues/4514/events
https://github.com/huggingface/datasets/issues/4514
1,273,505,230
I_kwDODunzps5L6CXO
4,514
Allow .JPEG as a file extension
{ "login": "DiGyt", "id": 34550289, "node_id": "MDQ6VXNlcjM0NTUwMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/34550289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DiGyt", "html_url": "https://github.com/DiGyt", "followers_url": "https://api.github.com/users/DiGyt/followers", "following_url": "https://api.github.com/users/DiGyt/following{/other_user}", "gists_url": "https://api.github.com/users/DiGyt/gists{/gist_id}", "starred_url": "https://api.github.com/users/DiGyt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DiGyt/subscriptions", "organizations_url": "https://api.github.com/users/DiGyt/orgs", "repos_url": "https://api.github.com/users/DiGyt/repos", "events_url": "https://api.github.com/users/DiGyt/events{/privacy}", "received_events_url": "https://api.github.com/users/DiGyt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, thanks for reporting! I've opened a PR with the fix.", "Wow, that was quick! Thank you very much 🙏 " ]
1,655,382,980,000
1,655,713,126,000
null
NONE
null
## Describe the bug When loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. .JPEG. As the naming convention .JPEG is used in important datasets such as ImageNet, I would welcome it if corresponding extensions like .JPEG or .JPG were allowed. ## Steps to reproduce the bug ```python # use bash to create 2 sham datasets with jpeg and JPEG ext !mkdir dataset_a !mkdir dataset_b !wget https://upload.wikimedia.org/wikipedia/commons/7/71/Dsc_%28179253513%29.jpeg -O example_img.jpeg !cp example_img.jpeg ./dataset_a/ !mv example_img.jpeg ./dataset_b/example_img.JPEG from datasets import load_dataset # working df1 = load_dataset("./dataset_a", ignore_verifications=True) #not working df2 = load_dataset("./dataset_b", ignore_verifications=True) # show print(df1, df2) ``` ## Expected results ``` DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 1 }) }) DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 1 }) }) ``` ## Actual results ``` FileNotFoundError: Unable to resolve any data file that matches '['**']' at /..PATH../dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` I know that it can be annoying to allow seemingly arbitrary numbers of file extensions. But I think this one would be really welcome.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4514/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4513/comments
https://api.github.com/repos/huggingface/datasets/issues/4513/events
https://github.com/huggingface/datasets/pull/4513
1,273,450,338
PR_kwDODunzps45xTqv
4,513
Update Google Cloud Storage documentation and add Azure Blob Storage example
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4513). All of your documentation changes will be reflected on that endpoint.", "Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should remove the `>>>` in the Python blocks before the in-line code comments or keep them.\r\n\r\n![image](https://user-images.githubusercontent.com/36760800/174254663-b68d28d2-eae1-40f3-8695-dc4b0c3b479a.png)\r\n", "Comments are ignored by doctest, so I think we can remove the `>>>` :)", "Cool I'll remove those now 👍🏻" ]
1,655,379,969,000
1,655,565,402,000
null
CONTRIBUTOR
null
While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved, e.g. a bullet point says "Load your dataset" when the actual call is to "Save your dataset", an in-line code comment mentions "s3 bucket" instead of "gcs bucket", and some more in-line comments could be included. Also, I think that mixing the Google Cloud Storage documentation with the AWS S3 one was a little bit confusing, so I moved all of those to the end of the document under an h2 tab named "Other filesystems", with an h3 for "Google Cloud Storage". Besides that, I am currently working with Azure Blob Storage and found out that the URL to [adlfs](https://github.com/fsspec/adlfs) is common to both Azure Blob Storage and Azure DataLake Storage, so I decided to group those under the same row in the column of supported filesystems, and I updated the URL even though the redirect was working fine. I also took the chance to add a small documentation entry for Azure Blob Storage, like the one for Google Cloud Storage, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage are the most used cloud storage providers. Let me know if you're OK with these changes, or whether you want me to roll back some of them! :hugs:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4513/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4513", "html_url": "https://github.com/huggingface/datasets/pull/4513", "diff_url": "https://github.com/huggingface/datasets/pull/4513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4513.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4512/comments
https://api.github.com/repos/huggingface/datasets/issues/4512/events
https://github.com/huggingface/datasets/pull/4512
1,273,378,129
PR_kwDODunzps45xEDN
4,512
Add links to vision tasks scripts in ADD_NEW_DATASET template
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4512). All of your documentation changes will be reflected on that endpoint." ]
1,655,375,735,000
1,655,376,205,000
null
CONTRIBUTOR
null
Add links to vision dataset scripts in the ADD_NEW_DATASET template.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4512/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4512", "html_url": "https://github.com/huggingface/datasets/pull/4512", "diff_url": "https://github.com/huggingface/datasets/pull/4512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4512.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4511/comments
https://api.github.com/repos/huggingface/datasets/issues/4511/events
https://github.com/huggingface/datasets/pull/4511
1,273,336,874
PR_kwDODunzps45w7RN
4,511
Support all negative values in ClassLabel
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,373,579,000
1,655,388,250,000
null
MEMBER
null
We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100, for example). This is a regression from `datasets` 2.3. Fix https://github.com/huggingface/datasets/issues/4508
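A minimal sketch of the relaxed check (hypothetical, not the actual `ClassLabel` code): any negative value passes as a missing label, while out-of-range non-negative values still raise.

```python
def validate_class_labels(values, num_classes):
    # Hypothetical validator: negatives (e.g. -1 or -100) mean "missing",
    # so only values at or above num_classes are rejected.
    for value in values:
        if value >= num_classes:
            raise ValueError(f"Class label {value} greater than or equal to num_classes {num_classes}")

validate_class_labels([0, 1, -1, -100], num_classes=2)  # passes: negatives are treated as missing
```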
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4511/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4511", "html_url": "https://github.com/huggingface/datasets/pull/4511", "diff_url": "https://github.com/huggingface/datasets/pull/4511.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4511.patch", "merged_at": 1655387647000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4510/comments
https://api.github.com/repos/huggingface/datasets/issues/4510/events
https://github.com/huggingface/datasets/pull/4510
1,273,260,396
PR_kwDODunzps45wq6o
4,510
Add regression test for `ArrowWriter.write_batch` when batch is empty
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value." ]
1,655,369,631,000
1,655,383,082,000
null
CONTRIBUTOR
null
As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch`: although the docstring of the function says it "Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types.", the current if-statement does not properly handle `writer.write_batch({})`, and an error is triggered instead. Also, if we add a regression test in `test_arrow_writer.py::test_write_batch` before applying the fix, the test will fail when trying to write an empty batch, as follows: ``` =================================================================================== short test summary info =================================================================================== FAILED tests/test_arrow_writer.py::test_write_batch[None-None] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[None-1] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[None-10] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields1-None] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields1-1] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields1-10] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields2-None] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields2-1] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields2-10] - ValueError: Schema and number of arrays unequal ======================================================================== 9 failed, 73 deselected, 7 warnings in 0.81s ========================================================================= ``` So the batch is not ignored when empty, as `batch_examples={}` won't match the condition `if batch_examples: ...`.
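For context, a short sketch of the behavior the regression test pins down (the writer construction is an assumption, loosely following how the test suite uses `ArrowWriter`): an empty batch with matching columns must be ignored rather than raise.

```python
import pyarrow as pa
from datasets.arrow_writer import ArrowWriter

output = pa.BufferOutputStream()
writer = ArrowWriter(stream=output)
writer.write_batch({"col_1": [1, 2, 3]})
writer.write_batch({"col_1": []})  # empty batch with matching columns: ignored, no schema update
num_examples, num_bytes = writer.finalize()
print(num_examples)  # 3
```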
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4510/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4510", "html_url": "https://github.com/huggingface/datasets/pull/4510", "diff_url": "https://github.com/huggingface/datasets/pull/4510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4510.patch", "merged_at": 1655382499000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4509/comments
https://api.github.com/repos/huggingface/datasets/issues/4509/events
https://github.com/huggingface/datasets/pull/4509
1,273,227,760
PR_kwDODunzps45wkDl
4,509
Support skipping Parquet to Arrow conversion when using Beam
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4509). All of your documentation changes will be reflected on that endpoint." ]
1,655,367,938,000
1,655,477,803,000
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4509/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4509", "html_url": "https://github.com/huggingface/datasets/pull/4509", "diff_url": "https://github.com/huggingface/datasets/pull/4509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4509.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4508/comments
https://api.github.com/repos/huggingface/datasets/issues/4508/events
https://github.com/huggingface/datasets/issues/4508
1,272,718,921
I_kwDODunzps5L3CZJ
4,508
cast_storage method from datasets.features
{ "login": "romainremyb", "id": 67968596, "node_id": "MDQ6VXNlcjY3OTY4NTk2", "avatar_url": "https://avatars.githubusercontent.com/u/67968596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/romainremyb", "html_url": "https://github.com/romainremyb", "followers_url": "https://api.github.com/users/romainremyb/followers", "following_url": "https://api.github.com/users/romainremyb/following{/other_user}", "gists_url": "https://api.github.com/users/romainremyb/gists{/gist_id}", "starred_url": "https://api.github.com/users/romainremyb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/romainremyb/subscriptions", "organizations_url": "https://api.github.com/users/romainremyb/orgs", "repos_url": "https://api.github.com/users/romainremyb/repos", "events_url": "https://api.github.com/users/romainremyb/events{/privacy}", "received_events_url": "https://api.github.com/users/romainremyb/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type unbounded. ", "I'm fine with re-adding support for all negative values for unknown/missing labels @mariosasko, wdyt ?" ]
1,655,326,042,000
1,655,387,647,000
null
NONE
null
## Describe the bug A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when I run locally on an old version of datasets. ## Steps to reproduce the bug Steps are: - load whatever dataset - write a preprocessing function such as "tokenize_and_align_labels" written in https://huggingface.co/docs/transformers/tasks/token_classification - map the function on the dataset and get "ValueError: Class label -100 less than -1" from the cast_storage method from datasets.features # Sample code to reproduce the bug def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True, max_length=38,padding="max_length") labels = [] for i, label in enumerate(examples[f"labels"]): word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. previous_word_idx = None label_ids = [] for word_idx in word_ids: # Set the special tokens to -100. if word_idx is None: label_ids.append(-100) elif word_idx != previous_word_idx: # Only label the first token of a given word. label_ids.append(label[word_idx]) else: label_ids.append(-100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") dt = dataset.map(tokenize_and_align_labels, batched=True) ## Expected results New dataset objects should load and map as they do on older versions. ## Actual results "ValueError: Class label -100 less than -1" from the cast_storage method from datasets.features ## Environment info Everything works fine on older installations of datasets/transformers. The issue arises when installing datasets on Google Colab under Python 3.7. I can't manage to find the exact output you're requiring, but the version printed is datasets-2.3.2
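Following the suggestion in the comments above, a hedged sketch of the workaround (it continues the repro snippet above, and `Value("int64")` is assumed here as the concrete integer dtype): cast the `labels` column away from `Sequence(ClassLabel(...))` before mapping, so the -100 padding values are accepted.

```python
from datasets import Sequence, Value

# Continuing the repro above: relax the column type before mapping so the
# -100 padding labels no longer trip the ClassLabel range check.
dataset = dataset.cast_column("labels", Sequence(Value("int64")))
dt = dataset.map(tokenize_and_align_labels, batched=True)
```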
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4508/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4507/comments
https://api.github.com/repos/huggingface/datasets/issues/4507/events
https://github.com/huggingface/datasets/issues/4507
1,272,615,932
I_kwDODunzps5L2pP8
4,507
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.", "@albertvillanova Thanks! I can't believe I didn't know this feature till now." ]
1,655,319,394,000
1,655,376,008,000
null
NONE
null
If the dataset does not need splits (i.e., no training and validation splits; it is more like a table), how can I let the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair? Or I can paraphrase the question in the following way: how to skip the `_split_generators` step in `DatasetBuilder` to let `as_dataset` give a single `Dataset` rather than a `List[Dataset]`? Many thanks for any help.
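As the maintainers point out in the comments above, passing `split` already does this; a quick illustration (the dataset name is a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("dataset_name", split="train")  # "dataset_name" is a placeholder
print(type(ds))  # a Dataset, not a DatasetDict
```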
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4507/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4506/comments
https://api.github.com/repos/huggingface/datasets/issues/4506/events
https://github.com/huggingface/datasets/issues/4506
1,272,516,895
I_kwDODunzps5L2REf
4,506
Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results
{ "login": "DrMatters", "id": 22641583, "node_id": "MDQ6VXNlcjIyNjQxNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrMatters", "html_url": "https://github.com/DrMatters", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "repos_url": "https://api.github.com/users/DrMatters/repos", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`", "@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake", "Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```", "installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment" ]
1,655,313,091,000
1,655,381,890,000
null
NONE
null
## Describe the bug Sometimes I get messages about not being able to hash a method: `Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset. _map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` Whilst the function looks like this: ```python @staticmethod def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example): speaker_id, dialogue = tuple(zip(*(example["dialogue"]))) example["speaker_id"] = speaker_id example["dialogue"] = dialogue return example ``` This is the first step in my preprocessing pipeline, but sometimes the message about the failure to hash does not appear on the first step and then appears on a later step. This error sometimes causes cached data not to be used, so all steps are re-run instead. ## Steps to reproduce the bug ```python import copy import datasets from datasets import arrow_dataset def main(): dataset = datasets.load_dataset("blended_skill_talk") res = dataset.map(method) print(res) def method(example: arrow_dataset.Example): example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance']) return example if __name__ == '__main__': main() ``` Run with: ``` python -m reproduce_error ``` ## Expected results Dataset is mapped and cached correctly. ## Actual results The code outputs this at some point: `Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 20.04.3 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Datasets version: 2.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4506/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4505/comments
https://api.github.com/repos/huggingface/datasets/issues/4505/events
https://github.com/huggingface/datasets/pull/4505
1,272,477,226
PR_kwDODunzps45uH-o
4,505
Fix double dots in data files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)" ]
1,655,310,664,000
1,655,313,358,000
null
MEMBER
null
As mentioned in https://github.com/huggingface/transformers/pull/17715, `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot). I fixed this and added a test. cc @sgugger @ydshieh
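A hypothetical sketch of the distinction the fix has to draw (not the actual `datasets` pattern-matching code): a path component starting with a dot counts as hidden only if it is not the special `.` or `..` component.

```python
def is_hidden_component(part: str) -> bool:
    # "." and ".." are path navigation, not hidden entries (illustration only).
    return part.startswith(".") and part not in {".", ".."}

assert not is_hidden_component("..")  # parent-directory reference: allowed
assert is_hidden_component(".cache")  # genuinely hidden: ignored
```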
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4505/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4505/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4505", "html_url": "https://github.com/huggingface/datasets/pull/4505", "diff_url": "https://github.com/huggingface/datasets/pull/4505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4505.patch", "merged_at": 1655312753000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4504/comments
https://api.github.com/repos/huggingface/datasets/issues/4504/events
https://github.com/huggingface/datasets/issues/4504
1,272,418,480
I_kwDODunzps5L15Cw
4,504
Can you please add the Stanford dog dataset?
{ "login": "dgrnd4", "id": 69434832, "node_id": "MDQ6VXNlcjY5NDM0ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/69434832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dgrnd4", "html_url": "https://github.com/dgrnd4", "followers_url": "https://api.github.com/users/dgrnd4/followers", "following_url": "https://api.github.com/users/dgrnd4/following{/other_user}", "gists_url": "https://api.github.com/users/dgrnd4/gists{/gist_id}", "starred_url": "https://api.github.com/users/dgrnd4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dgrnd4/subscriptions", "organizations_url": "https://api.github.com/users/dgrnd4/orgs", "repos_url": "https://api.github.com/users/dgrnd4/repos", "events_url": "https://api.github.com/users/dgrnd4/events{/privacy}", "received_events_url": "https://api.github.com/users/dgrnd4/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)", "@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n", "Hi! The [ADD NEW DATASET](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) instructions are indeed the best place to start. It's also perfectly fine to add a dataset if it's public, even if it's not yours. Let me know if you need some additional pointers." ]
1,655,307,575,000
1,655,376,446,000
null
NONE
null
## Adding a Dataset - **Name:** *Stanford dog dataset* - **Description:** *The dataset contains 120 classes for a total of 20,580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/* - **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/* - **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)* - **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It is useful for fine-grained purposes.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4504/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4503/comments
https://api.github.com/repos/huggingface/datasets/issues/4503/events
https://github.com/huggingface/datasets/pull/4503
1,272,367,055
PR_kwDODunzps45twLR
4,503
Add feverous config to fever dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4503). All of your documentation changes will be reflected on that endpoint." ]
1,655,305,187,000
1,655,305,665,000
null
MEMBER
null
Related to: #4452 and #3792.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4503/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4503", "html_url": "https://github.com/huggingface/datasets/pull/4503", "diff_url": "https://github.com/huggingface/datasets/pull/4503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4503.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4502/comments
https://api.github.com/repos/huggingface/datasets/issues/4502/events
https://github.com/huggingface/datasets/issues/4502
1,272,353,700
I_kwDODunzps5L1pOk
4,502
Logic bug in arrow_writer?
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.", "Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.", "> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.", "Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.", "Hi ! We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```", "Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.", "> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`", "Great thanks for the response! So I'll just add that regression test and remove the current if-statement.", "Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. 
But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages/datasets/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```", "> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema" ]
1,655,304,600,000
1,655,565,351,000
null
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488 I got an error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows: ``` - if batch_examples and len(next(iter(batch_examples.values()))) == 0: + if not batch_examples or len(next(iter(batch_examples.values()))) == 0: return ``` @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4502/timeline
null
null
null
null
false
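A note on the thread in the record above: the distinction between `{}` (a batch with no columns) and `{"a": []}` (a batch with zero rows) is small enough to capture in a few lines. The sketch below is illustrative only, not the library's actual implementation.

```python
# Minimal sketch of the batch semantics discussed above (not datasets'
# actual code): an empty batch keeps its columns, while {} has no columns
# at all and should be allowed to fail the later schema check.
def is_empty_batch(batch_examples: dict) -> bool:
    return bool(batch_examples) and len(next(iter(batch_examples.values()))) == 0

assert is_empty_batch({"col_1": [], "col_2": []})  # zero rows, known columns
assert not is_empty_batch({"col_1": [1, 2, 3]})    # a regular batch
assert not is_empty_batch({})                      # no columns: not an "empty batch"
```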
https://api.github.com/repos/huggingface/datasets/issues/4501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4501/comments
https://api.github.com/repos/huggingface/datasets/issues/4501/events
https://github.com/huggingface/datasets/pull/4501
1,272,300,646
PR_kwDODunzps45th2M
4,501
Corrected broken links in doc
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,302,337,000
1,655,305,865,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4501/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4501/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4501", "html_url": "https://github.com/huggingface/datasets/pull/4501", "diff_url": "https://github.com/huggingface/datasets/pull/4501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4501.patch", "merged_at": 1655305256000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4500/comments
https://api.github.com/repos/huggingface/datasets/issues/4500/events
https://github.com/huggingface/datasets/pull/4500
1,272,281,992
PR_kwDODunzps45tdxk
4,500
Add `concatenate_datasets` for iterable datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4500). All of your documentation changes will be reflected on that endpoint." ]
1,655,301,530,000
1,655,476,905,000
null
MEMBER
null
`concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets` Fix https://github.com/huggingface/datasets/issues/2564 I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals And I moved `concatenate_datasets` from arrow_dataset.py to combine.py to have it with `interleave_datasets`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4500/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4500", "html_url": "https://github.com/huggingface/datasets/pull/4500", "diff_url": "https://github.com/huggingface/datasets/pull/4500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4500.patch", "merged_at": null }
true
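For context on the feature proposed above, a sketch of the intended usage, assuming the merged behavior mirrors `interleave_datasets`; the dataset name and splits are placeholders.

```python
from datasets import concatenate_datasets, load_dataset

# Two streaming (iterable) datasets with the same columns.
ds_train = load_dataset("squad", split="train", streaming=True)
ds_valid = load_dataset("squad", split="validation", streaming=True)

# Previously concatenate_datasets only accepted map-style Dataset objects.
combined = concatenate_datasets([ds_train, ds_valid])
print(next(iter(combined)))  # examples stream from ds_train first, then ds_valid
```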
https://api.github.com/repos/huggingface/datasets/issues/4499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4499/comments
https://api.github.com/repos/huggingface/datasets/issues/4499/events
https://github.com/huggingface/datasets/pull/4499
1,272,118,162
PR_kwDODunzps45s6Jh
4,499
fix ETT m1/m2 test/val dataset
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits", "ah yes!" ]
1,655,293,862,000
1,655,304,956,000
null
CONTRIBUTOR
null
https://huggingface.co/datasets/ett/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4499/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4499/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4499", "html_url": "https://github.com/huggingface/datasets/pull/4499", "diff_url": "https://github.com/huggingface/datasets/pull/4499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4499.patch", "merged_at": 1655304312000 }
true
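Since the fix above changes the test/val splits, the expected sizes recorded in `dataset_infos.json` must be regenerated to match. A quick way to inspect the recorded sizes from Python; the `m1` config name is assumed from the PR title.

```python
from datasets import load_dataset_builder

# Print the split sizes recorded in the dataset's metadata (no download needed).
builder = load_dataset_builder("ett", "m1")  # config name assumed
for split_name, split_info in builder.info.splits.items():
    print(split_name, split_info.num_examples)
```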
https://api.github.com/repos/huggingface/datasets/issues/4498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4498/comments
https://api.github.com/repos/huggingface/datasets/issues/4498/events
https://github.com/huggingface/datasets/issues/4498
1,272,100,549
I_kwDODunzps5L0rbF
4,498
WER and CER > 1
{ "login": "sadrasabouri", "id": 43045767, "node_id": "MDQ6VXNlcjQzMDQ1NzY3", "avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadrasabouri", "html_url": "https://github.com/sadrasabouri", "followers_url": "https://api.github.com/users/sadrasabouri/followers", "following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}", "gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions", "organizations_url": "https://api.github.com/users/sadrasabouri/orgs", "repos_url": "https://api.github.com/users/sadrasabouri/repos", "events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}", "received_events_url": "https://api.github.com/users/sadrasabouri/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0" ]
1,655,292,912,000
1,655,311,085,000
null
NONE
null
## Describe the bug It seems that in some cases in which the `prediction` is longer than the `reference` we may get a word/character error rate higher than 1, which is a bit odd. If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to ```python return min(incorrect / total, 1.0) ``` ## Steps to reproduce the bug ```python from datasets import load_metric wer = load_metric("wer") wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"]) print(wer_value) ``` ## Expected results ``` 1.0 ``` ## Actual results ``` 3.0 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4498/timeline
null
null
null
null
false
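To make the arithmetic behind the answer above concrete: WER is (S + D + I) / N, where N counts reference words, so insertions alone can push it past 1.0. The operation counts below are worked out by hand for the reported example.

```python
reference = "Hello".split()           # N = 1 reference word
prediction = "Hi World vka".split()   # 1 substitution + 2 insertions

substitutions, deletions, insertions = 1, 0, 2
wer = (substitutions + deletions + insertions) / len(reference)
print(wer)  # 3.0 -- matches the metric's (correct) output
```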
https://api.github.com/repos/huggingface/datasets/issues/4497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4497/comments
https://api.github.com/repos/huggingface/datasets/issues/4497/events
https://github.com/huggingface/datasets/pull/4497
1,271,964,338
PR_kwDODunzps45sYns
4,497
Re-add download_manager module in utils
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.", "It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```", "As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it", "Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later" ]
1,655,286,273,000
1,655,289,208,000
null
MEMBER
null
https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager` This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager` This PR re-adds `datasets.utils.download_manager` without circular imports. We could also show a message that says that accessing it is deprecated, but I think we can do this in a subsequent PR, and just focus on doing a patch release for now
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4497/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4497", "html_url": "https://github.com/huggingface/datasets/pull/4497", "diff_url": "https://github.com/huggingface/datasets/pull/4497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4497.patch", "merged_at": 1655288624000 }
true
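The record above re-adds a module purely for backward compatibility. A minimal sketch of such a shim, assuming the post-reorganization module paths; the library's actual file may differ.

```python
# src/datasets/utils/download_manager.py -- hedged sketch of a compat shim.
# Old imports such as `from datasets.utils.download_manager import DownloadMode`
# keep working because the moved names are re-exported here.
from datasets.download.download_config import DownloadConfig  # noqa: F401
from datasets.download.download_manager import DownloadManager, DownloadMode  # noqa: F401
```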
https://api.github.com/repos/huggingface/datasets/issues/4496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4496/comments
https://api.github.com/repos/huggingface/datasets/issues/4496/events
https://github.com/huggingface/datasets/pull/4496
1,271,945,704
PR_kwDODunzps45sUnW
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4496). All of your documentation changes will be reflected on that endpoint.", "FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!" ]
1,655,285,356,000
1,655,286,253,000
null
CONTRIBUTOR
null
As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4496/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4496", "html_url": "https://github.com/huggingface/datasets/pull/4496", "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "merged_at": null }
true
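For reference on the change above: `assertEqual` already dispatches to the tuple comparison when both operands are tuples, so the switch mainly states intent; `assertTupleEqual` additionally fails if either operand is not a tuple. A self-contained example:

```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_dataset_shape(self):
        shape = (2, 3)
        # Same comparison as assertEqual for two tuples, but the intent is
        # explicit and a non-tuple operand fails with a clear type message.
        self.assertTupleEqual(shape, (2, 3))

if __name__ == "__main__":
    unittest.main()
```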
https://api.github.com/repos/huggingface/datasets/issues/4495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4495/comments
https://api.github.com/repos/huggingface/datasets/issues/4495/events
https://github.com/huggingface/datasets/pull/4495
1,271,851,025
PR_kwDODunzps45sAgO
4,495
Fix patching module that doesn't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,281,070,000
1,655,311,249,000
null
MEMBER
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead, it should do nothing rather than raise an error. Bug introduced by #4375 Fix https://github.com/huggingface/datasets/issues/4494
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4495/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4495", "html_url": "https://github.com/huggingface/datasets/pull/4495", "diff_url": "https://github.com/huggingface/datasets/pull/4495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4495.patch", "merged_at": 1655283249000 }
true
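A minimal sketch of the kind of guard the fix above implies (not the library's exact code): only patch a function when its parent module can actually be imported.

```python
import importlib

def patch_if_available(module_name: str, attr: str, replacement) -> None:
    """Replace module.attr if the module is importable; otherwise do nothing."""
    try:
        module = importlib.import_module(module_name)
    except ModuleNotFoundError:
        return  # e.g. scipy is not installed: no-op instead of raising
    setattr(module, attr, replacement)

patch_if_available("scipy.io", "loadmat", lambda *args, **kwargs: None)  # harmless if scipy is absent
```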
https://api.github.com/repos/huggingface/datasets/issues/4494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4494/comments
https://api.github.com/repos/huggingface/datasets/issues/4494/events
https://github.com/huggingface/datasets/issues/4494
1,271,850,599
I_kwDODunzps5LzuZn
4,494
Patching fails for modules that are not installed or don't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,655,281,049,000
1,655,283,249,000
null
MEMBER
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead, it should do nothing rather than raise an error. We use patching to extend such functions to support remote URLs and work in streaming mode
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4494/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4493/comments
https://api.github.com/repos/huggingface/datasets/issues/4493/events
https://github.com/huggingface/datasets/pull/4493
1,271,306,385
PR_kwDODunzps45qL7J
4,493
Add `@transmit_format` in `flatten`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! 🤗 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this! Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this." ]
1,655,237,349,000
1,655,357,277,000
null
CONTRIBUTOR
null
As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should add the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated. **Edit**: according to @mariosasko's comment below, the decorator `@transmit_format` doesn't handle column renaming, so it's done manually for those instead.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4493/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4493", "html_url": "https://github.com/huggingface/datasets/pull/4493", "diff_url": "https://github.com/huggingface/datasets/pull/4493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4493.patch", "merged_at": null }
true
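A rough sketch of what a format-transmitting decorator does, per the discussion above. The real `@transmit_format` in `datasets` is more involved and, as noted, cannot follow column renames; the attribute names below are internal and used here for illustration only.

```python
import functools

def transmit_format_sketch(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # Capture the caller's format, run the operation, re-apply the format.
        format_type = self._format_type        # internal Dataset attribute
        format_columns = self._format_columns  # internal Dataset attribute
        out = method(self, *args, **kwargs)
        out.set_format(type=format_type, columns=format_columns)
        return out
    return wrapper
```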
https://api.github.com/repos/huggingface/datasets/issues/4492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4492/comments
https://api.github.com/repos/huggingface/datasets/issues/4492/events
https://github.com/huggingface/datasets/pull/4492
1,271,112,497
PR_kwDODunzps45pktu
4,492
Pin the revision in imagenet download links
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,226,917,000
1,655,228,113,000
null
MEMBER
null
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split it into many more shards for better parallelism. cc @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4492/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4492", "html_url": "https://github.com/huggingface/datasets/pull/4492", "diff_url": "https://github.com/huggingface/datasets/pull/4492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4492.patch", "merged_at": 1655227545000 }
true
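The idea in the record above, sketched as loading-script constants; the sha and file layout are placeholders, not the real ones.

```python
# Pinning Hub download URLs to a commit sha keeps a loading script stable
# even if the files on the main branch are later moved or re-sharded.
_COMMIT_SHA = "0123456789abcdef0123456789abcdef01234567"  # placeholder sha
_BASE_URL = f"https://huggingface.co/datasets/imagenet-1k/resolve/{_COMMIT_SHA}/data"

_TRAIN_SHARDS = [f"{_BASE_URL}/train_images_{i}.tar.gz" for i in range(4)]  # layout assumed
```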
https://api.github.com/repos/huggingface/datasets/issues/4491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4491/comments
https://api.github.com/repos/huggingface/datasets/issues/4491/events
https://github.com/huggingface/datasets/issues/4491
1,270,803,822
I_kwDODunzps5Lvu1u
4,491
Dataset Viewer issue for Pavithree/test
{ "login": "Pavithree", "id": 23344465, "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pavithree", "html_url": "https://github.com/Pavithree", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "repos_url": "https://api.github.com/users/Pavithree/repos", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset." ]
1,655,212,990,000
1,655,217,441,000
null
NONE
null
### Link https://huggingface.co/datasets/Pavithree/test ### Description I have extracted a subset of the original eli5 dataset found on Hugging Face. However, while loading the dataset it throws an ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missing from my end? Kindly help. ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4491/timeline
null
null
null
null
false
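One way to sidestep the cast-to-null failure described above is to declare the schema explicitly instead of letting pyarrow infer it from a first record full of nulls. The column names and file path below are illustrative.

```python
from datasets import Features, Value, load_dataset

# Explicit features prevent a leading null value from being inferred as the
# type of the whole column.
features = Features({"title": Value("string"), "answer": Value("string")})
ds = load_dataset("json", data_files="train.json", features=features)
```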
https://api.github.com/repos/huggingface/datasets/issues/4490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4490/comments
https://api.github.com/repos/huggingface/datasets/issues/4490/events
https://github.com/huggingface/datasets/issues/4490
1,270,719,074
I_kwDODunzps5LvaJi
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,655,209,180,000
1,655,209,180,000
null
CONTRIBUTOR
null
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4490/timeline
null
null
null
null
false
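For reference, a sketch of the prototype API mentioned above. The entry point has moved between PyTorch releases (`torch.nested_tensor` in the early prototype, `torch.nested.nested_tensor` later), so treat the exact name as version-dependent.

```python
import torch

# One nested tensor holding two rows of different lengths, without padding.
sequences = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
nt = torch.nested.nested_tensor(sequences)
print(nt.is_nested)  # True
```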
https://api.github.com/repos/huggingface/datasets/issues/4489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4489/comments
https://api.github.com/repos/huggingface/datasets/issues/4489/events
https://github.com/huggingface/datasets/pull/4489
1,270,706,195
PR_kwDODunzps45oONF
4,489
Add SV-Ident dataset
{ "login": "e-tornike", "id": 20404466, "node_id": "MDQ6VXNlcjIwNDA0NDY2", "avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-tornike", "html_url": "https://github.com/e-tornike", "followers_url": "https://api.github.com/users/e-tornike/followers", "following_url": "https://api.github.com/users/e-tornike/following{/other_user}", "gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}", "starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions", "organizations_url": "https://api.github.com/users/e-tornike/orgs", "repos_url": "https://api.github.com/users/e-tornike/repos", "events_url": "https://api.github.com/users/e-tornike/events{/privacy}", "received_events_url": "https://api.github.com/users/e-tornike/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @e-tornike, thanks a lot for adding this interesting dataset.\r\n\r\nRecently at Hugging Face, we have decided to give priority to adding datasets directly on the Hub. Would you mind to transfer your loading script to the Hub? You could create a dedicated org namespace, so that your dataset would be accessible using `vadis/sv_ident` or `sdproc/sv_ident` or `coling/sv_ident` (as you prefer).\r\n\r\nYou have an example here: https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus", "Additionally, please feel free to ping us if you need assistance/help in creating this dataset.\r\n\r\nYou could put the link to your Hub dataset here in this Issue discussion page, so that we can follow the progress. :)", "Hi @albertvillanova, thanks for the feedback! Uploading via the Hub is a lot easier! \r\n\r\nI've uploaded the dataset here: https://huggingface.co/datasets/vadis/sv-ident, but it's reporting a \"Status400Error\". Is there any way to see the logs of the dataset script and what is causing the error?", "Hi @e-tornike, good job at https://huggingface.co/datasets/vadis/sv-ident.\r\n\r\nNormally, you can run locally the loading of the dataset by passing `streaming=True` (as the previewer does):\r\n```python\r\nds = load_dataset(\"path/to/sv_ident.py, split=\"train\", streaming=True)\r\nitem = next(iter(ds))\r\nitem\r\n```\r\n\r\nLet me have a look and open a discussion on your Hub repo! ;)", "I've opened an Issue: \r\n- #4527 " ]
1,655,208,540,000
1,655,714,906,000
null
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4489/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4489", "html_url": "https://github.com/huggingface/datasets/pull/4489", "diff_url": "https://github.com/huggingface/datasets/pull/4489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4489.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4488/comments
https://api.github.com/repos/huggingface/datasets/issues/4488/events
https://github.com/huggingface/datasets/pull/4488
1,270,613,857
PR_kwDODunzps45n6Ja
4,488
Update PASS dataset version
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,203,634,000
1,655,224,915,000
null
CONTRIBUTOR
null
Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4488/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4488", "html_url": "https://github.com/huggingface/datasets/pull/4488", "diff_url": "https://github.com/huggingface/datasets/pull/4488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4488.patch", "merged_at": 1655224348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4487/comments
https://api.github.com/repos/huggingface/datasets/issues/4487/events
https://github.com/huggingface/datasets/pull/4487
1,270,525,163
PR_kwDODunzps45nm5J
4,487
Support streaming UDHR dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,199,213,000
1,655,269,762,000
null
MEMBER
null
This PR: - Adds support for streaming the UDHR dataset - Adds the BCP 47 language code as a feature
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4487/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4487", "html_url": "https://github.com/huggingface/datasets/pull/4487", "diff_url": "https://github.com/huggingface/datasets/pull/4487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4487.patch", "merged_at": 1655269189000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4486/comments
https://api.github.com/repos/huggingface/datasets/issues/4486/events
https://github.com/huggingface/datasets/pull/4486
1,269,518,084
PR_kwDODunzps45kP88
4,486
Add CCAgT dataset
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4486). All of your documentation changes will be reflected on that endpoint.", "Hi! Excellent job @johnnv1! There were typos/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?", "I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?", "Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation", "I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)" ]
1,655,130,019,000
1,655,717,862,000
null
NONE
null
As described in #4075, I could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data split. In summary, to get a better distribution, the data in this dataset should be split based on the number of NORs in each image.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4486/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4486", "html_url": "https://github.com/huggingface/datasets/pull/4486", "diff_url": "https://github.com/huggingface/datasets/pull/4486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4486.patch", "merged_at": null }
true
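A hedged sketch of the object-detection feature schema the thread above converges on; the field names follow the discussion, and the final script may differ.

```python
from datasets import Features, Image, Sequence, Value

features = Features(
    {
        "image": Image(),
        "objects": Sequence(
            {
                "bbox": Sequence(Value("float32"), length=4),  # one box per object
                "label": Value("int64"),  # category id, no dummy background label
            }
        ),
    }
)
```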
https://api.github.com/repos/huggingface/datasets/issues/4485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4485/comments
https://api.github.com/repos/huggingface/datasets/issues/4485/events
https://github.com/huggingface/datasets/pull/4485
1,269,463,054
PR_kwDODunzps45kD7A
4,485
Fix cast to null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,127,872,000
1,655,214,234,000
null
MEMBER
null
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type. Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type). Fix https://github.com/huggingface/datasets/issues/4483
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4485/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4485", "html_url": "https://github.com/huggingface/datasets/pull/4485", "diff_url": "https://github.com/huggingface/datasets/pull/4485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4485.patch", "merged_at": 1655213654000 }
true
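The underlying pyarrow behavior that the fix above normalizes can be reproduced directly; before the patch, this `ArrowNotImplementedError` leaked out where `datasets` expected a `TypeError`.

```python
import pyarrow as pa

arr = pa.array([1, 2, 3])  # inferred as int64
try:
    arr.cast(pa.null())
except pa.ArrowNotImplementedError as err:
    print(type(err).__name__, "-", err)  # Unsupported cast from int64 to null
```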
https://api.github.com/repos/huggingface/datasets/issues/4484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4484/comments
https://api.github.com/repos/huggingface/datasets/issues/4484/events
https://github.com/huggingface/datasets/pull/4484
1,269,383,811
PR_kwDODunzps45jywZ
4,484
Better ImportError message when a dataset script dependency is missing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Discussed offline with @mariosasko, merging :)" ]
1,655,124,277,000
1,655,128,831,000
null
MEMBER
null
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ``` to ``` ImportError: To be able to use bigbench, you need to install the following dependency: bigbench. Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4484/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4484", "html_url": "https://github.com/huggingface/datasets/pull/4484", "diff_url": "https://github.com/huggingface/datasets/pull/4484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4484.patch", "merged_at": 1655128247000 }
true
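A sketch of the message cleanup described above, as an illustrative helper rather than the library's actual function: deduplicate the dependency list while preserving order, and pick singular or plural wording accordingly.

```python
def format_missing_deps(dataset_name: str, deps: list) -> str:
    deps = list(dict.fromkeys(deps))  # ["bigbench"] * 4 -> ["bigbench"], order kept
    noun = "dependency" if len(deps) == 1 else "dependencies"
    return (
        f"To be able to use {dataset_name}, you need to install the "
        f"following {noun}: {', '.join(deps)}."
    )

print(format_missing_deps("bigbench", ["bigbench"] * 4))
```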
