Dataset schema (column, dtype, value statistics):

| column | dtype | values |
|---|---|---|
| url | string | lengths 61–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 51–51 |
| id | int64 | 1.29B–1.57B |
| node_id | string | lengths 18–18 |
| number | int64 | 4.59k–5.51k |
| title | string | lengths 10–165 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | int64 | 0–48 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 values |
| active_lock_reason | null | |
| body | string | lengths 51–33.9k |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| draft | bool | 0 classes |
| pull_request | dict | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/5426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5426/comments
https://api.github.com/repos/huggingface/datasets/issues/5426/events
https://github.com/huggingface/datasets/issues/5426
1,535,158,555
I_kwDODunzps5bgKkb
5,426
CI tests are broken: SchemaInferenceError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2023-01-16T16:02:07
2023-01-17T07:17:12
2023-01-16T16:49:04
MEMBER
null
CI is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004 ``` FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data ``` Stack trace: ``` ______________ BeamBuilderTest.test_download_and_prepare_sharded _______________ [gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded> @require_beam def test_download_and_prepare_sharded(self): import apache_beam as beam original_write_parquet = beam.io.parquetio.WriteToParquet expected_num_examples = len(get_test_dummy_examples()) with tempfile.TemporaryDirectory() as tmp_cache_dir: builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner") with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock: write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2) > builder.download_and_prepare() tests/test_beam.py:97: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare **download_and_prepare_kwargs, /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter)) /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize shard_num_bytes, _ = parquet_to_arrow(source, destination) /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow num_bytes, num_examples = writer.finalize() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810> close_stream = True def finalize(self, close_stream=True): self.write_rows_on_file() # In case current_examples < writer_batch_size, but user uses finalize() if self._check_duplicates: self.check_duplicate_keys() # Re-intializing to empty list for next batch self.hkey_record = [] self.write_examples_on_file() # If schema is known, infer features even if no examples were written if self.pa_writer is None and self.schema: self._build_writer(self.schema) if self.pa_writer is not None: self.pa_writer.close() self.pa_writer = None if close_stream: self.stream.close() else: if close_stream: self.stream.close() > raise SchemaInferenceError("Please pass `features` or at least one example when writing data") E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5426/timeline
null
completed
null
null
false
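The `SchemaInferenceError` in issue 5426 above is raised when `ArrowWriter.finalize()` runs before any features or examples reach the writer. A minimal sketch of that condition, assuming the `datasets.arrow_writer` API quoted in the stack trace (this is not the Beam test itself):

```python
from datasets.arrow_writer import ArrowWriter, SchemaInferenceError

# Finalizing a writer that never saw features or examples reproduces the error
# from the stack trace above; passing `features=` or writing one example avoids it.
writer = ArrowWriter(path="empty.arrow")
try:
    writer.finalize()
except SchemaInferenceError as err:
    print(err)  # "Please pass `features` or at least one example when writing data"
```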
https://api.github.com/repos/huggingface/datasets/issues/5425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5425/comments
https://api.github.com/repos/huggingface/datasets/issues/5425/events
https://github.com/huggingface/datasets/issues/5425
1,534,581,850
I_kwDODunzps5bd9xa
5,425
Sort on multiple keys with datasets.Dataset.sort()
{ "login": "rocco-fortuna", "id": 101344863, "node_id": "U_kgDOBgpmXw", "avatar_url": "https://avatars.githubusercontent.com/u/101344863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rocco-fortuna", "html_url": "https://github.com/rocco-fortuna", "followers_url": "https://api.github.com/users/rocco-fortuna/followers", "following_url": "https://api.github.com/users/rocco-fortuna/following{/other_user}", "gists_url": "https://api.github.com/users/rocco-fortuna/gists{/gist_id}", "starred_url": "https://api.github.com/users/rocco-fortuna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rocco-fortuna/subscriptions", "organizations_url": "https://api.github.com/users/rocco-fortuna/orgs", "repos_url": "https://api.github.com/users/rocco-fortuna/repos", "events_url": "https://api.github.com/users/rocco-fortuna/events{/privacy}", "received_events_url": "https://api.github.com/users/rocco-fortuna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
9
2023-01-16T09:22:26
2023-02-03T13:42:23
null
NONE
null
### Feature request From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1 `sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function. The suggested solution: > ... having something similar to pandas and be able to specify multiple columns for sorting. We’re already using pandas under the hood to do the sorting in datasets. The suggested workaround: > convert your dataset to pandas and use `df.sort_values()` ### Motivation Preserved ordering when sorting is very handy when one needs to sort on multiple columns, A and B, so that e.g. whenever A is equal for two or more rows, B is kept sorted. Having a parameter to do this in 🤗datasets would be cleaner than going through pandas and back, and it wouldn't add much complexity to the library. Alternatives: - the possibility to specify multiple keys to sort by with decreasing priority (suggested solution), - the ability to provide a key function for sorting, so that one can manually specify the sorting criteria. ### Your contribution I'll be happy to contribute by submitting a PR. Will get documented on `CONTRIBUTING.MD`. Would love to get thoughts on this, if anyone has anything to add.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5425/timeline
null
null
null
null
false
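The pandas round-trip suggested as a workaround in issue 5425 above can be sketched as follows; this illustrates the interim approach, not the proposed multi-key `Dataset.sort` API:

```python
from datasets import Dataset

ds = Dataset.from_dict({"A": [2, 1, 1], "B": [0, 2, 1]})

# Sort on column A, breaking ties with column B, via pandas and back.
df = ds.to_pandas().sort_values(by=["A", "B"], kind="stable")
sorted_ds = Dataset.from_pandas(df, preserve_index=False)
print(sorted_ds["A"], sorted_ds["B"])  # [1, 1, 2] [1, 2, 0]
```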
https://api.github.com/repos/huggingface/datasets/issues/5424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5424/comments
https://api.github.com/repos/huggingface/datasets/issues/5424/events
https://github.com/huggingface/datasets/issues/5424
1,534,394,756
I_kwDODunzps5bdQGE
5,424
When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset?
{ "login": "macabdul9", "id": 25720695, "node_id": "MDQ6VXNlcjI1NzIwNjk1", "avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4", "gravatar_id": "", "url": "https://api.github.com/users/macabdul9", "html_url": "https://github.com/macabdul9", "followers_url": "https://api.github.com/users/macabdul9/followers", "following_url": "https://api.github.com/users/macabdul9/following{/other_user}", "gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}", "starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions", "organizations_url": "https://api.github.com/users/macabdul9/orgs", "repos_url": "https://api.github.com/users/macabdul9/repos", "events_url": "https://api.github.com/users/macabdul9/events{/privacy}", "received_events_url": "https://api.github.com/users/macabdul9/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-01-16T06:54:28
2023-01-19T15:09:14
null
NONE
null
### Describe the bug I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. Although the ReadInstruction is being applied correctly and I was expecting it to be `DatasetDict` but instead it is a list of `Dataset`. ### Steps to reproduce the bug Steps to reproduce the behaviour: 1. Import `from datasets import load_dataset, ReadInstruction` 2. Instruction to load the dataset ``` instructions = [ ReadInstruction(split_name="train", from_=0, to=10, unit='%', rounding='closest'), ReadInstruction(split_name="dev", from_=0, to=10, unit='%', rounding='closest'), ReadInstruction(split_name="test", from_=0, to=5, unit='%', rounding='closest') ] ``` 3. Load `dataset = load_dataset('csv', data_dir="data/", data_files={"train":"train.tsv", "dev":"dev.tsv", "test":"test.tsv"}, delimiter="\t", split=instructions)` ### Expected behavior **Current behaviour** ![Screenshot from 2023-01-16 10-45-27](https://user-images.githubusercontent.com/25720695/212614754-306898d8-8c27-4475-9bb8-0321bd939561.png) : **Expected behaviour** ![Screenshot from 2023-01-16 10-45-42](https://user-images.githubusercontent.com/25720695/212614813-0d336bf7-5266-482e-bb96-ef51f64de204.png) ### Environment info ``datasets==2.8.0 `` `Python==3.8.5 ` `Platform - Ubuntu 20.04.4 LTS`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5424/timeline
null
null
null
null
false
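In issue 5424 above, passing a list of `ReadInstruction`s as `split` makes `load_dataset` return the splits as a list, in the same order as the instructions. A sketch of rebuilding the expected `DatasetDict` from that list (paths and split names follow the example above and are illustrative):

```python
from datasets import DatasetDict, ReadInstruction, load_dataset

instructions = {
    "train": ReadInstruction("train", from_=0, to=10, unit="%", rounding="closest"),
    "dev": ReadInstruction("dev", from_=0, to=10, unit="%", rounding="closest"),
    "test": ReadInstruction("test", from_=0, to=5, unit="%", rounding="closest"),
}
splits = load_dataset(
    "csv",
    data_dir="data/",
    data_files={"train": "train.tsv", "dev": "dev.tsv", "test": "test.tsv"},
    delimiter="\t",
    split=list(instructions.values()),
)
# Re-wrap the returned list of Dataset objects into a DatasetDict keyed by split name.
dataset = DatasetDict(dict(zip(instructions.keys(), splits)))
```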
https://api.github.com/repos/huggingface/datasets/issues/5422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5422/comments
https://api.github.com/repos/huggingface/datasets/issues/5422/events
https://github.com/huggingface/datasets/issues/5422
1,533,385,239
I_kwDODunzps5bZZoX
5,422
Datasets load error for saved github issues
{ "login": "folterj", "id": 7360564, "node_id": "MDQ6VXNlcjczNjA1NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7360564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/folterj", "html_url": "https://github.com/folterj", "followers_url": "https://api.github.com/users/folterj/followers", "following_url": "https://api.github.com/users/folterj/following{/other_user}", "gists_url": "https://api.github.com/users/folterj/gists{/gist_id}", "starred_url": "https://api.github.com/users/folterj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/folterj/subscriptions", "organizations_url": "https://api.github.com/users/folterj/orgs", "repos_url": "https://api.github.com/users/folterj/repos", "events_url": "https://api.github.com/users/folterj/events{/privacy}", "received_events_url": "https://api.github.com/users/folterj/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-01-14T17:29:38
2023-01-16T13:10:30
null
NONE
null
### Describe the bug Loading a previously downloaded & saved dataset as described in the HuggingFace course: issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") Gives this error: datasets.builder.DatasetGenerationError: An error occurred while generating the dataset A work-around I found was to use streaming. ### Steps to reproduce the bug Reproduce by executing the code provided: https://huggingface.co/course/chapter5/5?fw=pt From the heading: 'let’s create a function that can download all the issues from a GitHub repository' ### Expected behavior No error ### Environment info Datasets version 2.8.0. Note that version 2.6.1 gives the same error (related to null timestamp). **[EDIT]** This is the complete error trace confirming the issue is related to the timestamp (`Couldn't cast array of type timestamp[s] to null`) ``` Using custom data configuration default-950028611d2860c8 Downloading and preparing dataset json/default to [...]/.cache/huggingface/datasets/json/default-950028611d2860c8/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100%|██████████| 1/1 [00:00<?, ?it/s] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 500.63it/s] Generating train split: 2619 examples [00:00, 7155.72 examples/s]Traceback (most recent call last): File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1831, in _prepare_split_single writer.write_table(table) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\arrow_writer.py", line 567, in write_table pa_table = table_cast(pa_table, self._schema) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2282, in table_cast return cast_table_to_schema(table, schema) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper return func(array, *args, **kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2101, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper return func(array, *args, **kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1990, in array_cast raise TypeError(f"Couldn't cast 
array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type timestamp[s] to null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 1, in <module> File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "[...]\PycharmProjects\TransformersTesting\dataset_issues.py", line 20, in <module> issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\load.py", line 1757, in load_dataset builder_instance.download_and_prepare( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 860, in download_and_prepare self._download_and_prepare( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 953, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1706, in _prepare_split for job_id, done, content in self._prepare_split_single( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1849, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset Generating train split: 2619 examples [00:19, 7155.72 examples/s] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5422/timeline
null
null
null
null
false
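The streaming workaround mentioned in issue 5422 above can be sketched as follows (the file path is the one from the course example):

```python
from datasets import load_dataset

# Work-around from the report above: streaming yields examples lazily and skips
# the Arrow schema cast where "Couldn't cast array of type timestamp[s] to null" is raised.
streamed = load_dataset(
    "json", data_files="issues/datasets-issues.jsonl", split="train", streaming=True
)
for example in streamed:
    print(example["title"])
    break
```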
https://api.github.com/repos/huggingface/datasets/issues/5421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5421/comments
https://api.github.com/repos/huggingface/datasets/issues/5421/events
https://github.com/huggingface/datasets/issues/5421
1,532,278,307
I_kwDODunzps5bVLYj
5,421
Support case-insensitive Hub dataset name in load_dataset
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2023-01-13T13:07:07
2023-01-13T20:12:32
2023-01-13T20:12:32
CONTRIBUTOR
null
### Feature request The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue. Ideally, we could load the glue dataset using the following: ``` from datasets import load_dataset load_dataset('GLUE', 'cola') ``` It breaks because the loading script `GLUE.py` does not exist (`glue.py` should be selected instead). Minor additional comment: in other cases without a loading script, we can load the dataset, but the automatically generated config name depends on the casing: - `load_dataset('severo/danish-wit')` generates the config name `severo--danish-wit-e6fda5b070deb133`, while - `load_dataset('severo/danish-WIT')` generates the config name `severo--danish-WIT-e6fda5b070deb133` ### Motivation To follow the same UX on the Hub and in the datasets library. ### Your contribution ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5421/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5421/timeline
null
completed
null
null
false
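Until the library handles this, a possible client-side workaround for issue 5421 above is to resolve the canonical repo id before loading. This sketch assumes the Hub API applies the same case-insensitive redirect as the website:

```python
from huggingface_hub import HfApi
from datasets import load_dataset

requested = "GLUE"
# Resolve the canonical repo id (e.g. "glue") before calling load_dataset.
canonical = HfApi().dataset_info(requested).id
dataset = load_dataset(canonical, "cola")
```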
https://api.github.com/repos/huggingface/datasets/issues/5419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5419/comments
https://api.github.com/repos/huggingface/datasets/issues/5419/events
https://github.com/huggingface/datasets/issues/5419
1,531,999,850
I_kwDODunzps5bUHZq
5,419
label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator
{ "login": "CreatixEA", "id": 172385, "node_id": "MDQ6VXNlcjE3MjM4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/172385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CreatixEA", "html_url": "https://github.com/CreatixEA", "followers_url": "https://api.github.com/users/CreatixEA/followers", "following_url": "https://api.github.com/users/CreatixEA/following{/other_user}", "gists_url": "https://api.github.com/users/CreatixEA/gists{/gist_id}", "starred_url": "https://api.github.com/users/CreatixEA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CreatixEA/subscriptions", "organizations_url": "https://api.github.com/users/CreatixEA/orgs", "repos_url": "https://api.github.com/users/CreatixEA/repos", "events_url": "https://api.github.com/users/CreatixEA/events{/privacy}", "received_events_url": "https://api.github.com/users/CreatixEA/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-01-13T09:40:07
2023-01-19T15:46:51
null
NONE
null
### Describe the bug When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator` the default column name is `label` if binary or `label_ids` if multi-class problem. It is required to rename the column accordingly to the expected name : `label` or `label_ids` ### Steps to reproduce the bug ```python from datasets import TextClassification, AutoTokenized, DataCollatorWithPadding ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0')) print(ds_prepared) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ds_tokenized = ds_prepared.map(lambda x: tokenizer(x['text'], truncation=True), batched=True) print(ds_tokenized) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") tf_data = model.prepare_tf_dataset(ds_tokenized, shuffle=True, batch_size=16, collate_fn=data_collator) print(tf_data) ``` ### Expected behavior Without renaming the the column, the target column is not in the final tf_data since it is not in the column name expected by the data_collator. To correct this, we have to rename the column: ```python ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0')).rename_column('labels', 'label') ``` ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 - `transformers` version: 4.26.0.dev0 - Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5419/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5418/comments
https://api.github.com/repos/huggingface/datasets/issues/5418/events
https://github.com/huggingface/datasets/issues/5418
1,530,111,184
I_kwDODunzps5bM6TQ
5,418
Add ProgressBar for `to_parquet`
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false } ]
null
4
2023-01-12T05:06:20
2023-01-24T18:18:24
2023-01-24T18:18:24
CONTRIBUTOR
null
### Feature request Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works. ### Motivation It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar ### Your contribution Sure I can help if needed
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5418/timeline
null
completed
null
null
false
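For issue 5418 above, until `Dataset.to_parquet` gains a progress bar, one possible workaround is to stream the table out in slices with `pyarrow` and `tqdm`. The helper name and batch size below are illustrative, and simple (non-decoded) column types are assumed:

```python
import pyarrow as pa
import pyarrow.parquet as pq
from tqdm.auto import tqdm

def to_parquet_with_progress(ds, path, batch_size=10_000):
    # Write the dataset in slices so tqdm can show how far the export has gotten.
    schema = ds.features.arrow_schema
    with pq.ParquetWriter(path, schema) as writer:
        for start in tqdm(range(0, len(ds), batch_size), desc="to_parquet"):
            batch = ds[start : start + batch_size]  # dict of column -> list of values
            writer.write_table(pa.Table.from_pydict(batch, schema=schema))
```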
https://api.github.com/repos/huggingface/datasets/issues/5415
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5415/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5415/comments
https://api.github.com/repos/huggingface/datasets/issues/5415/events
https://github.com/huggingface/datasets/issues/5415
1,526,904,861
I_kwDODunzps5bArgd
5,415
RuntimeError: Sharding is ambiguous for this dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2023-01-10T07:36:11
2023-01-18T14:09:04
2023-01-18T14:09:03
MEMBER
null
### Describe the bug When loading some datasets, a RuntimeError is raised. For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3 ``` .../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1415 fpath = path_join(self._output_dir, fname) 1416 -> 1417 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs) 1418 if num_input_shards <= 1 and num_proc is not None: 1419 logger.warning( .../huggingface/datasets/src/datasets/utils/sharding.py in _number_of_shards_in_gen_kwargs(gen_kwargs) 10 lists_lengths = {key: len(value) for key, value in gen_kwargs.items() if isinstance(value, list)} 11 if len(set(lists_lengths.values())) > 1: ---> 12 raise RuntimeError( 13 ( 14 "Sharding is ambiguous for this dataset: " RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key samples_paths has length 6 - key ids has length 7 - key verification_ids has length 6 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length. ``` This behavior was introduced when implementing multiprocessing by PR: - #5107 ### Steps to reproduce the bug ```python ds = load_dataset("ami", "microphone-single", split="train", revision="2d7620bb7c3f1aab9f329615c3bdb598069d907a") ``` ### Expected behavior No error raised. ### Environment info Since datasets 2.7.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5415/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5415/timeline
null
completed
null
null
false
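The error message in issue 5415 above comes from a check that counts only list-valued `gen_kwargs` as shardable data sources. A self-contained sketch of that rule and of the tuple-based fix it suggests (the key names mirror the error message; the values are dummies):

```python
def number_of_shards_in_gen_kwargs(gen_kwargs):
    # Same rule as the traceback above: only list values count as data sources.
    lists_lengths = {k: len(v) for k, v in gen_kwargs.items() if isinstance(v, list)}
    if len(set(lists_lengths.values())) > 1:
        raise RuntimeError("Sharding is ambiguous for this dataset")
    return max(lists_lengths.values(), default=1)

# Ambiguous: two lists of different lengths (6 vs 7), as in the "ami" error above.
bad = {"samples_paths": ["p"] * 6, "ids": ["i"] * 7, "verification_ids": ["v"] * 6}
# Fixed: parallelize over one list only; the other sequences become tuples.
good = {"samples_paths": ["p"] * 6, "ids": tuple("i" * 7), "verification_ids": tuple("v" * 6)}
print(number_of_shards_in_gen_kwargs(good))  # 6
```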
https://api.github.com/repos/huggingface/datasets/issues/5414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5414/comments
https://api.github.com/repos/huggingface/datasets/issues/5414/events
https://github.com/huggingface/datasets/issues/5414
1,525,733,818
I_kwDODunzps5a8Nm6
5,414
Sharding error with Multilingual LibriSpeech
{ "login": "Nithin-Holla", "id": 19574344, "node_id": "MDQ6VXNlcjE5NTc0MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nithin-Holla", "html_url": "https://github.com/Nithin-Holla", "followers_url": "https://api.github.com/users/Nithin-Holla/followers", "following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}", "gists_url": "https://api.github.com/users/Nithin-Holla/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nithin-Holla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nithin-Holla/subscriptions", "organizations_url": "https://api.github.com/users/Nithin-Holla/orgs", "repos_url": "https://api.github.com/users/Nithin-Holla/repos", "events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}", "received_events_url": "https://api.github.com/users/Nithin-Holla/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
4
2023-01-09T14:45:31
2023-01-18T14:09:04
2023-01-18T14:09:04
NONE
null
### Describe the bug Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace: ``` Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/2.1.0/1904af50f57a5c370c9364cc337699cfe496d4e9edcae6648a96be23086362d0... Downloading data files: 100% 3/3 [00:00<00:00, 107.23it/s] Downloading data files: 100% 1/1 [00:00<00:00, 35.08it/s] Downloading data files: 100% 6/6 [00:00<00:00, 303.36it/s] Downloading data files: 100% 3/3 [00:00<00:00, 130.37it/s] Downloading data files: 100% 1049/1049 [00:00<00:00, 4491.40it/s] Downloading data files: 100% 37/37 [00:00<00:00, 1096.78it/s] Downloading data files: 100% 40/40 [00:00<00:00, 1003.93it/s] Extracting data files: 100% 3/3 [00:11<00:00, 2.62s/it] Generating train split: 469942/0 [34:13<00:00, 273.21 examples/s] Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-14-74fa6d092bdc> in <module> ----> 1 mls = load_dataset(MLS_DATASET, 2 LANGUAGE, 3 cache_dir="~/datadrive/cache/huggingface/datasets", 4 ignore_verifications=True) /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1755 1756 # Download and prepare data -> 1757 builder_instance.download_and_prepare( 1758 download_config=download_config, 1759 download_mode=download_mode, /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 858 if num_proc is not None: 859 prepare_split_kwargs["num_proc"] = num_proc --> 860 self._download_and_prepare( 861 dl_manager=dl_manager, 862 verify_infos=verify_infos, /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1609 1610 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): ... RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_archives has length 1049 - key local_extracted_archive has length 1049 - key limited_ids_paths has length 1 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length. ``` ### Steps to reproduce the bug Here is the code to reproduce it: ```python from datasets import load_dataset MLS_DATASET = "facebook/multilingual_librispeech" LANGUAGE = "german" mls = load_dataset(MLS_DATASET, LANGUAGE, cache_dir="~/datadrive/cache/huggingface/datasets", ignore_verifications=True) ``` ### Expected behavior The expected behaviour is that the dataset is successfully loaded. 
### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-1094-azure-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 10.0.1 - Pandas version: 1.2.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5414/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5414/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5413/comments
https://api.github.com/repos/huggingface/datasets/issues/5413/events
https://github.com/huggingface/datasets/issues/5413
1,524,591,837
I_kwDODunzps5a32zd
5,413
concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers
{ "login": "ZeguanXiao", "id": 38279341, "node_id": "MDQ6VXNlcjM4Mjc5MzQx", "avatar_url": "https://avatars.githubusercontent.com/u/38279341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeguanXiao", "html_url": "https://github.com/ZeguanXiao", "followers_url": "https://api.github.com/users/ZeguanXiao/followers", "following_url": "https://api.github.com/users/ZeguanXiao/following{/other_user}", "gists_url": "https://api.github.com/users/ZeguanXiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeguanXiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeguanXiao/subscriptions", "organizations_url": "https://api.github.com/users/ZeguanXiao/orgs", "repos_url": "https://api.github.com/users/ZeguanXiao/repos", "events_url": "https://api.github.com/users/ZeguanXiao/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeguanXiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
2023-01-08T17:01:52
2023-01-26T09:27:21
2023-01-26T09:27:21
NONE
null
### Describe the bug When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails: ``` File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5499, in _concatenate_map_style_datasets table = concat_tables([dset._data for dset in dsets], axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1778, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1483, in from_tables blocks = _extend_blocks(blocks, table_blocks, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1477, in _extend_blocks result[i].extend(row_blocks) IndexError: list index out of range ``` ### Steps to reproduce the bug dataset = concatenate_datasets([dataset1, dataset2], axis = 1) ### Expected behavior The datasets are correctly concatenated. ### Environment info datasets==2.8.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5413/timeline
null
completed
null
null
false
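For the `concatenate_datasets(..., axis=1)` failure in issue 5413 above, a hedged interim sketch is to merge column-wise with `Dataset.add_column` instead. This assumes both datasets have the same number of rows and disjoint column names, and is shown on toy data rather than the sharded tables from the report:

```python
from datasets import Dataset

dataset1 = Dataset.from_dict({"a": [1, 2, 3]})
dataset2 = Dataset.from_dict({"b": ["x", "y", "z"]})

# Column-wise merge: equivalent to concatenate_datasets([dataset1, dataset2], axis=1)
# for this simple case.
merged = dataset1
for name in dataset2.column_names:
    merged = merged.add_column(name, dataset2[name])
print(merged.column_names)  # ['a', 'b']
```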
https://api.github.com/repos/huggingface/datasets/issues/5412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5412/comments
https://api.github.com/repos/huggingface/datasets/issues/5412/events
https://github.com/huggingface/datasets/issues/5412
1,524,250,269
I_kwDODunzps5a2jad
5,412
load_dataset() cannot find dataset_info.json with multiple training runs in parallel
{ "login": "destigres", "id": 7139344, "node_id": "MDQ6VXNlcjcxMzkzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/destigres", "html_url": "https://github.com/destigres", "followers_url": "https://api.github.com/users/destigres/followers", "following_url": "https://api.github.com/users/destigres/following{/other_user}", "gists_url": "https://api.github.com/users/destigres/gists{/gist_id}", "starred_url": "https://api.github.com/users/destigres/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/destigres/subscriptions", "organizations_url": "https://api.github.com/users/destigres/orgs", "repos_url": "https://api.github.com/users/destigres/repos", "events_url": "https://api.github.com/users/destigres/events{/privacy}", "received_events_url": "https://api.github.com/users/destigres/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-01-08T00:44:32
2023-01-19T20:28:43
2023-01-19T20:28:43
NONE
null
### Describe the bug I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error. If there is a workaround to ignore the cache I think that would solve my problem too. I am using datasets version 2.8.0. ### Steps to reproduce the bug 1. Start training run of GPU 0 loading dataset from ``` load_dataset( "json", data_files=tr_dataset_path, split=f"train", download_mode="force_redownload", ) ``` 2. While GPU 0 is training, start an identical run on GPU 1. GPU 1 will produce the following error: ``` Traceback (most recent call last): File "/local-scratch1/data/mt/code/qq/train.py", line 198, in <module> main() File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/local-scratch1/data/mt/code/qq/train.py", line 113, in main load_dataset( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1734, in load_dataset builder_instance = load_dataset_builder( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1518, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/builder.py", line 366, in __init__ self.info = DatasetInfo.from_directory(self._cache_dir) File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/info.py", line 313, in from_directory with fs.open(path_join(dataset_info_dir, config.DATASET_INFO_FILENAME), "r", encoding="utf-8") as f: File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1094, in open self.open( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1106, in open f = self._open( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 175, in _open return LocalFileOpener(path, mode, fs=self, **kwargs) File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 273, in __init__ self._open() File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 278, in _open self.f = open(self.path, mode=self.mode) FileNotFoundError: [Errno 2] No such file or directory: '/home/username/.cache/huggingface/datasets/json/default-43d06a4aedb25e6d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/dataset_info.json' ``` ### Expected behavior Expected behavior: 2nd GPU training run should run the same as 1st GPU training run. ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 9.0.0 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5412/timeline
null
completed
null
null
false
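A sketch of the per-run cache workaround hinted at in issue 5412 above; the cache path and the way the run id is derived are illustrative, and the original `tr_dataset_path` is replaced by a placeholder:

```python
import os
from datasets import load_dataset

# Give each training run its own cache directory so concurrent runs never read a
# half-written dataset_info.json from another process's cache.
run_id = os.environ.get("CUDA_VISIBLE_DEVICES", "0")
tr_dataset = load_dataset(
    "json",
    data_files="train.json",  # placeholder for tr_dataset_path
    split="train",
    cache_dir=f"./hf_cache/run_{run_id}",
    download_mode="force_redownload",
)
```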
https://api.github.com/repos/huggingface/datasets/issues/5408
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5408/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5408/comments
https://api.github.com/repos/huggingface/datasets/issues/5408/events
https://github.com/huggingface/datasets/issues/5408
1,519,890,752
I_kwDODunzps5al7FA
5,408
dataset map function could not be hash properly
{ "login": "Tungway1990", "id": 68179274, "node_id": "MDQ6VXNlcjY4MTc5Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/68179274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tungway1990", "html_url": "https://github.com/Tungway1990", "followers_url": "https://api.github.com/users/Tungway1990/followers", "following_url": "https://api.github.com/users/Tungway1990/following{/other_user}", "gists_url": "https://api.github.com/users/Tungway1990/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tungway1990/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tungway1990/subscriptions", "organizations_url": "https://api.github.com/users/Tungway1990/orgs", "repos_url": "https://api.github.com/users/Tungway1990/repos", "events_url": "https://api.github.com/users/Tungway1990/events{/privacy}", "received_events_url": "https://api.github.com/users/Tungway1990/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-01-05T01:59:59
2023-01-06T13:22:19
2023-01-06T13:22:18
NONE
null
### Describe the bug I follow the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to finetune a Cantonese transcribe model. When using map function to prepare dataset, following warning pop out: `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)` > Parameter 'function'=<function prepare_dataset at 0x000001D1D9D79A60> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. I read https://github.com/huggingface/datasets/issues/4521 and https://github.com/huggingface/datasets/issues/3178 but cannot solve the issue. ### Steps to reproduce the bug ```python from datasets import load_dataset, DatasetDict common_voice = DatasetDict() common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "zh-HK", split="train+validation") common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "zh-HK", split="test") common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"]) from transformers import WhisperFeatureExtractor, WhisperTokenizer, WhisperProcessor feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="chinese", task="transcribe") processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="chinese", task="transcribe") from datasets import Audio common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000)) def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch["sentence"]).input_ids return batch common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1) ``` ### Expected behavior Should be no warning shown. ### Environment info - `datasets` version: 2.7.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5 - dill version: 0.3.4 - multiprocess version: 0.70.12.2
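A small diagnostic sketch (not a fix) that may help narrow this down: `datasets.fingerprint.Hasher` is what `map` uses for caching, so hashing each object the function closes over shows which one cannot be serialized. It assumes the same `openai/whisper-small` objects as in the snippet above.

```python
from datasets.fingerprint import Hasher
from transformers import WhisperFeatureExtractor, WhisperTokenizer

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="chinese", task="transcribe")

# Hash each object prepare_dataset closes over; the one that raises (or changes
# between runs) is what breaks the fingerprint and triggers the warning.
for name, obj in [("feature_extractor", feature_extractor), ("tokenizer", tokenizer)]:
    try:
        print(name, Hasher.hash(obj))
    except Exception as err:
        print(name, "could not be hashed:", err)
```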
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5408/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5407/comments
https://api.github.com/repos/huggingface/datasets/issues/5407/events
https://github.com/huggingface/datasets/issues/5407
1,519,797,345
I_kwDODunzps5alkRh
5,407
Datasets.from_sql() generates deprecation warning
{ "login": "msummerfield", "id": 21002157, "node_id": "MDQ6VXNlcjIxMDAyMTU3", "avatar_url": "https://avatars.githubusercontent.com/u/21002157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/msummerfield", "html_url": "https://github.com/msummerfield", "followers_url": "https://api.github.com/users/msummerfield/followers", "following_url": "https://api.github.com/users/msummerfield/following{/other_user}", "gists_url": "https://api.github.com/users/msummerfield/gists{/gist_id}", "starred_url": "https://api.github.com/users/msummerfield/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/msummerfield/subscriptions", "organizations_url": "https://api.github.com/users/msummerfield/orgs", "repos_url": "https://api.github.com/users/msummerfield/repos", "events_url": "https://api.github.com/users/msummerfield/events{/privacy}", "received_events_url": "https://api.github.com/users/msummerfield/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2023-01-05T00:43:17
2023-01-06T10:59:14
2023-01-06T10:59:14
NONE
null
### Describe the bug Calling `Datasets.from_sql()` generates a warning: `.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.` ### Steps to reproduce the bug Any valid call to `Datasets.from_sql()` will produce the deprecation warning. ### Expected behavior No warning. The fix should be simply to remove the parameter `use_auth_token` from the call to `builder.download_and_prepare()` at line 43 of `io/sql.py` (it is set to `None` anyway, and is not needed). ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-4.15.0-169-generic-x86_64-with-glibc2.27 - Python version: 3.9.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
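Until the extra argument is removed, a hedged stopgap is to silence just this `FutureWarning` around the call; the SQLite table below is purely illustrative.

```python
import sqlite3
import warnings

from datasets import Dataset

# Tiny throwaway table so the example is self-contained.
con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS t (a INTEGER, b TEXT)")
con.execute("INSERT INTO t VALUES (1, 'x')")
con.commit()

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message=".*use_auth_token.*", category=FutureWarning)
    ds = Dataset.from_sql("SELECT * FROM t", con)

print(ds)
```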
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5407/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5406/comments
https://api.github.com/repos/huggingface/datasets/issues/5406/events
https://github.com/huggingface/datasets/issues/5406
1,519,140,544
I_kwDODunzps5ajD7A
5,406
[2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
9
2023-01-04T15:10:04
2023-02-02T13:03:14
null
MEMBER
null
`datasets` 2.6.1 and 2.7.0 stopped supporting datasets such as IMDB, CoNLL or MNIST. When loading certain datasets with 2.6.1 or 2.7.0, you may see this error: ```python TypeError: can only concatenate str (not "int") to str ``` This is because we started updating the metadata of those datasets to a format that is not supported in 2.6.1 and 2.7.0. This change is required, or those datasets won't be supported by the Hugging Face Hub. Therefore, if you encounter this error or if you're using `datasets` 2.6.1 or 2.7.0, we encourage you to update to a newer version. For example, versions 2.6.2 and 2.7.1 patch this issue: ```bash pip install -U datasets ``` All the affected datasets are the ones with a ClassLabel feature type and YAML "dataset_info" metadata. More info [here](https://github.com/huggingface/datasets/issues/5275). We apologize for the inconvenience.
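For scripts that must not run on an affected version, a small guard along these lines can fail fast; it only checks the two versions named above.

```python
import datasets

AFFECTED_VERSIONS = {"2.6.1", "2.7.0"}

if datasets.__version__ in AFFECTED_VERSIONS:
    raise RuntimeError(
        f"datasets {datasets.__version__} cannot read the updated ClassLabel metadata; "
        "run `pip install -U datasets` (>= 2.6.2 / 2.7.1)."
    )
```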
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5406/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5406/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5405
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5405/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5405/comments
https://api.github.com/repos/huggingface/datasets/issues/5405/events
https://github.com/huggingface/datasets/issues/5405
1,517,879,386
I_kwDODunzps5aeQBa
5,405
size_in_bytes the same for all splits
{ "login": "Breakend", "id": 1609857, "node_id": "MDQ6VXNlcjE2MDk4NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Breakend", "html_url": "https://github.com/Breakend", "followers_url": "https://api.github.com/users/Breakend/followers", "following_url": "https://api.github.com/users/Breakend/following{/other_user}", "gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}", "starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Breakend/subscriptions", "organizations_url": "https://api.github.com/users/Breakend/orgs", "repos_url": "https://api.github.com/users/Breakend/repos", "events_url": "https://api.github.com/users/Breakend/events{/privacy}", "received_events_url": "https://api.github.com/users/Breakend/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-01-03T20:25:48
2023-01-04T09:22:59
null
NONE
null
### Describe the bug Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example: ``` >>> from datasets import load_dataset >>> x = load_dataset("glue", "wnli") Found cached dataset glue (/Users/breakend/.cache/huggingface/datasets/glue/wnli/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1097.70it/s] >>> x["train"].size_in_bytes 186159 >>> x["validation"].size_in_bytes 186159 >>> x["test"].size_in_bytes 186159 >>> ``` ### Steps to reproduce the bug ``` >>> from datasets import load_dataset >>> x = load_dataset("glue", "wnli") >>> x["train"].size_in_bytes 186159 >>> x["validation"].size_in_bytes 186159 >>> x["test"].size_in_bytes 186159 ``` ### Expected behavior The expected behavior is that it should return the separate sizes for all splits. ### Environment info - `datasets` version: 2.7.1 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
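In the meantime, per-split byte counts are available from the split metadata rather than `size_in_bytes`; a short sketch using the same `glue`/`wnli` example:

```python
from datasets import load_dataset

x = load_dataset("glue", "wnli")

# size_in_bytes aggregates over the whole config, but SplitInfo keeps per-split numbers.
for name, split_info in x["train"].info.splits.items():
    print(name, split_info.num_bytes, "bytes,", split_info.num_examples, "examples")
```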
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5405/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5404
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5404/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5404/comments
https://api.github.com/repos/huggingface/datasets/issues/5404/events
https://github.com/huggingface/datasets/issues/5404
1,517,566,331
I_kwDODunzps5adDl7
5,404
Better integration of BIG-bench
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
2023-01-03T15:37:57
2023-01-31T15:04:04
null
MEMBER
null
### Feature request Ideally, it would be nice to have a maintained PyPI package for `bigbench`. ### Motivation We'd like to allow anyone to access, explore and use any task. ### Your contribution @lhoestq has opened an issue in their repo: - https://github.com/google/BIG-bench/issues/906
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5404/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5402
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5402/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5402/comments
https://api.github.com/repos/huggingface/datasets/issues/5402/events
https://github.com/huggingface/datasets/issues/5402
1,517,409,429
I_kwDODunzps5acdSV
5,402
Missing state.json when creating a cloud dataset using a dataset_builder
{ "login": "danielfleischer", "id": 22022514, "node_id": "MDQ6VXNlcjIyMDIyNTE0", "avatar_url": "https://avatars.githubusercontent.com/u/22022514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danielfleischer", "html_url": "https://github.com/danielfleischer", "followers_url": "https://api.github.com/users/danielfleischer/followers", "following_url": "https://api.github.com/users/danielfleischer/following{/other_user}", "gists_url": "https://api.github.com/users/danielfleischer/gists{/gist_id}", "starred_url": "https://api.github.com/users/danielfleischer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielfleischer/subscriptions", "organizations_url": "https://api.github.com/users/danielfleischer/orgs", "repos_url": "https://api.github.com/users/danielfleischer/repos", "events_url": "https://api.github.com/users/danielfleischer/events{/privacy}", "received_events_url": "https://api.github.com/users/danielfleischer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-01-03T13:39:59
2023-01-04T17:23:57
null
NONE
null
### Describe the bug Using `load_dataset_builder` to create a builder and running `download_and_prepare` to upload it to S3 works, but when trying to load it back, the `state.json` files are missing. Complete example: ```python from aiobotocore.session import AioSession as Session from datasets import load_from_disk, load_dataset, load_dataset_builder import s3fs storage_options = {"session": Session()} fs = s3fs.S3FileSystem(**storage_options) output_dir = "s3://bucket/imdb" builder = load_dataset_builder("imdb") builder.download_and_prepare(output_dir, storage_options=storage_options) load_from_disk(output_dir, fs=fs) # ERROR # [Errno 2] No such file or directory: '/tmp/tmpy22yys8o/bucket/imdb/state.json' ``` As a comparison, if you use the non-lazy `load_dataset`, it works and the S3 folder has a different structure plus state.json files. Example: ```python from aiobotocore.session import AioSession as Session from datasets import load_from_disk, load_dataset, load_dataset_builder import s3fs storage_options = {"session": Session()} fs = s3fs.S3FileSystem(**storage_options) output_dir = "s3://bucket/imdb" dataset = load_dataset("imdb") dataset.save_to_disk(output_dir, fs=fs) load_from_disk(output_dir, fs=fs) # WORKS ``` You still want the first option for the laziness and the parquet conversion. Thanks! ### Steps to reproduce the bug ```python from aiobotocore.session import AioSession as Session from datasets import load_from_disk, load_dataset, load_dataset_builder import s3fs storage_options = {"session": Session()} fs = s3fs.S3FileSystem(**storage_options) output_dir = "s3://bucket/imdb" builder = load_dataset_builder("imdb") builder.download_and_prepare(output_dir, storage_options=storage_options) load_from_disk(output_dir, fs=fs) # ERROR # [Errno 2] No such file or directory: '/tmp/tmpy22yys8o/bucket/imdb/state.json' ``` BTW, you need the AioSession because s3fs is now based on aiobotocore, see https://github.com/fsspec/s3fs/issues/385. ### Expected behavior Expected to be able to load the dataset from S3. ### Environment info ``` s3fs 2022.11.0 s3transfer 0.6.0 datasets 2.8.0 aiobotocore 2.4.2 boto3 1.24.59 botocore 1.27.59 ``` Python 3.7.15.
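For reference, the prepared output is meant to be read back with the packaged `parquet`/`arrow` builder rather than `load_from_disk`, which expects the `save_to_disk` layout with `state.json`. Below is a local-filesystem sketch of that round trip, assuming `file_format="parquet"`; the S3 variant would additionally need the remote path plumbing this issue is about.

```python
import glob

from datasets import load_dataset, load_dataset_builder

output_dir = "/tmp/imdb_prepared"  # local path to keep the sketch simple

builder = load_dataset_builder("imdb")
builder.download_and_prepare(output_dir, file_format="parquet")

# No state.json is written here, so list the parquet shards and load them directly.
parquet_files = glob.glob(f"{output_dir}/**/*.parquet", recursive=True)
ds = load_dataset("parquet", data_files=parquet_files, split="train")
print(ds)
```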
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5402/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5399/comments
https://api.github.com/repos/huggingface/datasets/issues/5399/events
https://github.com/huggingface/datasets/issues/5399
1,515,548,427
I_kwDODunzps5aVW8L
5,399
Got disconnected from remote data host. Retrying in 5sec [2/20]
{ "login": "alhuri", "id": 46427957, "node_id": "MDQ6VXNlcjQ2NDI3OTU3", "avatar_url": "https://avatars.githubusercontent.com/u/46427957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alhuri", "html_url": "https://github.com/alhuri", "followers_url": "https://api.github.com/users/alhuri/followers", "following_url": "https://api.github.com/users/alhuri/following{/other_user}", "gists_url": "https://api.github.com/users/alhuri/gists{/gist_id}", "starred_url": "https://api.github.com/users/alhuri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alhuri/subscriptions", "organizations_url": "https://api.github.com/users/alhuri/orgs", "repos_url": "https://api.github.com/users/alhuri/repos", "events_url": "https://api.github.com/users/alhuri/events{/privacy}", "received_events_url": "https://api.github.com/users/alhuri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-01-01T13:00:11
2023-01-02T07:21:52
2023-01-02T07:21:52
NONE
null
### Describe the bug While trying to upload my image dataset of a CSV file type to huggingface by running the below code. The dataset consists of a little over 100k of image-caption pairs ### Steps to reproduce the bug ``` df = pd.read_csv('x.csv', encoding='utf-8-sig') features = Features({ 'link': Image(decode=True), 'caption': Value(dtype='string'), }) #make sure u r logged in to HF ds = Dataset.from_pandas(df, features=features) ds.features ds.push_to_hub("x/x") ``` I got the below error and It always stops at the same progress ``` 100%|██████████| 4/4 [23:53<00:00, 358.48s/ba] 100%|██████████| 4/4 [24:37<00:00, 369.47s/ba]%|▍ | 1/22 [00:06<02:09, 6.16s/it] 100%|██████████| 4/4 [25:00<00:00, 375.15s/ba]%|▉ | 2/22 [25:54<2:36:15, 468.80s/it] 100%|██████████| 4/4 [24:53<00:00, 373.29s/ba]%|█▎ | 3/22 [51:01<4:07:07, 780.39s/it] 100%|██████████| 4/4 [24:01<00:00, 360.34s/ba]%|█▊ | 4/22 [1:17:00<5:04:07, 1013.74s/it] 100%|██████████| 4/4 [23:59<00:00, 359.91s/ba]%|██▎ | 5/22 [1:41:07<5:24:06, 1143.90s/it] 100%|██████████| 4/4 [24:16<00:00, 364.06s/ba]%|██▋ | 6/22 [2:05:14<5:29:15, 1234.74s/it] 100%|██████████| 4/4 [25:24<00:00, 381.10s/ba]%|███▏ | 7/22 [2:29:38<5:25:52, 1303.52s/it] 100%|██████████| 4/4 [25:24<00:00, 381.24s/ba]%|███▋ | 8/22 [2:56:02<5:23:46, 1387.58s/it] 100%|██████████| 4/4 [25:08<00:00, 377.23s/ba]%|████ | 9/22 [3:22:24<5:13:17, 1445.97s/it] 100%|██████████| 4/4 [24:11<00:00, 362.87s/ba]%|████▌ | 10/22 [3:48:24<4:56:02, 1480.19s/it] 100%|██████████| 4/4 [24:44<00:00, 371.11s/ba]%|█████ | 11/22 [4:12:42<4:30:10, 1473.66s/it] 100%|██████████| 4/4 [24:35<00:00, 368.81s/ba]%|█████▍ | 12/22 [4:37:34<4:06:29, 1478.98s/it] 100%|██████████| 4/4 [24:02<00:00, 360.67s/ba]%|█████▉ | 13/22 [5:03:24<3:45:04, 1500.45s/it] 100%|██████████| 4/4 [24:07<00:00, 361.78s/ba]%|██████▎ | 14/22 [5:27:33<3:17:59, 1484.97s/it] 100%|██████████| 4/4 [23:39<00:00, 354.85s/ba]%|██████▊ | 15/22 [5:51:48<2:52:10, 1475.82s/it] Pushing dataset shards to the dataset hub: 73%|███████▎ | 16/22 [6:16:58<2:28:37, 1486.31s/it]Got disconnected from remote data host. Retrying in 5sec [1/20] Got disconnected from remote data host. Retrying in 5sec [2/20] Got disconnected from remote data host. Retrying in 5sec [3/20] Got disconnected from remote data host. Retrying in 5sec [4/20] Got disconnected from remote data host. Retrying in 5sec [5/20] Got disconnected from remote data host. Retrying in 5sec [6/20] Got disconnected from remote data host. Retrying in 5sec [7/20] Got disconnected from remote data host. Retrying in 5sec [8/20] Got disconnected from remote data host. Retrying in 5sec [9/20] ... Got disconnected from remote data host. Retrying in 5sec [19/20] Got disconnected from remote data host. Retrying in 5sec [20/20] 75%|███████▌ | 3/4 [24:47<08:15, 495.86s/ba] Pushing dataset shards to the dataset hub: 73%|███████▎ | 16/22 [6:41:46<2:30:39, 1506.65s/it] Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-1-dbf8530779e9> in <module> 16 ds.features ``` ### Expected behavior I was trying to upload an image dataset and expected it to be fully uploaded ### Environment info - `datasets` version: 2.8.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.9 - PyArrow version: 10.0.1 - Pandas version: 1.3.5
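A hedged mitigation while the upload is flaky: push smaller shards so each HTTP request is shorter, and wrap the call in a coarse retry loop. `username/dataset-name` is a placeholder, the toy dataset stands in for the real image data, and re-running may still re-upload some shards.

```python
import time

from datasets import Dataset

ds = Dataset.from_dict({"caption": ["a", "b", "c"]})  # stand-in for the real image dataset

for attempt in range(5):
    try:
        ds.push_to_hub("username/dataset-name", max_shard_size="200MB")
        break
    except Exception as err:  # ideally narrow this to the connection error seen above
        print(f"push failed ({err!r}); retrying in 30s [{attempt + 1}/5]")
        time.sleep(30)
```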
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5399/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5398/comments
https://api.github.com/repos/huggingface/datasets/issues/5398/events
https://github.com/huggingface/datasets/issues/5398
1,514,425,231
I_kwDODunzps5aREuP
5,398
Unpin pydantic
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-12-30T10:37:31
2022-12-30T10:43:41
2022-12-30T10:43:41
MEMBER
null
Once `pydantic` fixes their issue in their 1.10.3 version, unpin it. See issue: - #5394 See temporary fix: - #5395
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5398/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5394/comments
https://api.github.com/repos/huggingface/datasets/issues/5394/events
https://github.com/huggingface/datasets/issues/5394
1,513,976,229
I_kwDODunzps5aPXGl
5,394
CI error: TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-12-29T18:58:44
2022-12-30T10:40:51
2022-12-29T21:00:27
MEMBER
null
### Describe the bug While installing the dependencies, the CI raises a TypeError: ``` Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 142, in _get_module_details return _get_module_details(pkg_main_name, error) File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 109, in _get_module_details __import__(pkg_name) File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/spacy/__init__.py", line 6, in <module> from .errors import setup_default_warnings File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/spacy/errors.py", line 2, in <module> from .compat import Literal File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/spacy/compat.py", line 3, in <module> from thinc.util import copy_array File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/thinc/__init__.py", line 5, in <module> from .config import registry File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/thinc/config.py", line 2, in <module> import confection File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/confection/__init__.py", line 10, in <module> from pydantic import BaseModel, create_model, ValidationError, Extra File "pydantic/__init__.py", line 2, in init pydantic.__init__ File "pydantic/dataclasses.py", line 46, in init pydantic.dataclasses # | None | Attribute is set to None. | File "pydantic/main.py", line 121, in init pydantic.main TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers' ``` See: https://github.com/huggingface/datasets/actions/runs/3793736481/jobs/6466356565 ### Steps to reproduce the bug ```shell pip install .[tests,metrics-tests] python -m spacy download en_core_web_sm ``` ### Expected behavior No error. ### Environment info See: https://github.com/huggingface/datasets/actions/runs/3793736481/jobs/6466356565
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5394/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5391/comments
https://api.github.com/repos/huggingface/datasets/issues/5391/events
https://github.com/huggingface/datasets/issues/5391
1,510,350,400
I_kwDODunzps5aBh5A
5,391
Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it]
{ "login": "catswithbats", "id": 12885107, "node_id": "MDQ6VXNlcjEyODg1MTA3", "avatar_url": "https://avatars.githubusercontent.com/u/12885107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/catswithbats", "html_url": "https://github.com/catswithbats", "followers_url": "https://api.github.com/users/catswithbats/followers", "following_url": "https://api.github.com/users/catswithbats/following{/other_user}", "gists_url": "https://api.github.com/users/catswithbats/gists{/gist_id}", "starred_url": "https://api.github.com/users/catswithbats/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/catswithbats/subscriptions", "organizations_url": "https://api.github.com/users/catswithbats/orgs", "repos_url": "https://api.github.com/users/catswithbats/repos", "events_url": "https://api.github.com/users/catswithbats/events{/privacy}", "received_events_url": "https://api.github.com/users/catswithbats/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-12-25T15:17:14
2023-01-05T12:56:02
null
NONE
null
Done in a VM with a GPU (Ubuntu) following the [Whisper Event - PYTHON](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#python-script) instructions. Attempted using [RuntimeError: he size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 - WEB](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010/10 ) - another person experiencing the same issue. But could not resolve the issue with the google/fleurs data. __Not clear what can be modified in the PY code to resolve the input data size mismatch, as the training data is already very small__. Tried posting on Discord, @sanchit-gandhi and @vaibhavs10. Was hoping that the event is over and some input/help is now available. [Hugging Face - whisper-small-amet](https://huggingface.co/drmeeseeks/whisper-small-amet). The paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) am_et is a low resource language (Table E), with the WER results ranging from 120-229, based on model size. (Whisper small WER=120.2). # ---> Initial Training Output /usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( [INFO|trainer.py:1641] 2022-12-18 05:23:28,799 >> ***** Running training ***** [INFO|trainer.py:1642] 2022-12-18 05:23:28,799 >> Num examples = 446 [INFO|trainer.py:1643] 2022-12-18 05:23:28,799 >> Num Epochs = 72 [INFO|trainer.py:1644] 2022-12-18 05:23:28,799 >> Instantaneous batch size per device = 16 [INFO|trainer.py:1645] 2022-12-18 05:23:28,799 >> Total train batch size (w. 
parallel, distributed & accumulation) = 32 [INFO|trainer.py:1646] 2022-12-18 05:23:28,799 >> Gradient Accumulation steps = 2 [INFO|trainer.py:1647] 2022-12-18 05:23:28,800 >> Total optimization steps = 1000 [INFO|trainer.py:1648] 2022-12-18 05:23:28,801 >> Number of trainable parameters = 241734912 # ---> Error 14% 9/65 [07:07<48:34, 52.04s/it][INFO|configuration_utils.py:523] 2022-12-18 05:03:07,941 >> Generate config GenerationConfig { "begin_suppress_tokens": [ 220, 50257 ], "bos_token_id": 50257, "decoder_start_token_id": 50258, "eos_token_id": 50257, "max_length": 448, "pad_token_id": 50257, "transformers_version": "4.26.0.dev0", "use_cache": false } Traceback (most recent call last): File "run_speech_recognition_seq2seq_streaming.py", line 629, in <module> main() File "run_speech_recognition_seq2seq_streaming.py", line 578, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1534, in train return inner_training_loop( File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1859, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2122, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer_seq2seq.py", line 78, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2818, in evaluate output = eval_loop( File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 3000, in evaluation_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer_seq2seq.py", line 213, in prediction_step outputs = model(**inputs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py", line 1197, in forward outputs = self.model( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py", line 1066, in forward decoder_outputs = self.decoder( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py", line 873, in forward hidden_states = inputs_embeds + positions RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it]
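The 504-vs-448 mismatch typically means some target transcriptions tokenize to more than Whisper's 448 decoder positions, so evaluation with teacher forcing overruns the position embeddings. One commonly suggested mitigation, sketched here on a toy dataset, is to filter out over-long label sequences before training; in the event script the real dataset would be the output of `prepare_dataset`.

```python
from datasets import Dataset

MAX_LABEL_LENGTH = 448  # Whisper decoder context length, per the error above

# Toy stand-in for the mapped dataset; the second example is deliberately too long.
vectorized = Dataset.from_dict({"labels": [[1, 2, 3], list(range(600))]})

def label_fits(example):
    return len(example["labels"]) < MAX_LABEL_LENGTH

filtered = vectorized.filter(label_fits)
print(len(vectorized), "->", len(filtered))  # the 600-token example is dropped
```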
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5391/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5390
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5390/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5390/comments
https://api.github.com/repos/huggingface/datasets/issues/5390/events
https://github.com/huggingface/datasets/issues/5390
1,509,357,553
I_kwDODunzps5Z9vfx
5,390
Error when pushing to the CI hub
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2022-12-23T13:36:37
2022-12-23T20:29:02
2022-12-23T20:29:02
CONTRIBUTOR
null
### Describe the bug Note that it's a special case where the Hub URL is "https://hub-ci.huggingface.co", which does not appear if we do the same on the Hub (https://huggingface.co). The call to `dataset.push_to_hub(` fails: ``` Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.93s/it] Traceback (most recent call last): File "reproduce_hubci.py", line 16, in <module> dataset.push_to_hub(repo_id=repo_id, private=False, token=USER_TOKEN, embed_external_files=True) File "/home/slesage/hf/datasets/src/datasets/arrow_dataset.py", line 5025, in push_to_hub HfApi(endpoint=config.HF_ENDPOINT).upload_file( File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1346, in upload_file raise err File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1337, in upload_file r.raise_for_status() File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_DATASETS_SERVER_USER__/bug-16718047265472/upload/main/README.md ``` ### Steps to reproduce the bug ```python # reproduce.py from datasets import Dataset import time USER = "__DUMMY_DATASETS_SERVER_USER__" USER_TOKEN = "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" dataset = Dataset.from_dict({"a": [1, 2, 3]}) repo_id = f"{USER}/bug-{int(time.time() * 10e3)}" dataset.push_to_hub(repo_id=repo_id, private=False, token=USER_TOKEN, embed_external_files=True) ``` ```bash $ HF_ENDPOINT="https://hub-ci.huggingface.co" python reproduce.py ``` ### Expected behavior No error and the dataset should be uploaded to the Hub with the README file (which generates the error). ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.35 - Python version: 3.9.15 - PyArrow version: 7.0.0 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5390/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5388/comments
https://api.github.com/repos/huggingface/datasets/issues/5388/events
https://github.com/huggingface/datasets/issues/5388
1,509,042,348
I_kwDODunzps5Z8iis
5,388
Getting ValueError while loading a dataset
{ "login": "valmetisrinivas", "id": 51160232, "node_id": "MDQ6VXNlcjUxMTYwMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/51160232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/valmetisrinivas", "html_url": "https://github.com/valmetisrinivas", "followers_url": "https://api.github.com/users/valmetisrinivas/followers", "following_url": "https://api.github.com/users/valmetisrinivas/following{/other_user}", "gists_url": "https://api.github.com/users/valmetisrinivas/gists{/gist_id}", "starred_url": "https://api.github.com/users/valmetisrinivas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/valmetisrinivas/subscriptions", "organizations_url": "https://api.github.com/users/valmetisrinivas/orgs", "repos_url": "https://api.github.com/users/valmetisrinivas/repos", "events_url": "https://api.github.com/users/valmetisrinivas/events{/privacy}", "received_events_url": "https://api.github.com/users/valmetisrinivas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2022-12-23T08:16:43
2022-12-29T08:36:33
2022-12-27T17:59:09
NONE
null
### Describe the bug I am trying to load a dataset using the Hugging Face Datasets `load_dataset` method. I am getting the `ValueError` shown below. Can someone help with this? I am using a Windows laptop and a Google Colab notebook. ``` WARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-12-5b4fdcb8e6d5>](https://localhost:8080/#) in <module> 6 ) 7 ----> 8 next(iter(law_dataset_streamed)) 17 frames [/usr/local/lib/python3.8/dist-packages/fsspec/core.py](https://localhost:8080/#) in get_compression(urlpath, compression) 485 compression = infer_compression(urlpath) 486 if compression is not None and compression not in compr: --> 487 raise ValueError("Compression type %s not supported" % compression) 488 return compression 489 ValueError: Compression type zstd not supported ``` ### Steps to reproduce the bug ``` !pip install zstandard from datasets import load_dataset lds = load_dataset( "json", data_files="https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst", split="train", streaming=True, ) ``` ### Expected behavior I expect an iterable object `lds` to be created. ### Environment info A Windows laptop with a Google Colab notebook
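A quick diagnostic sketch: fsspec only registers `zstd` if the `zstandard` package was importable when `fsspec.compression` was first imported, so on Colab the runtime usually needs a restart after `pip install zstandard`. The check below reads the same registry (`compr`) that raises in the traceback.

```python
import fsspec
from fsspec.compression import compr  # the registry consulted in the traceback above

print("fsspec", fsspec.__version__)
print("zstd registered:", "zstd" in compr)
# If this prints False, install zstandard and restart the runtime before retrying load_dataset.
```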
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5388/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5387/comments
https://api.github.com/repos/huggingface/datasets/issues/5387/events
https://github.com/huggingface/datasets/issues/5387
1,508,740,177
I_kwDODunzps5Z7YxR
5,387
Missing documentation page : improve-performance
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-12-23T01:12:57
2023-01-24T16:33:40
2023-01-24T16:33:40
NONE
null
### Describe the bug Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing. The link is in here : https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory ### Steps to reproduce the bug Access the page and see it's missing. ### Expected behavior Not missing page ### Environment info Doesn't matter
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5387/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5386/comments
https://api.github.com/repos/huggingface/datasets/issues/5386/events
https://github.com/huggingface/datasets/issues/5386
1,508,592,918
I_kwDODunzps5Z600W
5,386
`max_shard_size` in `datasets.push_to_hub()` breaks with large files
{ "login": "salieri", "id": 1086393, "node_id": "MDQ6VXNlcjEwODYzOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salieri", "html_url": "https://github.com/salieri", "followers_url": "https://api.github.com/users/salieri/followers", "following_url": "https://api.github.com/users/salieri/following{/other_user}", "gists_url": "https://api.github.com/users/salieri/gists{/gist_id}", "starred_url": "https://api.github.com/users/salieri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salieri/subscriptions", "organizations_url": "https://api.github.com/users/salieri/orgs", "repos_url": "https://api.github.com/users/salieri/repos", "events_url": "https://api.github.com/users/salieri/events{/privacy}", "received_events_url": "https://api.github.com/users/salieri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-12-22T21:50:58
2022-12-26T23:45:51
2022-12-26T23:45:51
NONE
null
### Describe the bug `max_shard_size` parameter for `datasets.push_to_hub()` works unreliably with large files, generating shard files that are way past the specified limit. In my private dataset, which contains unprocessed images of all sizes (up to `~100MB` per file), I've encountered cases where `max_shard_size='100MB'` results in shard files that are `>2GB` in size. Setting `max_shard_size` to another value, such as `1GB` or `500MB` does not fix this problem. **The real problem is this:** When the shard file size grows too big, the entire dataset breaks because of #4721 and ultimately https://issues.apache.org/jira/browse/ARROW-5030. Since `max_shard_size` does not let one accurately control the size of the shard files, it becomes very easy to build a large dataset without any warnings that it will be broken -- even when you think you are mitigating this problem by setting `max_shard_size`. ``` File " /path/to/sd-test-suite-v1/venv/lib/site-packages/datasets/builder.py", line 1763, in _prepare_split_single for _, table in generator: File " /path/to/sd-test-suite-v1/venv/lib/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs ``` ### Steps to reproduce the bug 1. Clone [example repo](https://github.com/salieri/hf-dataset-shard-size-bug) 2. Follow steps in [README.md](https://github.com/salieri/hf-dataset-shard-size-bug/blob/main/README.md) 3. After uploading the dataset, you will see that the shard file size varies between `30MB` and `200MB` -- way beyond the `max_shard_size='75MB'` limit (example: `train-00003-of-00131...` is `155MB` in [here](https://huggingface.co/datasets/slri/shard-size-test/tree/main/data)) (Note that this example repo does not generate shard files that are so large that they would trigger #4721) ### Expected behavior The shard file size should remain below or equal to `max_shard_size`. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.10.157-139.675.amzn2.aarch64-aarch64-with-glibc2.17 - Python version: 3.7.15 - PyArrow version: 10.0.1 - Pandas version: 1.3.5
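Until the size accounting is fixed, one hedged way to stay clear of the Arrow limit is to shard manually and measure what each shard actually serializes to before uploading anything; a toy sketch (the binary column stands in for the raw images):

```python
import os

from datasets import Dataset

ds = Dataset.from_dict({"blob": [b"\x00" * 1_000_000] * 50})  # toy stand-in for the image data

num_shards = 8  # increase until the largest written shard is comfortably under your limit
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    path = f"/tmp/shard-{index:05d}.parquet"
    shard.to_parquet(path)
    print(path, round(os.path.getsize(path) / 1e6, 1), "MB")
```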
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5386/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5385/comments
https://api.github.com/repos/huggingface/datasets/issues/5385/events
https://github.com/huggingface/datasets/issues/5385
1,508,535,532
I_kwDODunzps5Z6mzs
5,385
Is `fs=` deprecated in `load_from_disk()` as well?
{ "login": "dconathan", "id": 15098095, "node_id": "MDQ6VXNlcjE1MDk4MDk1", "avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dconathan", "html_url": "https://github.com/dconathan", "followers_url": "https://api.github.com/users/dconathan/followers", "following_url": "https://api.github.com/users/dconathan/following{/other_user}", "gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconathan/subscriptions", "organizations_url": "https://api.github.com/users/dconathan/orgs", "repos_url": "https://api.github.com/users/dconathan/repos", "events_url": "https://api.github.com/users/dconathan/events{/privacy}", "received_events_url": "https://api.github.com/users/dconathan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2022-12-22T21:00:45
2023-01-23T10:50:05
2023-01-23T10:50:04
CONTRIBUTOR
null
### Describe the bug The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec: https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340 Is there a reason the same thing shouldn't also apply to `datasets.load.load_from_disk()` as well ? https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/load.py#L1779 ### Steps to reproduce the bug n/a ### Expected behavior n/a ### Environment info n/a
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5385/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5383/comments
https://api.github.com/repos/huggingface/datasets/issues/5383/events
https://github.com/huggingface/datasets/issues/5383
1,507,293,968
I_kwDODunzps5Z13sQ
5,383
IterableDataset missing column_names, differs from Dataset interface
{ "login": "iceboundflame", "id": 933687, "node_id": "MDQ6VXNlcjkzMzY4Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/933687?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iceboundflame", "html_url": "https://github.com/iceboundflame", "followers_url": "https://api.github.com/users/iceboundflame/followers", "following_url": "https://api.github.com/users/iceboundflame/following{/other_user}", "gists_url": "https://api.github.com/users/iceboundflame/gists{/gist_id}", "starred_url": "https://api.github.com/users/iceboundflame/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iceboundflame/subscriptions", "organizations_url": "https://api.github.com/users/iceboundflame/orgs", "repos_url": "https://api.github.com/users/iceboundflame/repos", "events_url": "https://api.github.com/users/iceboundflame/events{/privacy}", "received_events_url": "https://api.github.com/users/iceboundflame/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
3
2022-12-22T05:27:02
2022-12-23T02:57:31
null
NONE
null
### Describe the bug The documentation on [Stream](https://huggingface.co/docs/datasets/v1.18.2/stream.html) seems to imply that IterableDataset behaves just like a Dataset. However, examples like ``` dataset.map(augment_data, batched=True, remove_columns=dataset.column_names, ...) ``` will not work because `.column_names` does not exist on IterableDataset. I cannot find any clear explanation of why this is not available. Is it an oversight? We do have `iterable_ds.features` available. ### Steps to reproduce the bug See above ### Expected behavior Dataset and IterableDataset would be expected to have the same interface, with any differences noted in the documentation. ### Environment info n/a
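A possible interim workaround, sketched under the assumption that the streaming dataset has resolved `features` and that the installed version of `IterableDataset.map` accepts `remove_columns`:

```python
# Derive column names from `features`, which IterableDataset does expose,
# falling back to None when the features are unknown.
column_names = list(dataset.features.keys()) if dataset.features is not None else None
dataset = dataset.map(augment_data, batched=True, remove_columns=column_names)
```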
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5383/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5381/comments
https://api.github.com/repos/huggingface/datasets/issues/5381/events
https://github.com/huggingface/datasets/issues/5381
1,504,498,387
I_kwDODunzps5ZrNLT
5,381
Wrong URL for the_pile dataset
{ "login": "LeoGrin", "id": 45738728, "node_id": "MDQ6VXNlcjQ1NzM4NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/45738728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeoGrin", "html_url": "https://github.com/LeoGrin", "followers_url": "https://api.github.com/users/LeoGrin/followers", "following_url": "https://api.github.com/users/LeoGrin/following{/other_user}", "gists_url": "https://api.github.com/users/LeoGrin/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeoGrin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeoGrin/subscriptions", "organizations_url": "https://api.github.com/users/LeoGrin/orgs", "repos_url": "https://api.github.com/users/LeoGrin/repos", "events_url": "https://api.github.com/users/LeoGrin/events{/privacy}", "received_events_url": "https://api.github.com/users/LeoGrin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-12-20T12:40:14
2022-12-20T14:26:52
null
NONE
null
### Describe the bug When trying to load `the_pile` dataset from the library, I get a `FileNotFound` error. ### Steps to reproduce the bug Steps to reproduce: Run: ``` from datasets import load_dataset dataset = load_dataset("the_pile") ``` I get the output: "name": "FileNotFoundError", "message": "Unable to resolve any data file that matches '['**']' at /storage/store/work/lgrinszt/memorization/the_pile with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']" ### Expected behavior `the_pile` dataset should be downloaded. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.27 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5381/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5380/comments
https://api.github.com/repos/huggingface/datasets/issues/5380/events
https://github.com/huggingface/datasets/issues/5380
1,504,404,043
I_kwDODunzps5Zq2JL
5,380
Improve dataset `.skip()` speed in streaming mode
{ "login": "versae", "id": 173537, "node_id": "MDQ6VXNlcjE3MzUzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/versae", "html_url": "https://github.com/versae", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "organizations_url": "https://api.github.com/users/versae/orgs", "repos_url": "https://api.github.com/users/versae/repos", "events_url": "https://api.github.com/users/versae/events{/privacy}", "received_events_url": "https://api.github.com/users/versae/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
8
2022-12-20T11:25:23
2023-01-17T08:44:56
null
CONTRIBUTOR
null
### Feature request Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to ignore the download of a shard when in streaming mode, which AFAICT should speed up the skipping process. ### Motivation When resuming from a checkpoint after a crashed run, using `dataset.skip()` is very convenient to recover the exact state of the data and to not train again over the same examples (assuming same seed, no shuffling). However, I have noticed that for audio datasets in streaming mode this is very costly in terms of time, as shards need to be downloaded every time before skipping the right number of examples. ### Your contribution I already took a look at the code, but it seems a change like this goes deeper than I am able to manage, as it touches several parts of the library. I could give it a try but might need some guidance on the internals.
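To make the idea concrete, a small sketch of how shard-level counts could be used; the per-shard `num_examples` list is hypothetical and not something `dataset_infos.json` currently stores:

```python
def split_skip(shard_num_examples, n):
    """Return how many leading shards can be skipped entirely and how many
    examples still need to be skipped inside the first shard that is
    actually downloaded."""
    shards_to_skip = 0
    for count in shard_num_examples:
        if n >= count:
            n -= count
            shards_to_skip += 1
        else:
            break
    return shards_to_skip, n

# e.g. four shards of 1000 examples each, skip(2500) -> skip 2 whole shards, then 500 examples
print(split_skip([1000, 1000, 1000, 1000], 2500))  # (2, 500)
```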
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5380/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5380/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5378/comments
https://api.github.com/repos/huggingface/datasets/issues/5378/events
https://github.com/huggingface/datasets/issues/5378
1,503,887,508
I_kwDODunzps5Zo4CU
5,378
The dataset "the_pile", subset "enron_emails" , load_dataset() failure
{ "login": "shaoyuta", "id": 52023469, "node_id": "MDQ6VXNlcjUyMDIzNDY5", "avatar_url": "https://avatars.githubusercontent.com/u/52023469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaoyuta", "html_url": "https://github.com/shaoyuta", "followers_url": "https://api.github.com/users/shaoyuta/followers", "following_url": "https://api.github.com/users/shaoyuta/following{/other_user}", "gists_url": "https://api.github.com/users/shaoyuta/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaoyuta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaoyuta/subscriptions", "organizations_url": "https://api.github.com/users/shaoyuta/orgs", "repos_url": "https://api.github.com/users/shaoyuta/repos", "events_url": "https://api.github.com/users/shaoyuta/events{/privacy}", "received_events_url": "https://api.github.com/users/shaoyuta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-12-20T02:19:13
2022-12-20T07:52:54
2022-12-20T07:52:54
NONE
null
### Describe the bug Running `datasets.load_dataset("the_pile", "enron_emails")` fails: ![image](https://user-images.githubusercontent.com/52023469/208565302-cfab7b89-0b97-4fa6-a5ba-c11b0b629b1a.png) ### Steps to reproduce the bug Run the code below in the Python CLI: >>> import datasets >>> datasets.load_dataset("the_pile","enron_emails") ### Expected behavior The dataset "the_pile", subset "enron_emails", loads successfully. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - PyArrow version: 10.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5378/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5374/comments
https://api.github.com/repos/huggingface/datasets/issues/5374/events
https://github.com/huggingface/datasets/issues/5374
1,501,872,945
I_kwDODunzps5ZhMMx
5,374
Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
7
2022-12-18T11:38:58
2022-12-19T16:33:31
null
CONTRIBUTOR
null
### Describe the bug `streaming_download_manager` seems to disconnect if too many runs access the same underlying dataset 🧐 The code works fine for me if I have ~100 runs in parallel, but disconnects once scaling to 200. Possibly related: - https://github.com/huggingface/datasets/pull/3100 - https://github.com/huggingface/datasets/pull/3050 ### Steps to reproduce the bug Running ```python c4 = datasets.load_dataset("c4", "en", split="train", streaming=True).skip(args.start).take(args.end-args.start) df = pd.DataFrame(c4, index=None) ``` with different start & end arguments on 200 CPUs in parallel yields: ``` WARNING:datasets.load:Using the latest cached version of the module from /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 (last modified on Mon Dec 12 10:45:02 2022) since it couldn't be found locally at c4. WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [1/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [2/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [3/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [4/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [5/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [6/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [7/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [8/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [9/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [10/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [11/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [12/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [13/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [14/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [15/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [16/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [17/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [18/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [19/20] WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. 
Retrying in 5sec [20/20] ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/dec-2022-tasky/inference │ │ _c4.py:68 in <module> │ │ │ │ 65 │ model.eval() │ │ 66 │ │ │ 67 │ c4 = datasets.load_dataset("c4", "en", split="train", streaming=Tru │ │ ❱ 68 │ df = pd.DataFrame(c4, index=None) │ │ 69 │ texts = df["text"].to_list() │ │ 70 │ preds = batch_inference(texts, batch_size=args.batch_size) │ │ 71 │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/site-packages/pandas/core/frame.p │ │ y:684 in __init__ │ │ │ │ 681 │ │ # For data is list-like, or Iterable (will consume into list │ │ 682 │ │ elif is_list_like(data): │ │ 683 │ │ │ if not isinstance(data, (abc.Sequence, ExtensionArray)): │ │ ❱ 684 │ │ │ │ data = list(data) │ │ 685 │ │ │ if len(data) > 0: │ │ 686 │ │ │ │ if is_dataclass(data[0]): │ │ 687 │ │ │ │ │ data = dataclasses_to_dicts(data) │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:751 in __iter__ │ │ │ │ 748 │ │ yield from ex_iterable.shard_data_sources(shard_idx) │ │ 749 │ │ │ 750 │ def __iter__(self): │ │ ❱ 751 │ │ for key, example in self._iter(): │ │ 752 │ │ │ if self.features: │ │ 753 │ │ │ │ # `IterableDataset` automatically fills missing colum │ │ 754 │ │ │ │ # This is done with `_apply_feature_types`. │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:741 in _iter │ │ │ │ 738 │ │ │ ex_iterable = self._ex_iterable.shuffle_data_sources(self │ │ 739 │ │ else: │ │ 740 │ │ │ ex_iterable = self._ex_iterable │ │ ❱ 741 │ │ yield from ex_iterable │ │ 742 │ │ │ 743 │ def _iter_shard(self, shard_idx: int): │ │ 744 │ │ if self._shuffling: │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:617 in __iter__ │ │ │ │ 614 │ │ self.n = n │ │ 615 │ │ │ 616 │ def __iter__(self): │ │ ❱ 617 │ │ yield from islice(self.ex_iterable, self.n) │ │ 618 │ │ │ 619 │ def shuffle_data_sources(self, generator: np.random.Generator) -> │ │ 620 │ │ """Doesn't shuffle the wrapped examples iterable since it wou │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:594 in __iter__ │ │ │ │ 591 │ │ │ 592 │ def __iter__(self): │ │ 593 │ │ #ex_iterator = iter(self.ex_iterable) │ │ ❱ 594 │ │ yield from islice(self.ex_iterable, self.n, None) │ │ 595 │ │ #for _ in range(self.n): │ │ 596 │ │ # next(ex_iterator) │ │ 597 │ │ #yield from islice(ex_iterator, self.n, None) │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/iterable_dataset.py:106 in __iter__ │ │ │ │ 103 │ │ self.kwargs = kwargs │ │ 104 │ │ │ 105 │ def __iter__(self): │ │ ❱ 106 │ │ yield from self.generate_examples_fn(**self.kwargs) │ │ 107 │ │ │ 108 │ def shuffle_data_sources(self, generator: np.random.Generator) -> │ │ 109 │ │ return ShardShuffledExamplesIterable(self.generate_examples_f │ │ │ │ /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/d │ │ f532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01/c4.py:89 in │ │ _generate_examples │ │ │ │ 86 │ │ for filepath in filepaths: │ │ 87 │ │ │ logger.info("generating examples from = %s", filepath) │ │ 88 │ │ │ with gzip.open(open(filepath, "rb"), 
"rt", encoding="utf-8" │ │ ❱ 89 │ │ │ │ for line in f: │ │ 90 │ │ │ │ │ if line: │ │ 91 │ │ │ │ │ │ example = json.loads(line) │ │ 92 │ │ │ │ │ │ yield id_, example │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:313 in read1 │ │ │ │ 310 │ │ │ │ 311 │ │ if size < 0: │ │ 312 │ │ │ size = io.DEFAULT_BUFFER_SIZE │ │ ❱ 313 │ │ return self._buffer.read1(size) │ │ 314 │ │ │ 315 │ def peek(self, n): │ │ 316 │ │ self._check_not_closed() │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/_compression.py:68 in readinto │ │ │ │ 65 │ │ │ 66 │ def readinto(self, b): │ │ 67 │ │ with memoryview(b) as view, view.cast("B") as byte_view: │ │ ❱ 68 │ │ │ data = self.read(len(byte_view)) │ │ 69 │ │ │ byte_view[:len(data)] = data │ │ 70 │ │ return len(data) │ │ 71 │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:493 in read │ │ │ │ 490 │ │ │ │ self._new_member = False │ │ 491 │ │ │ │ │ 492 │ │ │ # Read a chunk of data from the file │ │ ❱ 493 │ │ │ buf = self._fp.read(io.DEFAULT_BUFFER_SIZE) │ │ 494 │ │ │ │ │ 495 │ │ │ uncompress = self._decompressor.decompress(buf, size) │ │ 496 │ │ │ if self._decompressor.unconsumed_tail != b"": │ │ │ │ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:96 in read │ │ │ │ 93 │ │ │ read = self._read │ │ 94 │ │ │ self._read = None │ │ 95 │ │ │ return self._buffer[read:] + \ │ │ ❱ 96 │ │ │ │ self.file.read(size-self._length+read) │ │ 97 │ │ │ 98 │ def prepend(self, prepend=b''): │ │ 99 │ │ if self._read is None: │ │ │ │ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │ │ lib/python3.9/site-packages/datasets/download/streaming_download_manager.py: │ │ 365 in read_with_retries │ │ │ │ 362 │ │ │ │ ) │ │ 363 │ │ │ │ time.sleep(config.STREAMING_READ_RETRY_INTERVAL) │ │ 364 │ │ else: │ │ ❱ 365 │ │ │ raise ConnectionError("Server Disconnected") │ │ 366 │ │ return out │ │ 367 │ │ │ 368 │ file_obj.read = read_with_retries │ ╰──────────────────────────────────────────────────────────────────────────────╯ ConnectionError: Server Disconnected ``` ### Expected behavior There should be no disconnect I think. ### Environment info ``` datasets=2.7.0 Python 3.9.12 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5374/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5371/comments
https://api.github.com/repos/huggingface/datasets/issues/5371/events
https://github.com/huggingface/datasets/issues/5371
1,501,369,036
I_kwDODunzps5ZfRLM
5,371
Add a robustness benchmark dataset for vision
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
1
2022-12-17T12:35:13
2022-12-20T06:21:41
null
MEMBER
null
### Name ImageNet-C ### Paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations ### Data https://github.com/hendrycks/robustness ### Motivation It's a known fact that vision models are brittle when they encounter slightly corrupted or perturbed data. This is closely correlated with the robustness of vision models. Researchers use different benchmark datasets to evaluate the robustness aspects of vision models, and ImageNet-C is one of them. Having this dataset in 🤗 Datasets would allow researchers to evaluate and study the robustness aspects of vision models. Since the metric associated with these evaluations is top-1 accuracy, researchers should be able to easily take advantage of the evaluation benchmarks on the Hub and perform comprehensive reporting. ImageNet-C is a large dataset. Once it's in, it can act as a reference and we can also reach out to the authors of the other robustness benchmark datasets in vision, such as ObjectNet, WILDS, Metashift, etc. These datasets cater to different aspects. For example, ObjectNet is related to assessing how well a model performs under sub-population shifts. Related thread: https://huggingface.slack.com/archives/C036H4A5U8Z/p1669994598060499
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5371/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5371/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5363/comments
https://api.github.com/repos/huggingface/datasets/issues/5363/events
https://github.com/huggingface/datasets/issues/5363
1,498,171,317
I_kwDODunzps5ZTEe1
5,363
Dataset.from_generator() crashes on simple example
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-12-15T10:21:28
2022-12-15T11:51:33
2022-12-15T11:51:33
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5363/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5362/comments
https://api.github.com/repos/huggingface/datasets/issues/5362/events
https://github.com/huggingface/datasets/issues/5362
1,497,643,744
I_kwDODunzps5ZRDrg
5,362
Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' )
{ "login": "shaoyuta", "id": 52023469, "node_id": "MDQ6VXNlcjUyMDIzNDY5", "avatar_url": "https://avatars.githubusercontent.com/u/52023469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaoyuta", "html_url": "https://github.com/shaoyuta", "followers_url": "https://api.github.com/users/shaoyuta/followers", "following_url": "https://api.github.com/users/shaoyuta/following{/other_user}", "gists_url": "https://api.github.com/users/shaoyuta/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaoyuta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaoyuta/subscriptions", "organizations_url": "https://api.github.com/users/shaoyuta/orgs", "repos_url": "https://api.github.com/users/shaoyuta/repos", "events_url": "https://api.github.com/users/shaoyuta/events{/privacy}", "received_events_url": "https://api.github.com/users/shaoyuta/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-12-15T01:23:03
2022-12-15T07:45:54
2022-12-15T07:45:53
NONE
null
### Describe the bug Running the "GPT-J" model with the dataset "the_pile" fails. The failure output is below: ![image](https://user-images.githubusercontent.com/52023469/207750127-118d9896-35f4-4ee9-90d4-d0ab9aae9c74.png) It looks like this is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" being unreachable. ### Steps to reproduce the bug Steps to reproduce this issue: git clone https://github.com/huggingface/transformers cd transformers python examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --dataset_name the_pile --dataset_config_name enron_emails --do_eval --output_dir /tmp/output --overwrite_output_dir ### Expected behavior This issue appears to occur because "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" couldn't be reached. Is there another way to download the dataset "the_pile"? Is there another way to cache the dataset "the_pile" so that it is not downloaded at runtime? ### Environment info huggingface_hub version: 0.11.1 Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 Python version: 3.9.12 Running in iPython ?: No Running in notebook ?: No Running in Google Colab ?: No Token path ?: /home/taosy/.huggingface/token Has saved token ?: False Configured git credential helpers: FastAI: N/A Tensorflow: N/A Torch: N/A Jinja2: N/A Graphviz: N/A Pydot: N/A
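On the caching question, a hedged sketch of the usual pattern (paths are illustrative): prepare the dataset once on a machine where the host is reachable, then reload it from disk without any download:

```python
from datasets import load_dataset, load_from_disk

# Run where http://eaidata.bmk.sh is reachable: download once and save locally.
ds = load_dataset("the_pile", "enron_emails")
ds.save_to_disk("/data/enron_emails")  # illustrative path

# Later, e.g. offline or when the host is down: reload without downloading.
ds = load_from_disk("/data/enron_emails")
```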
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5362/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5361/comments
https://api.github.com/repos/huggingface/datasets/issues/5361/events
https://github.com/huggingface/datasets/issues/5361
1,497,153,889
I_kwDODunzps5ZPMFh
5,361
How concatenate `Audio` elements using batch mapping
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2022-12-14T18:13:55
2022-12-15T10:53:28
null
NONE
null
### Describe the bug I am trying to concatenate audios in a dataset, e.g. `google/fleurs`. ```python print(dataset) # Dataset({ # features: ['path', 'audio'], # num_rows: 24 # }) def mapper_function(batch): # to merge every 3 audios # np.concatenate(audios[i: i+3]) for i in range(0, len(batch), 3) dataset = dataset.map(mapper_function, batched=True, batch_size=24) print(dataset) # Expected output: # Dataset({ # features: ['path', 'audio'], # num_rows: 8 # }) ``` I tried to construct a `result={}` dictionary inside the mapper function, but I found it will not work because the `bytes` field is also needed :(( I'd appreciate it if you share any use cases similar to my problem or any solutions. Thanks! cc: @lhoestq ### Steps to reproduce the bug 1. load audio dataset 2. try to merge every k audios and return them as one ### Expected behavior A merged dataset with fewer rows. If we merge every 3 rows, then `n // 3` examples. ### Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
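One possible shape for the mapper, sketched under the assumptions that every clip shares the same sampling rate and that the installed `datasets` version can re-encode audio returned as `{"array", "sampling_rate"}` dicts:

```python
import numpy as np

def merge_every_3(batch):
    audios = batch["audio"]  # decoded dicts with "array" and "sampling_rate"
    paths = batch["path"]
    merged = {"path": [], "audio": []}
    for i in range(0, len(audios), 3):
        chunk = audios[i : i + 3]
        merged["path"].append(paths[i])
        merged["audio"].append(
            {
                "path": paths[i],
                "array": np.concatenate([a["array"] for a in chunk]),
                "sampling_rate": chunk[0]["sampling_rate"],
            }
        )
    return merged

# dataset = dataset.map(merge_every_3, batched=True, batch_size=24)
```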
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5361/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5360/comments
https://api.github.com/repos/huggingface/datasets/issues/5360/events
https://github.com/huggingface/datasets/issues/5360
1,496,947,177
I_kwDODunzps5ZOZnp
5,360
IterableDataset returns duplicated data using PyTorch DDP
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
6
2022-12-14T16:06:19
2023-01-16T13:33:33
2023-01-16T13:33:33
MEMBER
null
As mentioned in https://github.com/huggingface/datasets/issues/3423, when using PyTorch DDP the dataset ends up with duplicated data. We already check for the PyTorch `worker_info` for single node, but we should also check for `torch.distributed.get_world_size()` and `torch.distributed.get_rank()`
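A rough sketch of the idea (illustrative only, not the actual patch): make the iterable aware of the process group so that each rank consumes a disjoint slice of the stream. The real fix would split at the shard/data-source level; example-level round-robin is just the simplest way to picture it.

```python
from itertools import islice

import torch.distributed as dist

def iter_rank_slice(example_iterable):
    """Yield only the examples belonging to this distributed rank."""
    if dist.is_available() and dist.is_initialized():
        world_size = dist.get_world_size()
        rank = dist.get_rank()
        # rank r keeps examples r, r + world_size, r + 2 * world_size, ...
        yield from islice(example_iterable, rank, None, world_size)
    else:
        yield from example_iterable
```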
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5360/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5354/comments
https://api.github.com/repos/huggingface/datasets/issues/5354/events
https://github.com/huggingface/datasets/issues/5354
1,492,174,125
I_kwDODunzps5Y8MUt
5,354
Consider using "Sequence" instead of "List"
{ "login": "tranhd95", "id": 15568078, "node_id": "MDQ6VXNlcjE1NTY4MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/15568078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tranhd95", "html_url": "https://github.com/tranhd95", "followers_url": "https://api.github.com/users/tranhd95/followers", "following_url": "https://api.github.com/users/tranhd95/following{/other_user}", "gists_url": "https://api.github.com/users/tranhd95/gists{/gist_id}", "starred_url": "https://api.github.com/users/tranhd95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tranhd95/subscriptions", "organizations_url": "https://api.github.com/users/tranhd95/orgs", "repos_url": "https://api.github.com/users/tranhd95/repos", "events_url": "https://api.github.com/users/tranhd95/events{/privacy}", "received_events_url": "https://api.github.com/users/tranhd95/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
6
2022-12-12T15:39:45
2023-02-01T13:55:13
null
NONE
null
### Feature request Hi, please consider using the `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). The current annotation leads to type-checking errors; see below. **How to reproduce** ```py list_of_filenames = ["foo.parquet", "bar.parquet"] ds = Dataset.from_parquet(list_of_filenames) ``` **Expected mypy output:** ``` Success: no issues found ``` **Actual mypy output:** ```py test.py:19: error: Argument 1 to "from_parquet" of "Dataset" has incompatible type "List[str]"; expected "Union[Union[str, bytes, PathLike[Any]], List[Union[str, bytes, PathLike[Any]]]]" [arg-type] test.py:19: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance test.py:19: note: Consider using "Sequence" instead, which is covariant ``` **Env:** mypy 0.991, Python 3.10.0, datasets 2.7.1
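For illustration, a simplified before/after of the kind of signature change being requested (not the actual `datasets` source; the function names here are made up):

```python
from os import PathLike
from typing import List, Sequence, Union

PathLikeT = Union[str, bytes, PathLike]

# Current style: List is invariant, so a List[str] argument is rejected by mypy.
def from_parquet_current(path_or_paths: Union[PathLikeT, List[PathLikeT]]) -> None: ...

# Requested style: Sequence is covariant, so List[str], tuple[str, ...], etc. are accepted.
def from_parquet_proposed(path_or_paths: Union[PathLikeT, Sequence[PathLikeT]]) -> None: ...
```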
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5354/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5353/comments
https://api.github.com/repos/huggingface/datasets/issues/5353/events
https://github.com/huggingface/datasets/issues/5353
1,491,880,500
I_kwDODunzps5Y7Eo0
5,353
Support remote file systems for `Audio`
{ "login": "OllieBroadhurst", "id": 46894149, "node_id": "MDQ6VXNlcjQ2ODk0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/46894149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OllieBroadhurst", "html_url": "https://github.com/OllieBroadhurst", "followers_url": "https://api.github.com/users/OllieBroadhurst/followers", "following_url": "https://api.github.com/users/OllieBroadhurst/following{/other_user}", "gists_url": "https://api.github.com/users/OllieBroadhurst/gists{/gist_id}", "starred_url": "https://api.github.com/users/OllieBroadhurst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OllieBroadhurst/subscriptions", "organizations_url": "https://api.github.com/users/OllieBroadhurst/orgs", "repos_url": "https://api.github.com/users/OllieBroadhurst/repos", "events_url": "https://api.github.com/users/OllieBroadhurst/events{/privacy}", "received_events_url": "https://api.github.com/users/OllieBroadhurst/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2022-12-12T13:22:13
2022-12-12T13:37:14
2022-12-12T13:37:14
NONE
null
### Feature request Hi there! It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system. ### Motivation Large amounts of data are often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage but to my knowledge actually copies the datasets across first, so if you're working off a system with smaller disk specs (like a VM), you can run out of space very quickly. ### Your contribution Something like this (for Google Cloud Platform in this instance): ```python from datasets import Dataset, Audio import gcsfs fs = gcsfs.GCSFileSystem() list_of_audio_fp = {'audio': ['1', '2', '3']} ds = Dataset.from_dict(list_of_audio_fp) ds = ds.cast_column("audio", Audio(sampling_rate=16000, fs=fs)) ``` Under the hood: ```python import librosa from io import BytesIO def load_audio(fp, sampling_rate=None, fs=None): if fs is not None: with fs.open(fp, 'rb') as f: arr, sr = librosa.load(BytesIO(f.read()), sr=sampling_rate) return arr, sr else: # Perform existing io operations ``` Written from memory so some things could be wrong.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5353/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5352/comments
https://api.github.com/repos/huggingface/datasets/issues/5352/events
https://github.com/huggingface/datasets/issues/5352
1,490,796,414
I_kwDODunzps5Y279-
5,352
__init__() got an unexpected keyword argument 'input_size'
{ "login": "J-shel", "id": 82662111, "node_id": "MDQ6VXNlcjgyNjYyMTEx", "avatar_url": "https://avatars.githubusercontent.com/u/82662111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/J-shel", "html_url": "https://github.com/J-shel", "followers_url": "https://api.github.com/users/J-shel/followers", "following_url": "https://api.github.com/users/J-shel/following{/other_user}", "gists_url": "https://api.github.com/users/J-shel/gists{/gist_id}", "starred_url": "https://api.github.com/users/J-shel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/J-shel/subscriptions", "organizations_url": "https://api.github.com/users/J-shel/orgs", "repos_url": "https://api.github.com/users/J-shel/repos", "events_url": "https://api.github.com/users/J-shel/events{/privacy}", "received_events_url": "https://api.github.com/users/J-shel/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-12-12T02:52:03
2022-12-19T01:38:48
null
NONE
null
### Describe the bug I am trying to define a custom configuration with an `input_size` attribute, following the instructions in "Specifying several dataset configurations" at https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html. But when I load the dataset, I get the error "__init__() got an unexpected keyword argument 'input_size'". ### Steps to reproduce the bug Following is the code to define the dataset: class CsvConfig(datasets.BuilderConfig): """BuilderConfig for CSV.""" input_size: int = 2048 class MRF(datasets.ArrowBasedBuilder): """Archival MRF data""" BUILDER_CONFIG_CLASS = CsvConfig VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ CsvConfig(name="default", version=VERSION, description="MRF data", input_size=2048), ] ... def _generate_examples(self): input_size = self.config.input_size if input_size > 1000: numin = 10000 else: numin = 15000 Below is the code to load the dataset: reader = load_dataset("default", input_size=1024) ### Expected behavior I want to pass the `input_size` parameter to the MRF dataset and be able to change it to any value when loading the dataset. ### Environment info - `datasets` version: 2.5.1 - Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
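For comparison, a hedged sketch of the pattern that usually avoids this error, assuming the goal is simply to accept a custom `input_size` kwarg (the builder body and the script path are placeholders):

```python
import datasets

class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig with a custom input_size kwarg."""

    def __init__(self, input_size: int = 2048, **kwargs):
        # Forward the standard kwargs (name, version, description, ...) to the
        # base class and keep the custom one on the instance.
        super().__init__(**kwargs)
        self.input_size = input_size

class MRF(datasets.ArrowBasedBuilder):
    BUILDER_CONFIG_CLASS = CsvConfig
    BUILDER_CONFIGS = [
        CsvConfig(name="default", version=datasets.Version("1.0.0"), description="MRF data", input_size=2048),
    ]
    # _info / _split_generators / _generate_tables omitted here.

# reader = load_dataset("path/to/mrf_script.py", "default", input_size=1024)  # illustrative path
```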
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5352/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5351/comments
https://api.github.com/repos/huggingface/datasets/issues/5351/events
https://github.com/huggingface/datasets/issues/5351
1,490,659,504
I_kwDODunzps5Y2aiw
5,351
Do we need to implement `_prepare_split`?
{ "login": "jmwoloso", "id": 7530947, "node_id": "MDQ6VXNlcjc1MzA5NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmwoloso", "html_url": "https://github.com/jmwoloso", "followers_url": "https://api.github.com/users/jmwoloso/followers", "following_url": "https://api.github.com/users/jmwoloso/following{/other_user}", "gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions", "organizations_url": "https://api.github.com/users/jmwoloso/orgs", "repos_url": "https://api.github.com/users/jmwoloso/repos", "events_url": "https://api.github.com/users/jmwoloso/events{/privacy}", "received_events_url": "https://api.github.com/users/jmwoloso/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
11
2022-12-12T01:38:54
2022-12-20T18:20:57
2022-12-12T16:48:56
NONE
null
### Describe the bug I'm not sure if this is a bug, if it's just missing from the documentation, or if I'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because the `_prepare_split` method is abstract on the `DatasetBuilder` class (as are the others we are required to implement, hence my question): ``` Traceback (most recent call last): File "/home/jason/source/python/prism_machine_learning/examples/create_hf_datasets.py", line 28, in <module> dataset_builder.download_and_prepare() File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split raise NotImplementedError() NotImplementedError ``` ### Steps to reproduce the bug I will share my implementation if it turns out that everything should be working (i.e. we only need to implement those 3 methods the docs mention), but I don't want to distract from the original question. ### Expected behavior I just need to know whether there are additional methods we need to implement when subclassing `DatasetBuilder` besides what the documentation specifies -> `_info`, `_split_generators` and `_generate_examples`. ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
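For reference, a minimal sketch of the common approach: subclass `GeneratorBasedBuilder`, which already provides `_prepare_split`, so only the three documented methods need to be written. The feature schema and file path below are placeholders.

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": "train.txt"},  # placeholder path
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```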
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5351/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5348/comments
https://api.github.com/repos/huggingface/datasets/issues/5348/events
https://github.com/huggingface/datasets/issues/5348
1,486,975,626
I_kwDODunzps5YoXKK
5,348
The data downloaded in the download folder of the cache does not respect `umask`
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-12-09T15:46:27
2022-12-09T17:21:26
null
NONE
null
### Describe the bug For a project on a cluster we are several users to share the same cache for the datasets library. And we have a problem with the permissions on the data downloaded in the cache. Indeed, it seems that the data is downloaded by giving read and write permissions only to the user launching the command (and no permissions to the group). In our case, those permissions don't respect the `umask` of this user, which was `0007`. Traceback: ``` Using custom data configuration default Downloading and preparing dataset text_caps/default to /gpfswork/rech/cnw/commun/datasets/HuggingFaceM4___text_caps/default/1.0.0/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141... Downloading data files: 100%|████████████████████| 3/3 [00:00<00:00, 921.62it/s] --------------------------------------------------------------------------- PermissionError Traceback (most recent call last) Cell In [3], line 1 ----> 1 ds = load_dataset(dataset_name) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1745 # Download and prepare data -> 1746 builder_instance.download_and_prepare( 1747 download_config=download_config, 1748 download_mode=download_mode, 1749 ignore_verifications=ignore_verifications, 1750 try_from_hf_gcs=try_from_hf_gcs, 1751 use_auth_token=use_auth_token, 1752 ) 1754 # Build dataset for splits 1755 keep_in_memory = ( 1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1757 ) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. 
Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos) 1226 def _download_and_prepare(self, dl_manager, verify_infos): -> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 773 # Checksums verification 774 if verify_infos and dl_manager.record_checksums: File /gpfswork/rech/cnw/commun/modules/datasets_modules/datasets/HuggingFaceM4--TextCaps/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141/TextCaps.py:125, in TextCapsDataset._split_generators(self, dl_manager) 123 def _split_generators(self, dl_manager): 124 # urls = _URLS[self.config.name] # TODO later --> 125 data_dir = dl_manager.download_and_extract(_URLS) 126 gen_kwargs = { 127 split_name: { 128 f"{dir_name}_path": Path(data_dir[dir_name][split_name]) (...) 133 for split_name in ["train", "val", "test"] 134 } 136 for split_name in ["train", "val", "test"]: File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls) 415 def download_and_extract(self, url_or_urls): 416 """Download and extract given url_or_urls. 417 418 Is roughly equivalent to: (...) 429 extracted_path(s): `str`, extracted paths of given URL(s). 
430 """ --> 431 return self.extract(self.download(url_or_urls)) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:324, in DownloadManager.download(self, url_or_urls) 321 self.downloaded_paths.update(dict(zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten()))) 323 start_time = datetime.now() --> 324 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) 325 duration = datetime.now() - start_time 326 logger.info(f"Checksum Computation took {duration.total_seconds() // 60} min") File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:229, in DownloadManager._record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths) 226 """Record size/checksum of downloaded files.""" 227 for url, path in zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten()): 228 # call str to support PathLike objects --> 229 self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict( 230 path, record_checksum=self.record_checksums 231 ) File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/utils/info_utils.py:82, in get_size_checksum_dict(path, record_checksum) 80 if record_checksum: 81 m = sha256() ---> 82 with open(path, "rb") as f: 83 for chunk in iter(lambda: f.read(1 << 20), b""): 84 m.update(chunk) PermissionError: [Errno 13] Permission denied: '/gpfswork/rech/cnw/commun/datasets/downloads/1e6aa6d23190c30885194fabb193dce3874d902d7636b66315ee8aaa584e80d6' ``` ### Steps to reproduce the bug I think the following will reproduce the bug. Given 2 users belonging to the same group with `umask` set to `0007` - first run with User 1: ```python from datasets import load_dataset ds_name = "HuggingFaceM4/VQAv2" ds = load_dataset(ds_name) ``` - then run with User 2: ```python from datasets import load_dataset ds_name = "HuggingFaceM4/TextCaps" ds = load_dataset(ds_name) ``` ### Expected behavior No `PermissionError` ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5348/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5348/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5346/comments
https://api.github.com/repos/huggingface/datasets/issues/5346/events
https://github.com/huggingface/datasets/issues/5346
1,486,884,983
I_kwDODunzps5YoBB3
5,346
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-12-09T14:48:02
2023-01-25T19:35:41
2023-01-25T19:35:40
MEMBER
null
Thanks to all of you, Datasets is just about to pass 15k stars! Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face ecosystem. There were 25 new releases of `transformers`, 21 new releases of `datasets`, 13 new releases of `accelerate`. If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://docs.google.com/forms/d/e/1FAIpQLSf4xFQKtpjr6I_l7OfNofqiR8s-WG6tcNbkchDJJf5gYD72zQ/viewform?usp=sf_link) (please reply in the above feedback form rather than to this thread) Thank you all on behalf of the HuggingFace team! 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5346/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5346/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5345/comments
https://api.github.com/repos/huggingface/datasets/issues/5345/events
https://github.com/huggingface/datasets/issues/5345
1,486,555,384
I_kwDODunzps5Ymwj4
5,345
Wrong dtype for array in audio features
{ "login": "qmeeus", "id": 25608944, "node_id": "MDQ6VXNlcjI1NjA4OTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qmeeus", "html_url": "https://github.com/qmeeus", "followers_url": "https://api.github.com/users/qmeeus/followers", "following_url": "https://api.github.com/users/qmeeus/following{/other_user}", "gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}", "starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions", "organizations_url": "https://api.github.com/users/qmeeus/orgs", "repos_url": "https://api.github.com/users/qmeeus/repos", "events_url": "https://api.github.com/users/qmeeus/events{/privacy}", "received_events_url": "https://api.github.com/users/qmeeus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-12-09T11:05:11
2022-12-16T13:44:46
null
NONE
null
### Describe the bug When concatenating/interleaving different datasets, I stumble into an error because the features can't be aligned. After some investigation, I understood that the audio arrays had different dtypes, namely `float32` and `float64`. Consequently, the datasets cannot be merged. ### Steps to reproduce the bug For example, for `facebook/voxpopuli` and `mozilla-foundation/common_voice_11_0`: ``` from datasets import load_dataset, interleave_datasets covost = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True) voxpopuli = datasets.load_dataset("facebook/voxpopuli", "nl", split="train", streaming=True) sample_cv, = covost.take(1) sample_vp, = voxpopuli.take(1) assert sample_cv["audio"]["array"].dtype == sample_vp["audio"]["array"].dtype # Fails dataset = interleave_datasets([covost, voxpopuli]) # ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'language': Value(dtype='int64', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'normalized_text': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'speaker_id': Value(dtype='string', id=None), 'is_gold_transcript': Value(dtype='bool', id=None), 'accent': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null"). ``` ### Expected behavior The audio should be loaded to arrays with a unique dtype (I guess `float32`) ### Environment info ``` - `datasets` version: 2.7.1.dev0 - Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 ```
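A possible workaround sketch, not verified on these exact datasets: cast the `audio` column of both streams to the same `Audio` feature before interleaving, so the decoded arrays and feature types line up. This assumes `IterableDataset.cast_column` covers the decode path involved here.

```python
# Workaround sketch (assumed, not verified): align the audio feature of both
# streaming datasets before interleaving them.
from datasets import Audio, interleave_datasets, load_dataset

covost = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True)
voxpopuli = load_dataset("facebook/voxpopuli", "nl", split="train", streaming=True)

# Cast both "audio" columns to the same Audio feature and sampling rate.
covost = covost.cast_column("audio", Audio(sampling_rate=16000))
voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16000))

dataset = interleave_datasets([covost, voxpopuli])
```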
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5345/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5343/comments
https://api.github.com/repos/huggingface/datasets/issues/5343/events
https://github.com/huggingface/datasets/issues/5343
1,485,297,823
I_kwDODunzps5Yh9if
5,343
T5 for Q&A produces truncated sentence
{ "login": "junyongyou", "id": 13484072, "node_id": "MDQ6VXNlcjEzNDg0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/13484072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junyongyou", "html_url": "https://github.com/junyongyou", "followers_url": "https://api.github.com/users/junyongyou/followers", "following_url": "https://api.github.com/users/junyongyou/following{/other_user}", "gists_url": "https://api.github.com/users/junyongyou/gists{/gist_id}", "starred_url": "https://api.github.com/users/junyongyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junyongyou/subscriptions", "organizations_url": "https://api.github.com/users/junyongyou/orgs", "repos_url": "https://api.github.com/users/junyongyou/repos", "events_url": "https://api.github.com/users/junyongyou/events{/privacy}", "received_events_url": "https://api.github.com/users/junyongyou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-12-08T19:48:46
2022-12-08T19:57:17
2022-12-08T19:57:17
NONE
null
Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions. For example, I set both the max_length, max_input_length, max_output_length to 128. How to deal with those long answers? I just left them as is and the T5Tokenizer can automatically handle. I would assume the tokenizer just truncates an answer at the position of 128th word (or 127th). Is it possible that I manually split an answer into different parts, each part has 128 words; and then all these sub-answers serve as a separate answer to the same question? Another question is that I get incomplete (truncated) answers when using the fine-tuned model in inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add at the end of texts when fine-tuning T5. I followed that but then got a warning message that duplicated were found. I am assuming that this is because the tokenizer truncates an answer text, thus is missing in the truncated answer, such that the end token is not produced in predicted answer. However, I am not sure. Can anybody point out how to address this issue? Any suggestions are highly appreciated. Below is some code snippet. ` import pytorch_lightning as pl from torch.utils.data import DataLoader import torch import numpy as np import time from pathlib import Path from transformers import ( Adafactor, T5ForConditionalGeneration, T5Tokenizer, get_linear_schedule_with_warmup ) from torch.utils.data import RandomSampler from question_answering.utils import * class T5FineTuner(pl.LightningModule): def __init__(self, hyparams): super(T5FineTuner, self).__init__() self.hyparams = hyparams self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path) self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path) if self.hyparams.freeze_embeds: self.freeze_embeds() if self.hyparams.freeze_encoder: self.freeze_params(self.model.get_encoder()) # assert_all_frozen() self.step_count = 0 self.output_dir = Path(self.hyparams.output_dir) n_observations_per_split = { 'train': self.hyparams.n_train, 'validation': self.hyparams.n_val, 'test': self.hyparams.n_test } self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()} self.em_score_list = [] self.subset_score_list = [] data_folder = r'C:\Datasets\MedQuAD-master' self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder) def freeze_params(self, model): for param in model.parameters(): param.requires_grad = False def freeze_embeds(self): try: self.freeze_params(self.model.model.shared) for d in [self.model.model.encoder, self.model.model.decoder]: self.freeze_params(d.embed_positions) self.freeze_params(d.embed_tokens) except AttributeError: self.freeze_params(self.model.shared) for d in [self.model.encoder, self.model.decoder]: self.freeze_params(d.embed_tokens) def lmap(self, f, x): return list(map(f, x)) def is_logger(self): return self.trainer.proc_rank <= 0 def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None): return self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, 
labels=labels ) def _step(self, batch): labels = batch['target_ids'] labels[labels[:, :] == self.tokenizer.pad_token_id] = -100 outputs = self( input_ids = batch['source_ids'], attention_mask=batch['source_mask'], labels=labels, decoder_attention_mask=batch['target_mask'] ) loss = outputs[0] return loss def ids_to_clean_text(self, generated_ids): gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) return self.lmap(str.strip, gen_text) def _generative_step(self, batch): t0 = time.time() generated_ids = self.model.generate( batch["source_ids"], attention_mask=batch["source_mask"], use_cache=True, decoder_attention_mask=batch['target_mask'], max_length=128, num_beams=2, early_stopping=True ) preds = self.ids_to_clean_text(generated_ids) targets = self.ids_to_clean_text(batch["target_ids"]) gen_time = (time.time() - t0) / batch["source_ids"].shape[0] loss = self._step(batch) base_metrics = {'val_loss': loss} summ_len = np.mean(self.lmap(len, generated_ids)) base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets) em_score, subset_match_score = calculate_scores(preds, targets) self.em_score_list.append(em_score) self.subset_score_list.append(subset_match_score) em_score = torch.tensor(em_score, dtype=torch.float32) subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32) base_metrics.update(em_score=em_score, subset_match_score=subset_match_score) # rouge_results = self.rouge_metric.compute() # rouge_dict = self.parse_score(rouge_results) return base_metrics def training_step(self, batch, batch_idx): loss = self._step(batch) tensorboard_logs = {'train_loss': loss} return {'loss': loss, 'log': tensorboard_logs} def training_epoch_end(self, outputs): avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean() tensorboard_logs = {'avg_train_loss': avg_train_loss} # return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs} def validation_step(self, batch, batch_idx): return self._generative_step(batch) def validation_epoch_end(self, outputs): avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() tensorboard_logs = {'val_loss': avg_loss} if len(self.em_score_list) <= 2: average_em_score = sum(self.em_score_list) / len(self.em_score_list) average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list) else: latest_em_score = self.em_score_list[:-2] latest_subset_score = self.subset_score_list[:-2] average_em_score = sum(latest_em_score) / len(latest_em_score) average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score) average_em_score = torch.tensor(average_em_score, dtype=torch.float32) average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32) tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score) self.target_gen = [] self.prediction_gen = [] return { 'avg_val_loss': avg_loss, 'em_score': average_em_score, 'subset_match_socre': average_subset_match_score, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs } def configure_optimizers(self): model = self.model no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": self.hyparams.weight_decay, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = 
Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate, scale_parameter=False, relative_step=False) self.opt = optimizer return [optimizer] def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False): optimizer.step(closure=optimizer_closure) optimizer.zero_grad() self.lr_scheduler.step() def get_tqdm_dict(self): tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]} return tqdm_dict def train_dataloader(self): n_samples = self.n_obs['train'] train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data, num_samples=n_samples, args=self.hyparams) sampler = RandomSampler(train_dataset) dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=self.hyparams.train_batch_size, drop_last=True, num_workers=4) # t_total = ( # (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu))) # // self.hyparams.gradient_accumulation_steps # * float(self.hyparams.num_train_epochs) # ) t_total = 100000 scheduler = get_linear_schedule_with_warmup( self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total ) self.lr_scheduler = scheduler return dataloader def val_dataloader(self): n_samples = self.n_obs['validation'] validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data, num_samples=n_samples, args=self.hyparams) sampler = RandomSampler(validation_dataset) return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size, sampler=sampler, num_workers=4) def test_dataloader(self): n_samples = self.n_obs['test'] test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data, num_samples=n_samples, args=self.hyparams) return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4) def on_save_checkpoint(self, checkpoint): save_path = self.output_dir.joinpath("best_tfmr") self.model.config.save_step = self.step_count self.model.save_pretrained(save_path) self.tokenizer.save_pretrained(save_path) import os import argparse import pytorch_lightning as pl from question_answering.t5_closed_book import T5FineTuner if __name__ == '__main__': args_dict = dict( output_dir="", # path to save the checkpoints model_name_or_path='t5-large', tokenizer_name_or_path='t5-large', max_input_length=128, max_output_length=128, freeze_encoder=False, freeze_embeds=False, learning_rate=1e-5, weight_decay=0.0, adam_epsilon=1e-8, warmup_steps=0, train_batch_size=4, eval_batch_size=4, num_train_epochs=2, gradient_accumulation_steps=10, n_gpu=1, resume_from_checkpoint=None, val_check_interval=0.5, n_val=4000, n_train=-1, n_test=-1, early_stop_callback=False, fp_16=False, opt_level='O1', max_grad_norm=1.0, seed=101, ) args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100, 'train_batch_size': 16, 'eval_batch_size': 16, 'learning_rate': 1e-3}) args = argparse.Namespace(**args_dict) checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score", mode="max", save_top_k=1) ## If resuming from checkpoint, add an arg resume_from_checkpoint train_params = dict( accumulate_grad_batches=args.gradient_accumulation_steps, gpus=args.n_gpu, max_epochs=args.num_train_epochs, # early_stop_callback=False, precision=16 if args.fp_16 else 32, # amp_level=args.opt_level, # resume_from_checkpoint=args.resume_from_checkpoint, gradient_clip_val=args.max_grad_norm, 
checkpoint_callback=checkpoint_callback, val_check_interval=args.val_check_interval, # accelerator='dp' # logger=wandb_logger, # callbacks=[LoggingCallback()], ) model = T5FineTuner(args) trainer = pl.Trainer(**train_params) trainer.fit(model) `
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5343/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5342/comments
https://api.github.com/repos/huggingface/datasets/issues/5342/events
https://github.com/huggingface/datasets/issues/5342
1,485,244,178
I_kwDODunzps5YhwcS
5,342
Emotion dataset cannot be downloaded
{ "login": "cbarond", "id": 78887193, "node_id": "MDQ6VXNlcjc4ODg3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/78887193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cbarond", "html_url": "https://github.com/cbarond", "followers_url": "https://api.github.com/users/cbarond/followers", "following_url": "https://api.github.com/users/cbarond/following{/other_user}", "gists_url": "https://api.github.com/users/cbarond/gists{/gist_id}", "starred_url": "https://api.github.com/users/cbarond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cbarond/subscriptions", "organizations_url": "https://api.github.com/users/cbarond/orgs", "repos_url": "https://api.github.com/users/cbarond/repos", "events_url": "https://api.github.com/users/cbarond/events{/privacy}", "received_events_url": "https://api.github.com/users/cbarond/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
7
2022-12-08T19:07:09
2023-01-02T12:05:37
2022-12-09T10:46:11
NONE
null
### Describe the bug The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`. It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022). ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("emotion") ``` ### Expected behavior The dataset should load properly. ### Environment info - `datasets` version: 2.7.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.13 - PyArrow version: 10.0.1 - Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5342/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5338/comments
https://api.github.com/repos/huggingface/datasets/issues/5338/events
https://github.com/huggingface/datasets/issues/5338
1,482,646,151
I_kwDODunzps5YX2KH
5,338
`map()` stops every 1000 steps
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-12-07T19:09:40
2022-12-10T00:39:29
2022-12-10T00:39:28
NONE
null
### Describe the bug I am passing the following `prepare_dataset` function to `Dataset.map` (code is inspired by [here](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py#L454)) ```python3 def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch[text_column]).input_ids return batch ... train_ds = train_ds.map(prepare_dataset) ``` Here is the exact code I am running https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets/blob/main/train.py#L70-L71 It starts using all the cores (I am not sure why, because I did not pass `num_proc`), then the progress bar stops at every 1k steps (it drops to a single core), then it goes back to using all the cores again. Link to a [screen recording](https://youtu.be/jPQpQQGp6Gc). Can someone explain this process and maybe provide a way to improve this pipeline? cc: @lhoestq ### Steps to reproduce the bug 1. load the dataset 2. create a Whisper processor 3. create a `prepare_dataset` function 4. pass the function to `dataset.map(prepare_dataset)` ### Expected behavior - Use a single core per function - not stop at some point? ### Environment info - `datasets` version: 2.7.1.dev0 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
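The 1k-step pauses line up with `Dataset.map`'s `writer_batch_size`, which defaults to 1000: every 1000 processed examples the buffered rows are flushed to the Arrow cache file, and processing looks stalled while that happens; the multi-core bursts most likely come from the feature extractor's own internal parallelism rather than from `datasets` itself. A hedged sketch of knobs one might try follows; the values are illustrative, not a tested recipe.

```python
# Illustrative knobs only; the right values depend on the machine and the data.
train_ds = train_ds.map(
    prepare_dataset,
    writer_batch_size=500,  # flush the Arrow writer more often, in smaller chunks
    num_proc=4,             # make the parallelism explicit instead of relying on library threads
    remove_columns=train_ds.column_names,  # keep only the new columns (input_features, labels)
)
```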
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5338/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5338/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5337/comments
https://api.github.com/repos/huggingface/datasets/issues/5337/events
https://github.com/huggingface/datasets/issues/5337
1,481,692,156
I_kwDODunzps5YUNP8
5,337
Support webdataset format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-12-07T11:32:25
2023-01-04T20:35:31
null
MEMBER
null
Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234. In particular, it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset on the Hugging Face Hub). Some datasets on the Hub are already in webdataset format. In terms of implementation, we can have something similar to the Parquet loader. I also think it's fine to have webdataset as an optional dependency.
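For context, this is roughly how the `webdataset` package itself iterates over a sharded tar dataset today; it is only a sketch of the existing library, not the future `datasets` API, and the shard URL is a placeholder.

```python
# Sketch using the existing webdataset package, not the proposed datasets integration.
import webdataset as wds

url = "https://example.com/shards/dataset-{000000..000009}.tar"  # placeholder shard pattern

ds = (
    wds.WebDataset(url)
    .decode("pil")            # decode image bytes into PIL images
    .to_tuple("jpg", "json")  # pick the image and its metadata from each sample
)

for image, meta in ds:
    pass  # training loop would go here
```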
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5337/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5332/comments
https://api.github.com/repos/huggingface/datasets/issues/5332/events
https://github.com/huggingface/datasets/issues/5332
1,476,513,072
I_kwDODunzps5YAc0w
5,332
Passing numpy array to ClassLabel names causes ValueError
{ "login": "freddyheppell", "id": 1475568, "node_id": "MDQ6VXNlcjE0NzU1Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1475568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/freddyheppell", "html_url": "https://github.com/freddyheppell", "followers_url": "https://api.github.com/users/freddyheppell/followers", "following_url": "https://api.github.com/users/freddyheppell/following{/other_user}", "gists_url": "https://api.github.com/users/freddyheppell/gists{/gist_id}", "starred_url": "https://api.github.com/users/freddyheppell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freddyheppell/subscriptions", "organizations_url": "https://api.github.com/users/freddyheppell/orgs", "repos_url": "https://api.github.com/users/freddyheppell/repos", "events_url": "https://api.github.com/users/freddyheppell/events{/privacy}", "received_events_url": "https://api.github.com/users/freddyheppell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2022-12-05T12:59:03
2022-12-22T16:32:50
2022-12-22T16:32:50
CONTRIBUTOR
null
### Describe the bug If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error. ### Steps to reproduce the bug https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX TLDR: If I define my classes as: ``` my_classes = np.array(['one', 'two', 'three']) ``` Then this errors: ```py features = Features({'value': Value('string'), 'label': ClassLabel(names=my_classes)}) dataset = Dataset.from_list(my_data, features=features) ``` ``` ValueError Traceback (most recent call last) [<ipython-input-8-a8a9d53ec82f>](https://localhost:8080/#) in <module> ----> 1 dataset = Dataset.from_list(my_data, features=features) 11 frames [/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _asdict_inner(obj) 183 for f in fields(obj): 184 value = _asdict_inner(getattr(obj, f.name)) --> 185 if not f.init or value != f.default or f.metadata.get("include_in_asdict_even_if_is_default", False): 186 result[f.name] = value 187 return result ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` But this works: ``` features2 = Features({'value': Value('string'), 'label': ClassLabel(names=list(my_classes))}) dataset2 = Dataset.from_list(my_data, features=features2) ``` ### Expected behavior If I provide a numpy array of class names, I would expect either an error that the names list is the wrong type, or for it to be cast internally. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 Additionally: - Numpy version: 1.23.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5332/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5326/comments
https://api.github.com/repos/huggingface/datasets/issues/5326/events
https://github.com/huggingface/datasets/issues/5326
1,471,634,168
I_kwDODunzps5Xt1r4
5,326
No documentation for main branch is built
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-12-01T16:50:58
2022-12-02T16:26:01
2022-12-02T16:26:01
MEMBER
null
Since: - #5250 - Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6 the docs for the main branch are no longer built. The change introduced there only triggers the docs build for releases.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5326/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5325/comments
https://api.github.com/repos/huggingface/datasets/issues/5325/events
https://github.com/huggingface/datasets/issues/5325
1,471,536,822
I_kwDODunzps5Xtd62
5,325
map(...batch_size=None) for IterableDataset
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
5
2022-12-01T15:43:42
2022-12-07T15:54:43
2022-12-07T15:54:42
CONTRIBUTOR
null
### Feature request Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too. ### Motivation Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice. One is that load_dataset(...) can return either IterableDataset or Dataset. mypy will then complain if batch_size=None even if we know it is Dataset. Of course we can do: assert isinstance(d, datasets.DatasetDict) But it is a mild inconvenience. What's more annoying is that whenever we use something like e.g. `combine_datasets(...)`, we end up with the union again, and so have to do the assert again. Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset. For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this. ### Your contribution Not this time.
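A small sketch of the type-narrowing dance described above; the dataset name is a placeholder.

```python
# Sketch only: narrow load_dataset's union return type so mypy accepts batch_size=None.
from datasets import Dataset, load_dataset

d = load_dataset("some_dataset", split="train")  # return type is a union that includes IterableDataset
assert isinstance(d, Dataset)  # manual narrowing for the type checker
d = d.map(lambda batch: batch, batched=True, batch_size=None)  # accepted on a map-style Dataset
```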
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5325/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5324/comments
https://api.github.com/repos/huggingface/datasets/issues/5324/events
https://github.com/huggingface/datasets/issues/5324
1,471,524,512
I_kwDODunzps5Xta6g
5,324
Fix docstrings and types in documentation that appears on the website
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
2
2022-12-01T15:34:53
2022-12-13T19:03:55
null
CONTRIBUTOR
null
While I was working on https://github.com/huggingface/datasets/pull/5313, I noticed that we have a mess in how we annotate types and format args and return values in the code. Some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website. It would be nice someday, maybe before releasing datasets 3.0.0, to unify it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5324/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5323/comments
https://api.github.com/repos/huggingface/datasets/issues/5323/events
https://github.com/huggingface/datasets/issues/5323
1,471,518,803
I_kwDODunzps5XtZhT
5,323
Duplicated Keys in Taskmaster-2 Dataset
{ "login": "liaeh", "id": 52380283, "node_id": "MDQ6VXNlcjUyMzgwMjgz", "avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liaeh", "html_url": "https://github.com/liaeh", "followers_url": "https://api.github.com/users/liaeh/followers", "following_url": "https://api.github.com/users/liaeh/following{/other_user}", "gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liaeh/subscriptions", "organizations_url": "https://api.github.com/users/liaeh/orgs", "repos_url": "https://api.github.com/users/liaeh/repos", "events_url": "https://api.github.com/users/liaeh/events{/privacy}", "received_events_url": "https://api.github.com/users/liaeh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-12-01T15:31:06
2022-12-01T16:26:06
2022-12-01T16:26:06
NONE
null
### Describe the bug Loading certain splits () of the taskmaster-2 dataset fails because of a DuplicatedKeysError. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine. Output: ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("taskmaster2", "music") ``` Output: ``` --------------------------------------------------------------------------- DuplicatedKeysError Traceback (most recent call last) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1532, in GeneratorBasedBuilder._prepare_split_single(self, arg) [1531](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1530) example = self.info.features.encode_example(record) if self.info.features is not None else record -> [1532](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1531) writer.write(example, key) [1533](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1532) num_examples_progress_update += 1 File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:475, in ArrowWriter.write(self, example, key, writer_batch_size) [474](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=473) if self._check_duplicates: --> [475](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=474) self.check_duplicate_keys() [476](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=475) # Re-intializing to empty list for next batch File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self) [486](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=485) duplicate_key_indices = [ [487](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=486) str(self._num_examples + index) [488](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=487) for index, (duplicate_hash, _) in enumerate(self.hkey_record) [489](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=488) if duplicate_hash == hash [490](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=489) ] --> [492](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=491) raise DuplicatedKeysError(key, duplicate_key_indices) [493](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=492) else: DuplicatedKeysError: Found multiple examples generated with the same key The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735 During handling of the above exception, another exception occurred: DuplicatedKeysError Traceback (most recent call last) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1541, in GeneratorBasedBuilder._prepare_split_single(self, arg) 
[1540](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1539) num_shards = shard_id + 1 -> [1541](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1540) num_examples, num_bytes = writer.finalize() [1542](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1541) writer.close() File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:563, in ArrowWriter.finalize(self, close_stream) [562](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=561) if self._check_duplicates: --> [563](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=562) self.check_duplicate_keys() [564](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=563) # Re-intializing to empty list for next batch File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self) [486](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=485) duplicate_key_indices = [ [487](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=486) str(self._num_examples + index) [488](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=487) for index, (duplicate_hash, _) in enumerate(self.hkey_record) [489](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=488) if duplicate_hash == hash [490](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=489) ] --> [492](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=491) raise DuplicatedKeysError(key, duplicate_key_indices) [493](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=492) else: DuplicatedKeysError: Found multiple examples generated with the same key The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735 The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[23], line 1 ----> 1 dataset = load_dataset("taskmaster2", "music") File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py:1741, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) [1738](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1737) try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES [1740](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1739) # Download and prepare data -> [1741](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1740) builder_instance.download_and_prepare( [1742](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1741) 
download_config=download_config, [1743](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1742) download_mode=download_mode, [1744](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1743) ignore_verifications=ignore_verifications, [1745](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1744) try_from_hf_gcs=try_from_hf_gcs, [1746](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1745) use_auth_token=use_auth_token, [1747](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1746) num_proc=num_proc, [1748](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1747) ) [1750](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1749) # Build dataset for splits [1751](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1750) keep_in_memory = ( [1752](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1751) keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) [1753](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1752) ) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:822, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) [820](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=819) if num_proc is not None: [821](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=820) prepare_split_kwargs["num_proc"] = num_proc --> [822](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=821) self._download_and_prepare( [823](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=822) dl_manager=dl_manager, [824](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=823) verify_infos=verify_infos, [825](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=824) **prepare_split_kwargs, [826](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=825) **download_and_prepare_kwargs, [827](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=826) ) [828](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=827) # Sync info [829](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=828) self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1555, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 
[1554](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1553) def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): -> [1555](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1554) super()._download_and_prepare( [1556](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1555) dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs [1557](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1556) ) File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:913, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) [909](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=908) split_dict.add(split_generator.split_info) [911](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=910) try: [912](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=911) # Prepare split will record examples associated to the split --> [913](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=912) self._prepare_split(split_generator, **prepare_split_kwargs) [914](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=913) except OSError as e: [915](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=914) raise OSError( [916](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=915) "Cannot find data file. 
" [917](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=916) + (self.manual_download_instructions or "") [918](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=917) + "\nOriginal error:\n" [919](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=918) + str(e) [920](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=919) ) from None File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1396, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) [1394](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1393) gen_kwargs = split_generator.gen_kwargs [1395](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1394) job_id = 0 -> [1396](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1395) for job_id, done, content in self._prepare_split_single( [1397](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1396) {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args} [1398](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1397) ): [1399](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1398) if done: [1400](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1399) result = content File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1550, in GeneratorBasedBuilder._prepare_split_single(self, arg) [1548](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1547) if isinstance(e, SchemaInferenceError) and e.__context__ is not None: [1549](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1548) e = e.__context__ -> [1550](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1549) raise DatasetGenerationError("An error occurred while generating the dataset") from e [1552](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1551) yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior Loads the dataset ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5323/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5317/comments
https://api.github.com/repos/huggingface/datasets/issues/5317/events
https://github.com/huggingface/datasets/issues/5317
1,470,390,164
I_kwDODunzps5XpF-U
5,317
`ImageFolder` performs poorly with large datasets
{ "login": "salieri", "id": 1086393, "node_id": "MDQ6VXNlcjEwODYzOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salieri", "html_url": "https://github.com/salieri", "followers_url": "https://api.github.com/users/salieri/followers", "following_url": "https://api.github.com/users/salieri/following{/other_user}", "gists_url": "https://api.github.com/users/salieri/gists{/gist_id}", "starred_url": "https://api.github.com/users/salieri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salieri/subscriptions", "organizations_url": "https://api.github.com/users/salieri/orgs", "repos_url": "https://api.github.com/users/salieri/repos", "events_url": "https://api.github.com/users/salieri/events{/privacy}", "received_events_url": "https://api.github.com/users/salieri/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2022-12-01T00:04:21
2022-12-01T21:49:26
null
NONE
null
### Describe the bug While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolders when scanning a directory structure with large number of images. ## Setup * Nested directories (5 levels deep) * 3M+ images * 1 `metadata.jsonl` file ## Performance Degradation Point 1 Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85). One scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance. As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal. ## Performance Degradation Point 2 The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`. It runs for a long time (60min+), consuming significant amounts of RAM – even more than the point 1 above. Based on `iostat -d 2`, it performs **zero** disk operations, which to me suggests that there is a code based bottleneck there that could be sorted out. ### Steps to reproduce the bug ```python from datasets import load_dataset import os import huggingface_hub dataset = load_dataset( 'imagefolder', data_dir='/some/path', # just to spell it out: split=None, drop_labels=True, keep_in_memory=False ) dataset.push_to_hub('account/dataset', private=True) ``` ### Expected behavior While it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` would by default have a setup that can scale to large datasets. Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does? ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.7.10 - PyArrow version: 10.0.1 - Pandas version: 1.3.5
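A possible mitigation while this is unaddressed, sketched under the assumption that the images match a single glob: passing explicit `data_files` instead of `data_dir` should avoid resolving every entry in `ALL_DEFAULT_PATTERNS` against the 3M-file tree, since only the given patterns are expanded. The paths below are hypothetical.

```python
from datasets import load_dataset

# Hedged sketch: only these globs are resolved, instead of one full directory scan
# per default pattern. Include metadata.jsonl so the caption/label columns are kept.
data_files = {
    "train": [
        "/some/path/**/*.jpg",        # hypothetical glob for the nested image layout
        "/some/path/metadata.jsonl",  # hypothetical path to the metadata file
    ]
}

dataset = load_dataset(
    "imagefolder",
    data_files=data_files,
    drop_labels=True,
)
```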
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5317/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5316/comments
https://api.github.com/repos/huggingface/datasets/issues/5316/events
https://github.com/huggingface/datasets/issues/5316
1,470,115,681
I_kwDODunzps5XoC9h
5,316
Bug in sample_by="paragraph"
{ "login": "adampauls", "id": 1243668, "node_id": "MDQ6VXNlcjEyNDM2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adampauls", "html_url": "https://github.com/adampauls", "followers_url": "https://api.github.com/users/adampauls/followers", "following_url": "https://api.github.com/users/adampauls/following{/other_user}", "gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}", "starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adampauls/subscriptions", "organizations_url": "https://api.github.com/users/adampauls/orgs", "repos_url": "https://api.github.com/users/adampauls/repos", "events_url": "https://api.github.com/users/adampauls/events{/privacy}", "received_events_url": "https://api.github.com/users/adampauls/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2022-11-30T19:24:13
2022-12-01T15:19:02
2022-12-01T15:19:02
NONE
null
### Describe the bug I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the last iteration. ### Steps to reproduce the bug ``` > cat test.txt a b c d e f ``` ```python >>> import datasets >>> datasets.load_dataset("text", data_files={"train":"test.txt"}, sample_by="paragraph") ``` This will go on forever. ### Expected behavior Terminates very quickly. ### Environment info `version = "2.6.1"` but I think the bug is still there on main.
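For illustration only (this is not the library patch, just a standalone sketch of the loop shape the report argues for): re-reading into `batch` at the end of each iteration is what makes the loop terminate at end of file.

```python
# Standalone sketch of a chunked paragraph reader; `chunksize` and the helper name
# are illustrative, not taken from the library.
def iter_paragraphs(path, chunksize=65536):
    with open(path, encoding="utf-8") as f:
        leftover = ""
        batch = f.read(chunksize)
        while batch:
            batch = leftover + batch
            paragraphs = batch.split("\n\n")
            leftover = paragraphs.pop()        # possibly incomplete last paragraph
            yield from (p for p in paragraphs if p)
            batch = f.read(chunksize)          # the re-read that guarantees termination
        if leftover.strip():
            yield leftover


for paragraph in iter_paragraphs("test.txt"):
    print(repr(paragraph))
```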
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5316/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5315/comments
https://api.github.com/repos/huggingface/datasets/issues/5315/events
https://github.com/huggingface/datasets/issues/5315
1,470,026,797
I_kwDODunzps5XntQt
5,315
Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
3
2022-11-30T18:02:15
2022-12-02T07:02:53
null
CONTRIBUTOR
null
### Describe the bug If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails. That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48. ### Steps to reproduce the bug 1. create a dataset with a custom split that returns, for example, only `"train"` split in `_splits_generators'`. specifically, if really want to reproduce, copy `https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py 2. run `datasets-cli test dataset_script.py --save_info --all_configs` - this would generate metadata yaml in `README.md` that would contain info about splits, for example, like this: ``` splits: - name: train num_bytes: 2973286 num_examples: 19747 ``` 3. make changes to your script so that it returns another set of splits, for example, `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271)) 4. run `load_dataset` and get the following error: ```python Traceback (most recent call last): File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run builder.download_and_prepare( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split split_info = self.info.splits[split_generator.name] File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__ instructions = make_file_instructions( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions name2filenames = { File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp> info.name: filenames_for_dataset_split( File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split prefix = filename_prefix_for_split(dataset_name, split) File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split if os.path.basename(name) != name: File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename p = os.fspath(p) TypeError: expected str, bytes or os.PathLike object, not NoneType ``` 5. bonus: try to regenerate metadata in `README.md` with `datasets-cli` as in step 2 and get the same error. This is because `dataset.info.splits` contains only `"train"` split so when we are doing `self.info.splits[split_generator.name]` it tries to infer smth like `info.splits['train[50%]']` and that's not the case and it fails. 
### Expected behavior To be discussed. This can be solved by removing the splits information from the metadata file first, but I wonder if there is a better way. ### Environment info - Datasets version: 2.7.1 - Python version: 3.8.13
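A sketch of one way to apply the workaround above, assuming a single config so the stale sizes live under a `dataset_info` mapping in the README front matter, and assuming PyYAML is available. Afterwards `datasets-cli test ... --save_info --all_configs` can regenerate the split info.

```python
import re
import yaml  # assumes PyYAML is installed

readme_path = "README.md"  # hypothetical path to a local copy of the dataset card
text = open(readme_path, encoding="utf-8").read()

match = re.match(r"^---\n(.*?)\n---\n", text, flags=re.DOTALL)
if match:
    meta = yaml.safe_load(match.group(1))
    # Drop the stale split sizes so the builder no longer trusts the old split list.
    if isinstance(meta.get("dataset_info"), dict):
        meta["dataset_info"].pop("splits", None)
    front_matter = "---\n" + yaml.safe_dump(meta, sort_keys=False) + "---\n"
    open(readme_path, "w", encoding="utf-8").write(front_matter + text[match.end():])
```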
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5315/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5314/comments
https://api.github.com/repos/huggingface/datasets/issues/5314/events
https://github.com/huggingface/datasets/issues/5314
1,469,685,118
I_kwDODunzps5XmZ1-
5,314
Datasets: classification_report() got an unexpected keyword argument 'suffix'
{ "login": "JonathanAlis", "id": 42126634, "node_id": "MDQ6VXNlcjQyMTI2NjM0", "avatar_url": "https://avatars.githubusercontent.com/u/42126634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JonathanAlis", "html_url": "https://github.com/JonathanAlis", "followers_url": "https://api.github.com/users/JonathanAlis/followers", "following_url": "https://api.github.com/users/JonathanAlis/following{/other_user}", "gists_url": "https://api.github.com/users/JonathanAlis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JonathanAlis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JonathanAlis/subscriptions", "organizations_url": "https://api.github.com/users/JonathanAlis/orgs", "repos_url": "https://api.github.com/users/JonathanAlis/repos", "events_url": "https://api.github.com/users/JonathanAlis/events{/privacy}", "received_events_url": "https://api.github.com/users/JonathanAlis/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-11-30T14:01:03
2022-12-01T15:00:46
null
NONE
null
https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py > import datasets predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] seqeval = datasets.load_metric("seqeval") results = seqeval.compute(predictions=predictions, references=references) print(list(results.keys())) print(results["overall_f1"]) print(results["PER"]["f1"]) It raises the error: > TypeError: classification_report() got an unexpected keyword argument 'suffix' For context, versions on my pip list -v > datasets 1.12.1 seqeval 1.2.2
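One way to sidestep the version mismatch between the bundled metric script and the installed seqeval (the exact cause is not confirmed here) is to upgrade `datasets`, or to compute the metric through the standalone `evaluate` package, whose `seqeval` wrapper exposes the same result structure. A hedged sketch mirroring the snippet above:

```python
# Possible workaround sketch, assuming the standalone `evaluate` package is installed
# (pip install evaluate seqeval); the compute() call mirrors the snippet above.
import evaluate

predictions = [["O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]
references = [["O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]

seqeval = evaluate.load("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(list(results.keys()))
print(results["overall_f1"])
print(results["PER"]["f1"])
```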
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5314/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5306/comments
https://api.github.com/repos/huggingface/datasets/issues/5306/events
https://github.com/huggingface/datasets/issues/5306
1,465,968,639
I_kwDODunzps5XYOf_
5,306
Can't use custom feature description when loading a dataset
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-11-28T07:55:44
2022-11-28T08:11:45
2022-11-28T08:11:44
CONTRIBUTOR
null
### Describe the bug I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load. ### Steps to reproduce the bug ```python # Creating features task_list = [f"motif_G{i}" for i in range(19, 53)] features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list} for col_name in ["class_label"]: features[col_name] = Sequence(feature=Value(dtype="int64")) for col_name in ["num_nodes"]: features[col_name] = Value(dtype="int64") for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]: features[col_name] = Sequence(feature=Value(dtype="float64")) for col_name in ["edge_attr", "node_feat", "edge_index"]: features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64"))) print(features) dataset = load_dataset(path=f"graphs-datasets/unbalanced-motifs-500K", split="train", features=features) ``` Last line will crash and say 'TypeError: argument of type 'Sequence' is not iterable'. Full stack: ``` Traceback (most recent call last): File "pretrain_tokengt.py", line 131, in <module> main(output_folder = "../workspace/pretraining", File "pretrain_tokengt.py", line 52, in main dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features) File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset builder_instance = load_dataset_builder( File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__ info.update(self._info()) File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info return datasets.DatasetInfo(features=self.config.features) File "<string>", line 20, in __init__ File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__ self.features = Features.from_dict(self.features) File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict obj = generate_from_dict(dic) File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict if "_type" not in obj or isinstance(obj["_type"], dict): TypeError: argument of type 'Sequence' is not iterable ``` ### Expected behavior For it not to crash. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
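A hedged sketch of a possible fix: wrapping the plain dict in `datasets.Features` before passing it to `load_dataset`, so the feature spec is serialized as a `Features` object rather than a raw dict of `Sequence` values. Column names are copied from the snippet above; whether this resolves the crash in every case is an assumption.

```python
from datasets import Features, Sequence, Value, load_dataset

# Same feature spec as above, but wrapped in `Features` (the wrapper is the point).
task_list = [f"motif_G{i}" for i in range(19, 53)]
feature_dict = {t: Sequence(feature=Value(dtype="float64")) for t in task_list}
feature_dict["class_label"] = Sequence(feature=Value(dtype="int64"))
feature_dict["num_nodes"] = Value(dtype="int64")
for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]:
    feature_dict[col_name] = Sequence(feature=Value(dtype="float64"))
for col_name in ["edge_attr", "node_feat", "edge_index"]:
    feature_dict[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64")))

features = Features(feature_dict)
dataset = load_dataset(
    "graphs-datasets/unbalanced-motifs-500K",
    split="train",
    features=features,
)
```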
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5306/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5305/comments
https://api.github.com/repos/huggingface/datasets/issues/5305/events
https://github.com/huggingface/datasets/issues/5305
1,465,627,826
I_kwDODunzps5XW7Sy
5,305
Dataset joelito/mc4_legal does not work with multiple files
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-11-28T00:16:16
2022-11-28T07:22:42
2022-11-28T07:22:42
CONTRIBUTOR
null
### Describe the bug The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset. joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.py (debug) Found cached dataset mc4_legal (/Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/de/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f) Dataset({ features: ['index', 'url', 'timestamp', 'matches', 'text'], num_rows: 0 }) joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main)> python test_mc4_legal.py (debug) Downloading and preparing dataset mc4_legal/bg to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f... Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1240.55it/s] Dataset mc4_legal downloaded and prepared to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f. Subsequent calls will reuse this data. Dataset({ features: ['index', 'url', 'timestamp', 'matches', 'text'], num_rows: 204 }) ### Steps to reproduce the bug import datasets from datasets import load_dataset, get_dataset_config_names language = "bg" test = load_dataset("joelito/mc4_legal", language, split='train') ### Expected behavior It should display the correct number of rows for the de dataset which should be a large number (thousands or more). ### Environment info Package Version ------------------------ -------------- absl-py 1.3.0 aiohttp 3.8.1 aiosignal 1.2.0 astunparse 1.6.3 async-timeout 4.0.2 attrs 22.1.0 beautifulsoup4 4.11.1 blinker 1.4 blis 0.7.8 Bottleneck 1.3.4 brotlipy 0.7.0 cachetools 5.2.0 catalogue 2.0.7 certifi 2022.5.18.1 cffi 1.15.1 chardet 4.0.0 charset-normalizer 2.1.0 click 8.0.4 conllu 4.5.2 cryptography 38.0.1 cymem 2.0.6 datasets 2.6.1 dill 0.3.5.1 docker-pycreds 0.4.0 fasttext 0.9.2 fasttext-langdetect 1.0.3 filelock 3.0.12 flatbuffers 20210226132247 frozenlist 1.3.0 fsspec 2022.5.0 gast 0.4.0 gcloud 0.18.3 gitdb 4.0.9 GitPython 3.1.27 google-auth 2.9.0 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 googleapis-common-protos 1.57.0 grpcio 1.47.0 h5py 3.7.0 httplib2 0.21.0 huggingface-hub 0.8.1 idna 3.4 importlib-metadata 4.12.0 Jinja2 3.1.2 joblib 1.0.1 keras 2.9.0 Keras-Preprocessing 1.1.2 langcodes 3.3.0 lxml 4.9.1 Markdown 3.3.7 MarkupSafe 2.1.1 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 multidict 6.0.2 multiprocess 0.70.13 murmurhash 1.0.7 numexpr 2.8.1 numpy 1.22.3 oauth2client 4.1.3 oauthlib 3.2.1 opt-einsum 3.3.0 packaging 21.3 pandas 1.4.2 pathtools 0.1.2 pathy 0.6.1 pip 21.1.2 preshed 3.0.6 promise 2.3 protobuf 4.21.9 psutil 5.9.1 pyarrow 8.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pybind11 2.9.2 pycountry 22.3.5 pycparser 2.21 pydantic 1.8.2 PyJWT 2.4.0 pylzma 0.5.0 pyOpenSSL 22.0.0 pyparsing 3.0.4 PySocks 1.7.1 python-dateutil 2.8.2 pytz 2021.3 PyYAML 6.0 regex 2021.4.4 requests 2.28.1 requests-oauthlib 1.3.1 responses 0.18.0 rsa 4.8 sacremoses 0.0.45 scikit-learn 1.1.1 scipy 1.8.1 sentencepiece 0.1.96 sentry-sdk 1.6.0 setproctitle 1.2.3 setuptools 65.5.0 shortuuid 1.0.9 six 1.16.0 smart-open 5.2.1 smmap 5.0.0 soupsieve 2.3.2.post1 spacy 3.3.1 spacy-legacy 3.0.9 spacy-loggers 1.0.2 srsly 2.4.3 tabulate 0.8.9 tensorboard 2.9.1 
tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.9.1 tensorflow-estimator 2.9.0 termcolor 2.1.0 thinc 8.0.17 threadpoolctl 3.1.0 tokenizers 0.12.1 torch 1.13.0 tqdm 4.64.0 transformers 4.20.1 typer 0.4.1 typing-extensions 4.3.0 Unidecode 1.3.6 urllib3 1.26.12 wandb 0.12.20 wasabi 0.9.1 web-anno-tsv 0.0.1 Werkzeug 2.1.2 wget 3.2 wheel 0.35.1 wrapt 1.14.1 xxhash 3.0.0 yarl 1.8.1 zipp 3.8.0 Python 3.8.10
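A small debugging aid, not a fix for the loading script: the log above shows the empty "de" split is served from cache ("Found cached dataset"), so any change to the script will not be visible unless the cache is bypassed.

```python
from datasets import load_dataset

# Hedged debugging sketch: bypass the cached (empty) Arrow files so changes to the
# loading script are actually exercised, then check the row count.
ds = load_dataset("joelito/mc4_legal", "de", split="train", download_mode="force_redownload")
print(ds.num_rows)
```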
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5305/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5304/comments
https://api.github.com/repos/huggingface/datasets/issues/5304/events
https://github.com/huggingface/datasets/issues/5304
1,465,110,367
I_kwDODunzps5XU89f
5,304
timit_asr doesn't load the test split.
{ "login": "seyong92", "id": 17842800, "node_id": "MDQ6VXNlcjE3ODQyODAw", "avatar_url": "https://avatars.githubusercontent.com/u/17842800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seyong92", "html_url": "https://github.com/seyong92", "followers_url": "https://api.github.com/users/seyong92/followers", "following_url": "https://api.github.com/users/seyong92/following{/other_user}", "gists_url": "https://api.github.com/users/seyong92/gists{/gist_id}", "starred_url": "https://api.github.com/users/seyong92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seyong92/subscriptions", "organizations_url": "https://api.github.com/users/seyong92/orgs", "repos_url": "https://api.github.com/users/seyong92/repos", "events_url": "https://api.github.com/users/seyong92/events{/privacy}", "received_events_url": "https://api.github.com/users/seyong92/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-11-26T10:18:22
2022-12-01T13:28:59
null
NONE
null
### Describe the bug When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split. I tried changing the directory and file names between lower case and upper case for the test split, but it does not work at all. ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 0 }) }) ``` The directory structure of both splits is the same (DIALECT_REGION / SPEAKER_CODE / DATA_FILES). ### Steps to reproduce the bug 1. Just use ```timit = load_dataset('timit_asr', data_dir=data_dir)``` ### Expected behavior ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 1680 }) }) ``` ### Environment info - Ubuntu 20.04 - Python 3.9.13 - datasets 2.7.1
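A hedged diagnostic sketch to narrow this down: count the audio files under each top-level split directory, to separate "data missing on disk" from "loader not matching the test folder" (e.g. a folder-name casing mismatch, which is only a hypothesis here). The TIMIT root path is hypothetical.

```python
from pathlib import Path

# Hedged diagnostic: list the top-level split folders and count audio files in each.
data_dir = Path("/path/to/timit")  # hypothetical TIMIT root passed as data_dir
for split_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    audio_files = {str(p) for pat in ("*.WAV", "*.wav") for p in split_dir.rglob(pat)}
    print(f"{split_dir.name}: {len(audio_files)} audio files")
```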
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5304/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5298/comments
https://api.github.com/repos/huggingface/datasets/issues/5298/events
https://github.com/huggingface/datasets/issues/5298
1,464,681,871
I_kwDODunzps5XTUWP
5,298
Bug in xopen with Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-11-25T15:21:32
2022-11-29T08:21:25
2022-11-29T08:21:25
MEMBER
null
Currently, the `xopen` function has a bug with local Windows pathnames. From its implementation: ```python def xopen(file: str, mode="r", *args, **kwargs): file = _as_posix(PurePath(file)) main_hop, *rest_hops = file.split("::") if is_local_path(main_hop): return open(file, mode, *args, **kwargs) ``` On a Windows machine, if we pass the argument: ```python xopen("C:\\Users\\USERNAME\\filename.txt") ``` it effectively calls: ```python open("C:/Users/USERNAME/filename.txt") ```
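A minimal sketch of one way the local branch could behave (not necessarily the patch that was merged): hand local Windows paths to `open` untouched and only fall back to POSIX normalization for remote/chained URLs. `is_local_path` here is a simplified stand-in for the real helper.

```python
from pathlib import PurePath


def is_local_path(path: str) -> bool:
    # Simplified stand-in for the real helper: anything without a URL scheme is local.
    return "://" not in path


def xopen_sketch(file: str, mode="r", *args, **kwargs):
    main_hop, *rest_hops = str(file).split("::")
    if is_local_path(main_hop):
        # Keep the OS-native path, so "C:\\Users\\USERNAME\\filename.txt" is opened as-is.
        return open(main_hop, mode, *args, **kwargs)
    # Remote/chained hops go through fsspec in the real implementation; omitted here.
    posix_file = PurePath(file).as_posix()
    raise NotImplementedError(f"remote path handling omitted from this sketch: {posix_file}")
```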
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5298/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5296/comments
https://api.github.com/repos/huggingface/datasets/issues/5296/events
https://github.com/huggingface/datasets/issues/5296
1,464,553,580
I_kwDODunzps5XS1Bs
5,296
Bug in xjoin with Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-11-25T13:29:33
2022-11-29T08:05:13
2022-11-29T08:05:13
MEMBER
null
Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent joined pathname, it always returns it in POSIX format. ```python from datasets.download.streaming_download_manager import xjoin path = xjoin("C:\\Users\\USERNAME", "filename.txt") ``` The joined path should be: ```python "C:\\Users\\USERNAME\\filename.txt" ``` However, it is: ```python "C:/Users/USERNAME/filename.txt" ```
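A minimal sketch of the behavior described as expected above (not the merged patch): join local paths with the OS separator and reserve POSIX-style joining for URLs. The scheme check is a simplified stand-in.

```python
import os
import posixpath


def xjoin_sketch(base: str, *parts: str) -> str:
    # OS-native joins for local paths, POSIX-style joins only for URLs.
    if "://" in base:
        return posixpath.join(base, *parts)
    return os.path.join(base, *parts)


# On Windows this yields the expected "C:\\Users\\USERNAME\\filename.txt".
print(xjoin_sketch("C:\\Users\\USERNAME", "filename.txt"))
```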
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5296/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5295/comments
https://api.github.com/repos/huggingface/datasets/issues/5295/events
https://github.com/huggingface/datasets/issues/5295
1,464,006,743
I_kwDODunzps5XQvhX
5,295
Extraction fails when the .zip file is located on a read-only path (e.g., SageMaker FastFile mode)
{ "login": "verdimrc", "id": 2340781, "node_id": "MDQ6VXNlcjIzNDA3ODE=", "avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/verdimrc", "html_url": "https://github.com/verdimrc", "followers_url": "https://api.github.com/users/verdimrc/followers", "following_url": "https://api.github.com/users/verdimrc/following{/other_user}", "gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions", "organizations_url": "https://api.github.com/users/verdimrc/orgs", "repos_url": "https://api.github.com/users/verdimrc/repos", "events_url": "https://api.github.com/users/verdimrc/events{/privacy}", "received_events_url": "https://api.github.com/users/verdimrc/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
2
2022-11-25T03:59:43
2022-12-01T13:56:40
null
NONE
null
### Describe the bug Hi, `load_dataset()` does not work with .zip files located in a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. Encountered this when attempting `load_dataset()` on a datadir with SageMaker FastFile mode. ### Steps to reproduce the bug ```python # Showing relevant lines only. hyperparameters = { "dataset_name": "ydshieh/coco_dataset_script", "dataset_config_name": 2017, "data_dir": "/opt/ml/input/data/coco", "cache_dir": "/tmp/huggingface-cache", # Fix dataset complains out-of-space. ... } estimator = PyTorch( base_job_name="clip", source_dir="../src/sm-entrypoint", entry_point="run_clip.py", # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py framework_version="1.12", py_version="py38", hyperparameters=hyperparameters, instance_count=1, instance_type="ml.p3.16xlarge", volume_size=100, distribution={"smdistributed": {"dataparallel": {"enabled": True}}}, ) fast_file = lambda x: TrainingInput(x, input_mode='FastFile') estimator.fit( { "pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"), "coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"), } ) ``` Error message: ```text ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock' """ The above exception was the direct cause of the following exception Traceback (most recent call last) File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module> main() File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main run_command_line(args) File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line run_path(sys.argv[0], run_name='__main__') File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path return _run_module_code(code, init_globals, run_name, File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "run_clip_smddp.py", line 594, in <module> File "run_clip_smddp.py", line 327, in main dataset = load_dataset( File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators archive_path = dl_manager.download_and_extract(_DL_URLS) File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract return self.extract(self.download(url_or_urls)) File 
"/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract extracted_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested mapped = pool.map(_single_map_nested, split_kwds) File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get raise self._value OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'" ``` ### Expected behavior `load_dataset()` to succeed, just like when .zip file is passed in SageMaker File mode. ### Environment info * datasets-2.7.1 * transformers-4.24.0 * python-3.8 * torch-1.12 * SageMaker PyTorch DLC
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5295/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5293/comments
https://api.github.com/repos/huggingface/datasets/issues/5293/events
https://github.com/huggingface/datasets/issues/5293
1,463,669,201
I_kwDODunzps5XPdHR
5,293
Support streaming datasets with pathlib.Path.with_suffix
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-11-24T17:52:08
2022-11-29T07:06:33
2022-11-29T07:06:33
MEMBER
null
Extend support for streaming datasets that use `pathlib.Path.with_suffix`. This feature will be useful e.g. for datasets containing text files and annotated files with the same name but different extension.
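For reference, a hedged sketch of the string-level behaviour such an extension needs when paths are actually URLs in streaming mode (this is an illustration, not the `datasets` implementation):

```python
# Illustration only: swap the extension of a URL the way Path.with_suffix does
# for local paths.
import posixpath


def url_with_suffix(url: str, suffix: str) -> str:
    root, _ = posixpath.splitext(url)
    return root + suffix


print(url_with_suffix("https://example.com/corpus/sample.txt", ".ann"))
# https://example.com/corpus/sample.ann
```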
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5293/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5292/comments
https://api.github.com/repos/huggingface/datasets/issues/5292/events
https://github.com/huggingface/datasets/issues/5292
1,463,053,832
I_kwDODunzps5XNG4I
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2022-11-24T09:42:10
2022-11-24T10:10:02
2022-11-24T10:10:02
MEMBER
null
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix by: - #5291 However, both docs were built from the main branch instead of their corresponding version branches. We are rebuilding them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5292/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5288/comments
https://api.github.com/repos/huggingface/datasets/issues/5288/events
https://github.com/huggingface/datasets/issues/5288
1,462,134,067
I_kwDODunzps5XJmUz
5,288
Lossy json serialization - deserialization of dataset info
{ "login": "anuragprat1k", "id": 57542204, "node_id": "MDQ6VXNlcjU3NTQyMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/57542204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anuragprat1k", "html_url": "https://github.com/anuragprat1k", "followers_url": "https://api.github.com/users/anuragprat1k/followers", "following_url": "https://api.github.com/users/anuragprat1k/following{/other_user}", "gists_url": "https://api.github.com/users/anuragprat1k/gists{/gist_id}", "starred_url": "https://api.github.com/users/anuragprat1k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anuragprat1k/subscriptions", "organizations_url": "https://api.github.com/users/anuragprat1k/orgs", "repos_url": "https://api.github.com/users/anuragprat1k/repos", "events_url": "https://api.github.com/users/anuragprat1k/events{/privacy}", "received_events_url": "https://api.github.com/users/anuragprat1k/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-11-23T17:20:15
2022-11-25T12:53:51
null
NONE
null
### Describe the bug Saving a dataset to disk as json (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have a label of type `ClassLabel` but has type `Value` instead. ### Steps to reproduce the bug ``` from datasets import load_dataset def test_serdes_from_json(d): dataset = load_dataset(d, split="train") dataset.to_json('_test') dataset_loaded = load_dataset("json", data_files='_test', split='train') try: assert dataset_loaded.info.features == dataset.info.features, "features unequal!" except Exception as ex: print(f'{ex}') print(f'expected {dataset.info.features}, \nactual { dataset_loaded.info.features }') test_serdes_from_json('rotten_tomatoes') ``` Output ``` features unequal! expected {'text': Value(dtype='string', id=None), 'label': ClassLabel(names=['neg', 'pos'], id=None)}, actual {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)} ``` ### Expected behavior The deserialized `features.label` should have type `ClassLabel`. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.144-127.601.amzn2.x86_64-x86_64-with-glibc2.17 - Python version: 3.7.13 - PyArrow version: 7.0.0 - Pandas version: 1.2.3
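A workaround sketch while the JSON round trip stays lossy: JSON carries no feature metadata, so the original features can be re-applied with `cast` after reloading (this assumes the original dataset object, or at least its `Features`, is still available):

```python
# Hedged workaround: restore the ClassLabel typing by casting the reloaded
# dataset back to the original features.
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")
dataset.to_json("_test")
dataset_loaded = load_dataset("json", data_files="_test", split="train")
dataset_loaded = dataset_loaded.cast(dataset.features)  # int64 -> ClassLabel

assert dataset_loaded.features == dataset.features
```

Alternatively, `save_to_disk`/`load_from_disk` preserves the feature metadata alongside the data, so no cast is needed with that round trip.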
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5288/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5286/comments
https://api.github.com/repos/huggingface/datasets/issues/5286/events
https://github.com/huggingface/datasets/issues/5286
1,461,908,087
I_kwDODunzps5XIvJ3
5,286
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
{ "login": "roritol", "id": 32490135, "node_id": "MDQ6VXNlcjMyNDkwMTM1", "avatar_url": "https://avatars.githubusercontent.com/u/32490135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roritol", "html_url": "https://github.com/roritol", "followers_url": "https://api.github.com/users/roritol/followers", "following_url": "https://api.github.com/users/roritol/following{/other_user}", "gists_url": "https://api.github.com/users/roritol/gists{/gist_id}", "starred_url": "https://api.github.com/users/roritol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roritol/subscriptions", "organizations_url": "https://api.github.com/users/roritol/orgs", "repos_url": "https://api.github.com/users/roritol/repos", "events_url": "https://api.github.com/users/roritol/events{/privacy}", "received_events_url": "https://api.github.com/users/roritol/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-11-23T14:54:15
2022-11-25T11:33:14
2022-11-25T11:33:14
NONE
null
### Describe the bug I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia) $ pip install apache_beam mwparserfromhell >>> from datasets import load_dataset >>> load_dataset("wikipedia", "20220301.en") however this results in the following error: raise MissingBeamOptions( datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')` If I then prompt the system with: >>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') the following error occurs: raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json Here is the exact code: Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> load_dataset('wikipedia', '20220301.en') Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 22.2MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1879, in _download_and_prepare raise MissingBeamOptions( datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')` >>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... 
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 18.8MB/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1909, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 891, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rorytol/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 945, in _split_generators downloaded_files = dl_manager.download_and_extract({"info": info_url}) File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 447, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 311, in download downloaded_path_or_paths = map_nested( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 444, in map_nested mapped = [ File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 445, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 338, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 183, in cached_path output_path = get_from_cache( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 530, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json ### Steps to reproduce the bug $ pip install apache_beam mwparserfromhell >>> from datasets import load_dataset >>> load_dataset("wikipedia", "20220301.en") >>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') ### Expected behavior Download the dataset ### Environment info Running linux on a remote workstation operated through a macbook terminal Python 3.10.6
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5286/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5286/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5284/comments
https://api.github.com/repos/huggingface/datasets/issues/5284/events
https://github.com/huggingface/datasets/issues/5284
1,461,519,733
I_kwDODunzps5XHQV1
5,284
Features of IterableDataset set to None by remove_columns
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
18
2022-11-23T10:54:59
2023-02-02T09:05:51
2022-11-28T12:53:24
CONTRIBUTOR
null
### Describe the bug The `remove_columns` method of the IterableDataset sets the dataset features to None. ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset # load LS in streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # check original features print("Original features: ", dataset.features.keys()) # define features to remove: we KEEP audio and text COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id'] dataset = dataset.remove_columns(COLUMNS_TO_REMOVE) # check processed features, uh-oh! print("Processed features: ", dataset.features) # streaming the first audio sample still works print("First sample:", next(iter(dataset))) ``` **Print Output:** ``` Original features: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id']) Processed features: None First sample: {'audio': {'path': '2277-149896-0000.flac', 'array': array([ 0.00186157, 0.0005188 , 0.00024414, ..., -0.00097656, -0.00109863, -0.00146484]), 'sampling_rate': 16000}, 'text': "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"} ``` ### Expected behavior The features should be those **not** removed by the `remove_columns` method, i.e. audio and text. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (Running on Google Colab for a blog post: https://colab.research.google.com/drive/1ySCQREPZEl4msLfxb79pYYOWjUZhkr9y#scrollTo=8pRDGiVmH2ml) cc @polinaeterna @lhoestq
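Until the fix lands, a hedged workaround sketch for affected versions is to rebuild the expected features by hand after `remove_columns` (this assumes `dataset.info.features` is writable, which it is in current releases):

```python
# Hedged workaround: keep the original features and re-assign the surviving
# ones, since the lazy pipeline drops them on affected versions.
from datasets import Features, load_dataset

dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
original_features = dataset.features

COLUMNS_TO_REMOVE = ["chapter_id", "speaker_id", "file", "id"]
dataset = dataset.remove_columns(COLUMNS_TO_REMOVE)

if dataset.features is None:  # only needed on versions hit by this bug
    dataset.info.features = Features(
        {k: v for k, v in original_features.items() if k not in COLUMNS_TO_REMOVE}
    )

print(dataset.features.keys())  # dict_keys(['audio', 'text'])
```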
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5284/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5281/comments
https://api.github.com/repos/huggingface/datasets/issues/5281/events
https://github.com/huggingface/datasets/issues/5281
1,459,930,271
I_kwDODunzps5XBMSf
5,281
Support cloud storage in load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
3
2022-11-22T14:00:10
2023-02-01T16:31:56
null
MEMBER
null
Would be nice to be able to do ```python data_files=["s3://..."] storage_options = {...} load_dataset(..., data_files=data_files, storage_options=storage_options) ``` or even ```python load_dataset("gs://...") ``` The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`. This has been requested several times already. Some users want to use their data from private cloud storage to train models related: https://github.com/huggingface/datasets/issues/3490 https://github.com/huggingface/datasets/issues/5244 [forum](https://discuss.huggingface.co/t/how-to-use-s3-path-with-load-dataset-with-streaming-true/25739/2)
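In the meantime, a hedged interim sketch (the bucket name, file path, and credentials below are hypothetical): read the files through `fsspec`/`pyarrow` directly and wrap the result in a `Dataset`:

```python
# Interim sketch: fetch a parquet file from S3 with fsspec/pyarrow and build an
# in-memory Dataset from it.
import fsspec
import pyarrow.parquet as pq

from datasets import Dataset

storage_options = {"key": "...", "secret": "..."}  # hypothetical credentials
fs = fsspec.filesystem("s3", **storage_options)

with fs.open("my-bucket/data/train.parquet", "rb") as f:  # hypothetical path
    table = pq.read_table(f)

dataset = Dataset.from_pandas(table.to_pandas())
```

Native support in `load_dataset` would remove this manual step and keep streaming, caching, and multi-file resolution consistent with the rest of the library.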
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5281/reactions", "total_count": 10, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5281/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5280/comments
https://api.github.com/repos/huggingface/datasets/issues/5280/events
https://github.com/huggingface/datasets/issues/5280
1,459,823,179
I_kwDODunzps5XAyJL
5,280
Import error
{ "login": "feketedavid1012", "id": 40760055, "node_id": "MDQ6VXNlcjQwNzYwMDU1", "avatar_url": "https://avatars.githubusercontent.com/u/40760055?v=4", "gravatar_id": "", "url": "https://api.github.com/users/feketedavid1012", "html_url": "https://github.com/feketedavid1012", "followers_url": "https://api.github.com/users/feketedavid1012/followers", "following_url": "https://api.github.com/users/feketedavid1012/following{/other_user}", "gists_url": "https://api.github.com/users/feketedavid1012/gists{/gist_id}", "starred_url": "https://api.github.com/users/feketedavid1012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/feketedavid1012/subscriptions", "organizations_url": "https://api.github.com/users/feketedavid1012/orgs", "repos_url": "https://api.github.com/users/feketedavid1012/repos", "events_url": "https://api.github.com/users/feketedavid1012/events{/privacy}", "received_events_url": "https://api.github.com/users/feketedavid1012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2022-11-22T12:56:43
2022-12-15T19:57:40
2022-12-15T19:57:40
NONE
null
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the above line. I have Python version 3.8.13, and the message says I need python>=3.7, which is true, so I think the if statement is not working properly (or the message is wrong).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5280/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5278/comments
https://api.github.com/repos/huggingface/datasets/issues/5278/events
https://github.com/huggingface/datasets/issues/5278
1,459,574,490
I_kwDODunzps5W_1ba
5,278
load_dataset does not read jsonl metadata file properly
{ "login": "065294847", "id": 81414263, "node_id": "MDQ6VXNlcjgxNDE0MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/81414263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/065294847", "html_url": "https://github.com/065294847", "followers_url": "https://api.github.com/users/065294847/followers", "following_url": "https://api.github.com/users/065294847/following{/other_user}", "gists_url": "https://api.github.com/users/065294847/gists{/gist_id}", "starred_url": "https://api.github.com/users/065294847/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/065294847/subscriptions", "organizations_url": "https://api.github.com/users/065294847/orgs", "repos_url": "https://api.github.com/users/065294847/repos", "events_url": "https://api.github.com/users/065294847/events{/privacy}", "received_events_url": "https://api.github.com/users/065294847/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
2022-11-22T10:24:46
2022-11-23T11:38:35
2022-11-23T11:38:35
NONE
null
### Describe the bug Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features. Below is code to reproduce my exact example/problem. ### Steps to reproduce the bug ```python dataset_link="19Unu89Ih_kP6zsE7f9Mkw8dy3NwHopRF" id = dataset_link output = 'Godardv01.zip' gdown.download(id=id, output=output, quiet=False) ds = load_dataset("imagefolder", data_dir="/kaggle/working/Volumes/TOSHIBA/Godard_imgs/Volumes/TOSHIBA/Godard_imgs/Full/train", split="train", drop_labels=False) print(ds) ``` ### Expected behavior I would expect it to return "image" and "text" columns from the code above. ### Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 5.0.0 - Pandas version: 1.3.5
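For anyone hitting the same thing, a hedged sketch of the layout the `imagefolder` builder expects: a file literally named `metadata.jsonl` with a `file_name` column, sitting next to the images it describes (the paths and captions below are hypothetical). Upgrading `datasets` may also be needed, since older releases did not read image folder metadata.

```python
# Sketch: write a metadata.jsonl the imagefolder loader can pick up.
import json

records = [
    {"file_name": "0001.png", "text": "caption for the first image"},
    {"file_name": "0002.png", "text": "caption for the second image"},
]
with open("train/metadata.jsonl", "w") as f:  # must live alongside the images
    for record in records:
        f.write(json.dumps(record) + "\n")

# ds = load_dataset("imagefolder", data_dir="train", split="train")
# print(ds.features)  # expected: "image" and "text" (labels are dropped when metadata is found)
```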
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5278/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5276/comments
https://api.github.com/repos/huggingface/datasets/issues/5276/events
https://github.com/huggingface/datasets/issues/5276
1,459,363,442
I_kwDODunzps5W_B5y
5,276
Bug in downloading common_voice data and a small chunk of it to one's own hub
{ "login": "capsabogdan", "id": 48530104, "node_id": "MDQ6VXNlcjQ4NTMwMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/48530104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capsabogdan", "html_url": "https://github.com/capsabogdan", "followers_url": "https://api.github.com/users/capsabogdan/followers", "following_url": "https://api.github.com/users/capsabogdan/following{/other_user}", "gists_url": "https://api.github.com/users/capsabogdan/gists{/gist_id}", "starred_url": "https://api.github.com/users/capsabogdan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capsabogdan/subscriptions", "organizations_url": "https://api.github.com/users/capsabogdan/orgs", "repos_url": "https://api.github.com/users/capsabogdan/repos", "events_url": "https://api.github.com/users/capsabogdan/events{/privacy}", "received_events_url": "https://api.github.com/users/capsabogdan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
17
2022-11-22T08:17:53
2022-11-30T16:59:49
null
NONE
null
### Describe the bug I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it, without downloading the entire dataset. Help please? ![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4eaf-be26-8aa13794def2.png) ### Steps to reproduce the bug So here is what I have done: 1. Download common_voice data 2. Trim part of it and publish it to my own repo. 3. Download data from my own repo, but I am getting this error. ### Expected behavior There shouldn't be an error in downloading part of the data and publishing it to one's own repo ### Environment info common_voice 11
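As a possible interim approach (a hedged sketch; the config name, split, and chunk size are illustrative), streaming plus `take()` pulls a small chunk without downloading the whole corpus first:

```python
# Sketch: stream Common Voice and keep only the first 1000 examples.
from datasets import load_dataset

cv = load_dataset(
    "mozilla-foundation/common_voice_11_0", "en",
    split="train", streaming=True, use_auth_token=True,
)
small_chunk = list(cv.take(1000))  # nothing beyond these examples is fetched
```

The trimmed examples can then be turned back into a regular `Dataset` (e.g. with `Dataset.from_list`) and pushed to your own repo with `push_to_hub`; the error when re-downloading from the custom repo is a separate issue from the trimming step.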
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5276/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5275/comments
https://api.github.com/repos/huggingface/datasets/issues/5275/events
https://github.com/huggingface/datasets/issues/5275
1,459,358,919
I_kwDODunzps5W_AzH
5,275
YAML integer keys are not preserved Hub server-side
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
13
2022-11-22T08:14:47
2023-01-26T10:52:35
2023-01-26T10:40:21
MEMBER
null
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563): - YAML integer keys are not preserved server-side: they are transformed to strings - See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files - Original: ```yaml class_label: names: 0: B-long 1: B-short ``` - Returned by the server: ```yaml class_label: names: '0': B-long '1': B-short ``` - They are planning to enforce only string keys - Other projects already use interger-transformed-to string keys: e.g. `transformers` models `id2label`: https://huggingface.co/roberta-large-mnli/blob/main/config.json ```yaml "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" } ``` On the other hand, at `datasets` we are currently using YAML integer keys for `dataset_info` `class_label`. Please note (thanks @lhoestq for pointing out) that previous versions (2.6 and 2.7) of `datasets` need being patched: ```python In [18]: Features._from_yaml_list([{'dtype': {'class_label': {'names': {'0': 'neg', '1': 'pos'}}}, 'name': 'label'}]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-18-974f07eea526> in <module> ----> 1 Features._from_yaml_list(ry) ~/Desktop/hf/nlp/src/datasets/features/features.py in _from_yaml_list(cls, yaml_data) 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") 1744 -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) 1746 1747 def encode_example(self, example): ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") ~/Desktop/hf/nlp/src/datasets/features/features.py in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1734 return {"_type": snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] ~/Desktop/hf/nlp/src/datasets/features/features.py in unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 
TypeError: can only concatenate str (not "int") to str ``` TODO: - [x] Remove YAML integer keys from `dataset_info` metadata - [x] Make a patch release for affected `datasets` versions: 2.6 and 2.7 - [x] Communicate on the fix - [x] Wait for adoption - [x] Bulk edit the Hub to fix this in all canonical datasets
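For illustration, a minimal sketch (not the exact `datasets` code) of the normalization the first TODO item implies: dump the class label names with string keys so the Hub round-trips the YAML unchanged, matching the server-side behaviour shown above:

```python
# Sketch: integer label ids become string keys before the YAML is written.
import yaml

names = {0: "B-long", 1: "B-short"}
metadata = {"class_label": {"names": {str(idx): name for idx, name in names.items()}}}
print(yaml.safe_dump(metadata, sort_keys=False))
# class_label:
#   names:
#     '0': B-long
#     '1': B-short
```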
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5275/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5275/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5274/comments
https://api.github.com/repos/huggingface/datasets/issues/5274/events
https://github.com/huggingface/datasets/issues/5274
1,458,646,455
I_kwDODunzps5W8S23
5,274
load_dataset possibly broken for gated datasets?
{ "login": "TristanThrush", "id": 20826878, "node_id": "MDQ6VXNlcjIwODI2ODc4", "avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TristanThrush", "html_url": "https://github.com/TristanThrush", "followers_url": "https://api.github.com/users/TristanThrush/followers", "following_url": "https://api.github.com/users/TristanThrush/following{/other_user}", "gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}", "starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions", "organizations_url": "https://api.github.com/users/TristanThrush/orgs", "repos_url": "https://api.github.com/users/TristanThrush/repos", "events_url": "https://api.github.com/users/TristanThrush/events{/privacy}", "received_events_url": "https://api.github.com/users/TristanThrush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
2022-11-21T21:59:53
2022-11-28T02:50:42
2022-11-28T02:50:42
MEMBER
null
### Describe the bug When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub: ``` [/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_repo_id(repo_id) 165 if repo_id.count("/") > 1: 166 raise HFValidationError( --> 167 "Repo id must be in the form 'repo_name' or 'namespace/repo_name':" 168 f" '{repo_id}'. Use `repo_type` argument if needed." 169 ) HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/facebook/winoground'. Use `repo_type` argument if needed ``` ### Steps to reproduce the bug Install requirements: ``` pip install transformers pip install datasets # It works if you uncomment the following line, rolling back huggingface hub: # pip install huggingface-hub==0.10.1 ``` Then: ``` from datasets import load_dataset auth_token = "" # Replace with an auth token, which you can get from your huggingface account: Profile -> Settings -> Access Tokens -> New Token winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"] ``` ### Expected behavior Downloading of the dataset ### Environment info Just a google colab; see here: https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5274/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5274/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5273/comments
https://api.github.com/repos/huggingface/datasets/issues/5273/events
https://github.com/huggingface/datasets/issues/5273
1,458,018,050
I_kwDODunzps5W55cC
5,273
download_mode="force_redownload" does not refresh cached dataset
{ "login": "nomisto", "id": 28439912, "node_id": "MDQ6VXNlcjI4NDM5OTEy", "avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nomisto", "html_url": "https://github.com/nomisto", "followers_url": "https://api.github.com/users/nomisto/followers", "following_url": "https://api.github.com/users/nomisto/following{/other_user}", "gists_url": "https://api.github.com/users/nomisto/gists{/gist_id}", "starred_url": "https://api.github.com/users/nomisto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nomisto/subscriptions", "organizations_url": "https://api.github.com/users/nomisto/orgs", "repos_url": "https://api.github.com/users/nomisto/repos", "events_url": "https://api.github.com/users/nomisto/events{/privacy}", "received_events_url": "https://api.github.com/users/nomisto/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-11-21T14:12:43
2022-11-21T14:13:03
null
NONE
null
### Describe the bug `load_datasets` does not refresh dataset when features are imported from external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields, however it is more likely to occur with nested fields. ### Steps to reproduce the bug To reproduce the bug 3 files are needed: `dataset.py` (contains dataset loading script), `schema.py` (contains features of dataset) and `main.py` (to run `load_datasets`) `dataset.py` ```python import datasets from schema import features class NewDataset(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo( features=features ) def _split_generators(self, dl_manager): return [ datasets.SplitGenerator( name=datasets.Split.TRAIN ) ] def _generate_examples(self): data = [ {"id": 0, "nested": []}, {"id": 1, "nested": []} ] for key, example in enumerate(data): yield key, example ``` `schema.py` ```python import datasets features = datasets.Features( { "id": datasets.Value("int32"), "nested": [ {"text": datasets.Value("string")} ] } ) ``` `main.py` ```python import datasets a = datasets.load_dataset("dataset.py") print(a["train"].info.features) ``` Now if `main.py` is run it prints the following correct output: `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`. However, if f.e. the label of the feature "text" is changed to something else, f.e. to `schema.py` ```python import datasets features = datasets.Features( { "id": datasets.Value("int32"), "nested": [ {"textfoo": datasets.Value("string")} ] } ) ``` `main.py` still prints `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`, even if run with `download_mode="force_redownload"`. The only fix is to delete the folder in the cache. ### Expected behavior The cached dataset is deleted and refreshed when using `load_datasets` with `download_mode="force_redownload"`. ### Environment info - `datasets` version: 2.7.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.9 - PyArrow version: 10.0.0 - Pandas version: 1.3.5
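Until the script hashing accounts for imported files, a hedged workaround sketch (the paths assume the default cache location and the script name `dataset.py` from the example above) is to drop the cached copy of the loading script and its imports before reloading:

```python
# Hedged workaround: remove the cached dynamic module for this script so the
# edited schema.py is copied and imported again on the next load.
import shutil
from pathlib import Path

import datasets

modules_cache = (
    Path.home() / ".cache" / "huggingface" / "modules" / "datasets_modules" / "datasets"
)
for cached_script in modules_cache.glob("dataset*"):  # "dataset" matches dataset.py above
    shutil.rmtree(cached_script)

a = datasets.load_dataset("dataset.py", download_mode="force_redownload")
print(a["train"].info.features)
```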
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5273/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5272/comments
https://api.github.com/repos/huggingface/datasets/issues/5272/events
https://github.com/huggingface/datasets/issues/5272
1,456,940,021
I_kwDODunzps5W1yP1
5,272
Use pyarrow Tensor dtype
{ "login": "franz101", "id": 18228395, "node_id": "MDQ6VXNlcjE4MjI4Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franz101", "html_url": "https://github.com/franz101", "followers_url": "https://api.github.com/users/franz101/followers", "following_url": "https://api.github.com/users/franz101/following{/other_user}", "gists_url": "https://api.github.com/users/franz101/gists{/gist_id}", "starred_url": "https://api.github.com/users/franz101/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/franz101/subscriptions", "organizations_url": "https://api.github.com/users/franz101/orgs", "repos_url": "https://api.github.com/users/franz101/repos", "events_url": "https://api.github.com/users/franz101/events{/privacy}", "received_events_url": "https://api.github.com/users/franz101/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
6
2022-11-20T15:18:41
2022-11-21T17:57:55
null
NONE
null
### Feature request I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example: ```python import pyarrow as pa import numpy as np x = np.array([[2, 2, 4], [4, 5, 100]], np.int32) pa.Tensor.from_numpy(x, dim_names=["dim1","dim2"]) ``` [Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html) Maybe this belongs in the pyarrow features / repo. ### Motivation Working with big data, we need to make sure we use the best data structures and IO available. ### Your contribution I can try to open a PR if code changes are necessary.
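For context, a small sketch of how fixed-shape numeric arrays can already be declared today with the existing `Array2D` feature type; the requested `pyarrow.Tensor` integration itself is not shown here, this is only the current alternative.

```python
# Sketch of the current approach (not the proposed pyarrow.Tensor integration):
# fixed-shape numeric arrays stored with the existing Array2D feature type.
import datasets

features = datasets.Features({"embedding": datasets.Array2D(shape=(2, 3), dtype="int32")})
ds = datasets.Dataset.from_dict(
    {"embedding": [[[2, 2, 4], [4, 5, 100]]]},  # one 2x3 example, as nested lists
    features=features,
)
print(ds.features)
print(ds[0]["embedding"])  # values come back as nested Python lists by default
```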
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5272/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5272/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5270/comments
https://api.github.com/repos/huggingface/datasets/issues/5270/events
https://github.com/huggingface/datasets/issues/5270
1,456,508,990
I_kwDODunzps5W0JA-
5,270
When len(_URLS) > 16, download will hang
{ "login": "Freed-Wu", "id": 32936898, "node_id": "MDQ6VXNlcjMyOTM2ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Freed-Wu", "html_url": "https://github.com/Freed-Wu", "followers_url": "https://api.github.com/users/Freed-Wu/followers", "following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}", "gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions", "organizations_url": "https://api.github.com/users/Freed-Wu/orgs", "repos_url": "https://api.github.com/users/Freed-Wu/repos", "events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}", "received_events_url": "https://api.github.com/users/Freed-Wu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
7
2022-11-19T14:27:41
2022-11-21T15:27:16
null
NONE
null
### Describe the bug ```python In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s] [11/19/22 22:16:21] WARNING Using custom data configuration default builder.py:379 Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/bd1cc3434212e3e654f7e16ad618f8a1470b5982b086c91b1d6bc7187183c6e9... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 531k/531k [00:02<00:00, 239kB/s] #10: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.06s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 534k/534k [00:02<00:00, 193kB/s] #14: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.37s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 692k/692k [00:02<00:00, 269kB/s] #12: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.44s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 566k/566k [00:02<00:00, 210kB/s] #5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.53s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 613k/613k [00:02<00:00, 235kB/s] #13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.53s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 786k/786k [00:02<00:00, 342kB/s] #3: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.60s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 619k/619k [00:02<00:00, 254kB/s] #4: 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.68s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 737k/737k [00:02<00:00, 271kB/s] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 788k/788k [00:02<00:00, 285kB/s] #6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:05<00:00, 5.04s/obj] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 618k/618k [00:04<00:00, 153kB/s] #0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00, 5.69s/obj] ^CProcess ForkPoolWorker-47: Process ForkPoolWorker-46: Process ForkPoolWorker-36: Process ForkPoolWorker-38:██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:05<00:00, 5.04s/obj] Process ForkPoolWorker-37: Process ForkPoolWorker-45: Process ForkPoolWorker-39: Process ForkPoolWorker-43: Process ForkPoolWorker-33: Process ForkPoolWorker-18: Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File 
"/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/queues.py", line 365, in get res = self._reader.recv_bytes() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/connection.py", line 221, in recv_bytes buf = self._recv_bytes(maxlength) KeyboardInterrupt KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/connection.py", line 419, in _recv_bytes buf = self._recv(4) File "/usr/lib/python3.10/multiprocessing/connection.py", line 384, in _recv chunk = read(handle, remaining) KeyboardInterrupt Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt Process ForkPoolWorker-20: Process ForkPoolWorker-44: Process ForkPoolWorker-22: Traceback (most recent call last): File 
"/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) KeyboardInterrupt #1: 0%| | 0/2 [03:00<?, ?obj/s] Traceback (most recent call last): Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File 
"/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 659, in get_from_cache http_get( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 442, in http_get response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return 
function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) KeyboardInterrupt File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): KeyboardInterrupt #3: 0%| | 0/2 [03:00<?, ?obj/s] #11: 0%| | 0/1 [00:49<?, ?obj/s] Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return 
cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 723, in send history = [resp for resp in gen] File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 723, in <listcomp> history = [resp for resp in gen] File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 266, in resolve_redirects resp = self.send( File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) KeyboardInterrupt #5: 0%| | 0/1 [03:00<?, ?obj/s] KeyboardInterrupt Process ForkPoolWorker-42: Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", 
line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): KeyboardInterrupt #9: 0%| | 0/1 [00:51<?, ?obj/s] ``` ### Steps to reproduce the bug ```python """Kodak. Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import datasets NUMBER = 17 _DESCRIPTION = """\ The pictures below link to lossless, true color (24 bits per pixel, aka "full color") images. It is my understanding they have been released by the Eastman Kodak Company for unrestricted usage. Many sites use them as a standard test suite for compression testing, etc. Prior to this site, they were only available in the Sun Raster format via ftp. This meant that the images could not be previewed before downloading. Since their release, however, the lossless PNG format has been incorporated into all the major browsers. Since PNG supports 24-bit lossless color (which GIF and JPEG do not), it is now possible to offer this browser-friendly access to the images. 
""" _HOMEPAGE = "https://r0k.us/graphics/kodak/" _LICENSE = "GPLv3" _URLS = [ f"https://github.com/MohamedBakrAli/Kodak-Lossless-True-Color-Image-Suite/raw/master/PhotoCD_PCD0992/{i}.png" for i in range(1, 1 + NUMBER) ] class Kodak(datasets.GeneratorBasedBuilder): """Kodak datasets.""" VERSION = datasets.Version("0.0.1") def _info(self): features = datasets.Features( { "image": datasets.Image(), } ) return datasets.DatasetInfo( description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, ) def _split_generators(self, dl_manager): """Return SplitGenerators.""" file_paths = dl_manager.download_and_extract(_URLS) return [ datasets.SplitGenerator( name=datasets.Split.TEST, gen_kwargs={ "file_paths": file_paths, }, ), ] def _generate_examples(self, file_paths): """Yield examples.""" for file_path in file_paths: yield file_path, {"image": file_path} ``` ### Expected behavior When `len(_URLS) < 16`, it works. ```python In [3]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 3.02MB/s] [11/19/22 22:04:28] WARNING Using custom data configuration default builder.py:379 Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475... Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 593k/593k [00:00<00:00, 2.88MB/s] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 621k/621k [00:03<00:00, 166kB/s] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 531k/531k [00:01<00:00, 366kB/s] 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:13<00:00, 1.18it/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 3832.38it/s] Dataset kodak downloaded and prepared to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475. Subsequent calls will reuse this data. ``` ### Environment info - `datasets` version: 2.7.0 - Platform: Linux-6.0.8-arch1-1-x86_64-with-glibc2.36 - Python version: 3.10.8 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5270/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5269/comments
https://api.github.com/repos/huggingface/datasets/issues/5269/events
https://github.com/huggingface/datasets/issues/5269
1,456,485,799
I_kwDODunzps5W0DWn
5,269
Shell completions
{ "login": "Freed-Wu", "id": 32936898, "node_id": "MDQ6VXNlcjMyOTM2ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Freed-Wu", "html_url": "https://github.com/Freed-Wu", "followers_url": "https://api.github.com/users/Freed-Wu/followers", "following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}", "gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions", "organizations_url": "https://api.github.com/users/Freed-Wu/orgs", "repos_url": "https://api.github.com/users/Freed-Wu/repos", "events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}", "received_events_url": "https://api.github.com/users/Freed-Wu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2022-11-19T13:48:59
2022-11-21T15:06:15
2022-11-21T15:06:14
NONE
null
### Feature request Like <https://github.com/huggingface/huggingface_hub/issues/1197>, `datasets-cli` may need shell completions, too. ### Motivation See above. ### Your contribution Maybe.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5269/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5265/comments
https://api.github.com/repos/huggingface/datasets/issues/5265/events
https://github.com/huggingface/datasets/issues/5265
1,455,274,864
I_kwDODunzps5Wvbtw
5,265
Get an IterableDataset from a map-style Dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
2022-11-18T14:54:40
2023-02-01T16:36:03
2023-02-01T16:36:03
MEMBER
null
This is useful to leverage iterable-dataset-specific features like: - fast approximate shuffling - lazy map, filter, etc. Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset. Here are some ideas regarding the API: ```python # 1. # - consistency with load_dataset(..., streaming=True) # - gives intuition that map/filter/etc. are done on-the-fly ids = ds.stream() # 2. # - more explicit on the output type # - but maybe sounds like a conversion tool rather than a step in a processing pipeline ids = ds.as_iterable_dataset() ```
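A usage sketch of what such a conversion could look like; the method name `to_iterable_dataset` is used here illustratively, in the same spirit as the `stream()` / `as_iterable_dataset()` options above, and its semantics are an assumption.

```python
# Illustrative sketch only: assumes a conversion method named
# to_iterable_dataset() on map-style datasets, alongside the
# stream() / as_iterable_dataset() naming ideas listed above.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})
ids = ds.to_iterable_dataset()                         # lazy view over the Arrow data (assumed)
ids = ids.shuffle(seed=42, buffer_size=2)              # fast approximate shuffling
ids = ids.map(lambda x: {"text": x["text"].upper()})   # lazy map
for example in ids.take(2):
    print(example)
```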
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5265/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5264/comments
https://api.github.com/repos/huggingface/datasets/issues/5264/events
https://github.com/huggingface/datasets/issues/5264
1,455,252,906
I_kwDODunzps5WvWWq
5,264
`datasets` can't read a Parquet file in Python 3.9.13
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
15
2022-11-18T14:44:01
2022-11-22T11:18:08
2022-11-22T11:18:08
NONE
null
### Describe the bug I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private, but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset: ```python from datasets import load_dataset ds = load_dataset("bigcode/the-stack-dedup-pjj", data_dir="data/java", split="train", revision="v1.1.a1", use_auth_token=True) ``` ``` File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` It seems to be an issue with newer Python versions, because it works in these two environments: ``` - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ``` ``` - `datasets` version: 2.6.1 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ``` But not in this one: ``` - `datasets` version: 2.6.1 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ``` ### Steps to reproduce the bug Load the dataset in Python 3.9.13. ### Expected behavior Load the dataset without the pyarrow error. ### Environment info ``` - `datasets` version: 2.6.1 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ```
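A diagnostic sketch (not a fix) for narrowing this down: read the cached parquet file directly with pyarrow, outside of `datasets`, to check whether the file itself is truncated. The path below is a placeholder for the actual cached file reported in the traceback.

```python
# Diagnostic sketch: check the cached parquet file directly with pyarrow.
# The path is a placeholder; substitute the file that datasets/pyarrow reports.
import pyarrow.parquet as pq

path = "/path/to/cached/file.parquet"  # placeholder path

with open(path, "rb") as f:
    f.seek(-4, 2)                      # last 4 bytes of the file
    print("footer magic:", f.read())   # a complete parquet file ends with b"PAR1"

print(pq.ParquetFile(path).metadata)   # raises ArrowInvalid if the footer is missing
```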
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5264/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5263/comments
https://api.github.com/repos/huggingface/datasets/issues/5263/events
https://github.com/huggingface/datasets/issues/5263
1,455,252,626
I_kwDODunzps5WvWSS
5,263
Save a dataset in a determined number of shards
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
0
2022-11-18T14:43:54
2022-12-14T18:22:59
2022-12-14T18:22:59
MEMBER
null
This is useful to distribute the shards across training nodes. This can be implemented in `save_to_disk`, and it can also leverage multiprocessing to speed up the process.
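Until `save_to_disk` supports this directly, a manual sketch using the existing `Dataset.shard()` API; the output directory and shard naming are arbitrary choices.

```python
# Manual sketch of the requested behaviour with the existing API: write a
# dataset as a fixed number of shards, one save_to_disk call per shard.
import os
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})
num_shards = 8

for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    shard.save_to_disk(os.path.join("sharded_dataset", f"shard_{index:05d}"))
```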
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5263/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5262/comments
https://api.github.com/repos/huggingface/datasets/issues/5262/events
https://github.com/huggingface/datasets/issues/5262
1,455,171,100
I_kwDODunzps5WvCYc
5,262
AttributeError: 'Value' object has no attribute 'names'
{ "login": "emnaboughariou", "id": 102913847, "node_id": "U_kgDOBiJXNw", "avatar_url": "https://avatars.githubusercontent.com/u/102913847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emnaboughariou", "html_url": "https://github.com/emnaboughariou", "followers_url": "https://api.github.com/users/emnaboughariou/followers", "following_url": "https://api.github.com/users/emnaboughariou/following{/other_user}", "gists_url": "https://api.github.com/users/emnaboughariou/gists{/gist_id}", "starred_url": "https://api.github.com/users/emnaboughariou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emnaboughariou/subscriptions", "organizations_url": "https://api.github.com/users/emnaboughariou/orgs", "repos_url": "https://api.github.com/users/emnaboughariou/repos", "events_url": "https://api.github.com/users/emnaboughariou/events{/privacy}", "received_events_url": "https://api.github.com/users/emnaboughariou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-11-18T13:58:42
2022-11-22T10:09:24
2022-11-22T10:09:23
NONE
null
Hello, I'm trying to build a model for custom token classification. I followed the token classification course on Hugging Face while adapting the code to my work, and this message occurs: 'Value' object has no attribute 'names'. Here's my code: `raw_datasets` generates DatasetDict({ train: Dataset({ features: ['isDisf', 'pos', 'tokens', 'id'], num_rows: 14 }) }) `raw_datasets["train"][3]["isDisf"]` generates ['B_RM', 'I_RM', 'I_RM', 'B_RP', 'I_RP', 'O', 'O'] `dis_feature = raw_datasets["train"].features["isDisf"] dis_feature` generates Sequence(feature=Value(dtype='string', id=None), length=-1, id=None) and `label_names = dis_feature.feature.names label_names` generates AttributeError Traceback (most recent call last) <ipython-input-28-972fd54a869a> in <module> ----> 1 label_names = dis_feature.feature.names 2 label_names AttributeError: 'Value' object has no attribute 'names' Thank you for your help
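The error comes from `isDisf` being a `Sequence` of plain string `Value`s, which has no `names`. A sketch of one way to get `names` is to cast the column to `Sequence(ClassLabel(...))`; the label list below is inferred from the example row and may be incomplete.

```python
# Sketch: .names only exists on ClassLabel features, so cast the string tag
# column to Sequence(ClassLabel(...)) first. The label list is an assumption
# inferred from the example row shown above and may be incomplete.
from datasets import ClassLabel, Sequence

label_list = ["O", "B_RM", "I_RM", "B_RP", "I_RP"]
raw_datasets = raw_datasets.cast_column("isDisf", Sequence(ClassLabel(names=label_list)))

dis_feature = raw_datasets["train"].features["isDisf"]
label_names = dis_feature.feature.names
print(label_names)
```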
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5262/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5261/comments
https://api.github.com/repos/huggingface/datasets/issues/5261/events
https://github.com/huggingface/datasets/issues/5261
1,454,647,861
I_kwDODunzps5WtCo1
5,261
Add PubTables-1M
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
1
2022-11-18T07:56:36
2022-11-18T08:02:18
null
CONTRIBUTOR
null
### Name PubTables-1M ### Paper https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html ### Data https://github.com/microsoft/table-transformer ### Motivation Table Transformer is now available in 🤗 Transformers, and it was trained on PubTables-1M. It's a large dataset for table extraction and structure recognition in unstructured documents.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5261/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5260/comments
https://api.github.com/repos/huggingface/datasets/issues/5260/events
https://github.com/huggingface/datasets/issues/5260
1,453,921,697
I_kwDODunzps5WqRWh
5,260
consumer-finance-complaints dataset not loading
{ "login": "adiprasad", "id": 8098496, "node_id": "MDQ6VXNlcjgwOTg0OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8098496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adiprasad", "html_url": "https://github.com/adiprasad", "followers_url": "https://api.github.com/users/adiprasad/followers", "following_url": "https://api.github.com/users/adiprasad/following{/other_user}", "gists_url": "https://api.github.com/users/adiprasad/gists{/gist_id}", "starred_url": "https://api.github.com/users/adiprasad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adiprasad/subscriptions", "organizations_url": "https://api.github.com/users/adiprasad/orgs", "repos_url": "https://api.github.com/users/adiprasad/repos", "events_url": "https://api.github.com/users/adiprasad/events{/privacy}", "received_events_url": "https://api.github.com/users/adiprasad/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2022-11-17T20:10:26
2022-11-18T10:16:53
null
NONE
null
### Describe the bug Error during dataset loading ### Steps to reproduce the bug ``` >>> import datasets >>> cf_raw = datasets.load_dataset("consumer-finance-complaints") Downloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.42k/8.42k [00:00<00:00, 3.33MB/s] Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.60k/5.60k [00:00<00:00, 2.90MB/s] Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16.6k/16.6k [00:00<00:00, 510kB/s] Downloading and preparing dataset consumer-finance-complaints/default to /root/.cache/huggingface/datasets/consumer-finance-complaints/default/0.0.0/30e483d37fb4b25bb98cad1bfd2dc48f6ed6d1f3371eb4568c625a61d1a79b69... Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 511M/511M [00:04<00:00, 103MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 931, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1605177353, num_examples=2455765, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=2043641693, num_examples=3079747, shard_lengths=[721000, 656000, 788000, 846000, 68747], dataset_name='consumer-finance-complaints')}] ``` ### Expected behavior dataset should load ### Environment info >>> datasets.__version__ '2.7.0' Python 3.8.10 "Ubuntu 20.04.4 LTS"
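Until the recorded split sizes for this dataset are updated, one possible way past the `NonMatchingSplitsSizesError` is to skip the split-size verification. A minimal sketch, assuming `datasets` 2.7.0, where `load_dataset` still accepts the `ignore_verifications` flag:

```python
# Possible workaround sketch (not a fix for the stale metadata itself):
# skip split-size verification so the freshly downloaded data is accepted.
from datasets import load_dataset

cf_raw = load_dataset(
    "consumer-finance-complaints",
    ignore_verifications=True,  # bypasses NonMatchingSplitsSizesError
)
print(cf_raw["train"].num_rows)
```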
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5260/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5259/comments
https://api.github.com/repos/huggingface/datasets/issues/5259/events
https://github.com/huggingface/datasets/issues/5259
1,453,555,923
I_kwDODunzps5Wo4DT
5,259
datasets 2.7 introduces sharding error
{ "login": "DCNemesis", "id": 3616964, "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DCNemesis", "html_url": "https://github.com/DCNemesis", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "repos_url": "https://api.github.com/users/DCNemesis/repos", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2022-11-17T15:36:52
2022-12-24T01:44:02
2022-11-18T12:52:05
NONE
null
### Describe the bug dataset fails to load with runtime error `RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.` ### Steps to reproduce the bug With datasets[audio] 2.7 loaded, and logged into hugging face, `data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True)` creates the error. Full stack trace: ```--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-7-8cb9ca0f79f0>](https://localhost:8080/#) in <module> ----> 1 data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True) 5 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1745 try_from_hf_gcs=try_from_hf_gcs, 1746 use_auth_token=use_auth_token, -> 1747 num_proc=num_proc, 1748 ) 1749 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 824 verify_infos=verify_infos, 825 **prepare_split_kwargs, --> 826 **download_and_prepare_kwargs, 827 ) 828 # Sync info [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1554 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): 1555 super()._download_and_prepare( -> 1556 dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs 1557 ) 1558 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 911 try: 912 # Prepare split will record examples associated to the split --> 913 self._prepare_split(split_generator, **prepare_split_kwargs) 914 except OSError as e: 915 raise OSError( [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1362 fpath = path_join(self._output_dir, fname) 1363 -> 1364 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs) 1365 if num_input_shards <= 1 and num_proc is not None: 1366 logger.warning( [/usr/local/lib/python3.7/dist-packages/datasets/utils/sharding.py](https://localhost:8080/#) in _number_of_shards_in_gen_kwargs(gen_kwargs) 16 + "\n".join(f"\t- key {key} has length {length}" for key, length in lists_lengths.items()) 17 + "\nTo fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, " ---> 18 + "and use tuples otherwise. In the end there should only be one single list, or several lists with the same length." 
19 ) 20 ) RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.``` ### Expected behavior the dataset loads in datasets version 2.6.1 and should load with datasets 2.7 ### Environment info - `datasets` version: 2.7.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
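The RuntimeError above spells out the contract introduced with sharding support in `datasets` 2.7.0: `gen_kwargs` should contain exactly one kind of list (the data source to shard over), and every other argument should be a non-list. A rough sketch of a conforming builder, using hypothetical names (`audio_files`, `metadata`) that do not correspond to the actual bloom-speech script:

```python
# Hypothetical builder sketch: only one list in gen_kwargs, so sharding is
# unambiguous; all other arguments are non-lists (here, a dict).
import datasets


class BloomLikeBuilder(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"file": datasets.Value("string"), "text": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        audio_files = ["a.wav", "b.wav"]              # the only list
        metadata = {"a.wav": "hello", "b.wav": "hi"}  # dict, not a list
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"audio_files": audio_files, "metadata": metadata},
            )
        ]

    def _generate_examples(self, audio_files, metadata):
        for idx, path in enumerate(audio_files):
            yield idx, {"file": path, "text": metadata.get(path, "")}
```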
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5259/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5258/comments
https://api.github.com/repos/huggingface/datasets/issues/5258/events
https://github.com/huggingface/datasets/issues/5258
1,453,516,636
I_kwDODunzps5Woudc
5,258
Restore order of split names in dataset_info for canonical datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2022-11-17T15:13:15
2022-11-19T06:51:38
2022-11-19T06:51:37
MEMBER
null
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the datasets. I'm making a bulk edit to align the order of the splits appearing in the metadata info with the order appearing in the loading script.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5258/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5255/comments
https://api.github.com/repos/huggingface/datasets/issues/5255/events
https://github.com/huggingface/datasets/issues/5255
1,452,631,517
I_kwDODunzps5WlWXd
5,255
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
21
2022-11-17T03:22:22
2022-12-17T12:20:38
2022-12-17T12:20:37
MEMBER
null
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well: * [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) * [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) Would be nice to have a dataset for depth estimation. These datasets usually have three things: input image, depth map image, and depth mask (validity mask to indicate if a reading for a pixel is valid or not). Since we already have [semantic segmentation datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:image-segmentation&sort=downloads), I don't think we need any extended utilities to support this addition. Having this dataset would also allow us to author data preprocessing guides for depth estimation, particularly like the ones we have for other tasks ([example](https://huggingface.co/docs/datasets/image_classification)). Ccing @osanseviero @nateraw @NielsRogge Happy to work on adding it.
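For reference, a depth-estimation dataset along the lines described above could expose a schema like the sketch below. This is only illustrative; the column names and the fixed 480x640 resolution are assumptions, not the final layout of NYUDepth/DIODE on the Hub:

```python
# Illustrative schema sketch for a depth-estimation dataset (assumed names).
from datasets import Array2D, Features, Image, Value

features = Features(
    {
        "image": Image(),                                        # RGB input
        "depth_map": Image(),                                    # per-pixel depth
        "depth_mask": Array2D(shape=(480, 640), dtype="uint8"),  # 1 = valid reading
        "scene": Value("string"),
    }
)
```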
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5255/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5251/comments
https://api.github.com/repos/huggingface/datasets/issues/5251/events
https://github.com/huggingface/datasets/issues/5251
1,451,761,321
I_kwDODunzps5WiB6p
5,251
Docs are not generated after latest release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
null
[]
null
8
2022-11-16T14:59:31
2022-11-22T16:27:50
2022-11-22T16:27:50
MEMBER
null
After the latest `datasets` release, version 2.7.0, the docs were not generated. As we have changed the release procedure (so that we no longer push directly to the main branch), maybe we should also change the corresponding GitHub action: https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad44b02a85/.github/workflows/build_documentation.yml#L3-L8 Related to: - #5250 CC: @mishig25
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5251/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5249/comments
https://api.github.com/repos/huggingface/datasets/issues/5249/events
https://github.com/huggingface/datasets/issues/5249
1,451,692,247
I_kwDODunzps5WhxDX
5,249
Protect the main branch from inadvertent direct pushes
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-11-16T14:19:03
2022-11-16T14:36:14
null
MEMBER
null
We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push accidentally directly to the main branch. See context here: - d7c942228b8dcf4de64b00a3053dce59b335f618 To do: - [x] Protect main branch - Settings > Branches > Branch protection rules > main > Edit - [x] Check: Do not allow bypassing the above settings - The above settings will apply to administrators and custom roles with the "bypass branch protections" permission. - [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked] - Before, we could exceptionally merge a non-approved PR, using Administrator bypass - Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge; and pull request authors cannot approve their own pull requests. This could be an inconvenient in some exceptional circumstances when an urgent fix is needed - Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval - [ ] #5250 - So that direct pushes to main branch are no longer necessary
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5249/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5245/comments
https://api.github.com/repos/huggingface/datasets/issues/5245/events
https://github.com/huggingface/datasets/issues/5245
1,450,376,433
I_kwDODunzps5Wcvzx
5,245
Unable to rename columns in streaming dataset
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
7
2022-11-15T21:04:41
2022-11-28T12:53:24
2022-11-28T12:53:24
NONE
null
### Describe the bug Trying to rename a column in a streaming dataset destroys the features object. ### Steps to reproduce the bug The following code illustrates the error: ``` from datasets import load_dataset dataset = load_dataset('mc4', 'en', streaming=True, split='train') dataset.info.features # {'text': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)} dataset = dataset.rename_column("text", "content") dataset.info.features # This returned object is now None! ``` ### Expected behavior Renaming should just update the column name and leave the features object intact. ### Environment info datasets 2.6.1
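Until the fix is released, a rough workaround sketch is to rebuild the renamed schema by hand and reattach it, so that downstream code inspecting `info.features` still sees something sensible (this only restores the metadata; it does not change how examples are yielded):

```python
# Rough workaround sketch: rename_column on a streaming dataset drops
# info.features in datasets <= 2.6.1, so rebuild the renamed schema manually.
from datasets import Features, load_dataset

dataset = load_dataset("mc4", "en", streaming=True, split="train")
old_features = dataset.info.features               # still populated here
dataset = dataset.rename_column("text", "content")

dataset.info.features = Features(
    {("content" if name == "text" else name): dtype
     for name, dtype in old_features.items()}
)
print(dataset.info.features)
```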
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5245/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5244/comments
https://api.github.com/repos/huggingface/datasets/issues/5244/events
https://github.com/huggingface/datasets/issues/5244
1,450,019,225
I_kwDODunzps5WbYmZ
5,244
Allow dataset streaming from a private source when loading a dataset with a dataset loading script
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
5
2022-11-15T16:02:10
2022-11-23T14:02:30
null
NONE
null
### Feature request Add arguments to the function _get_authentication_headers_for_url_, like custom_endpoint and custom_token, in order to add flexibility when downloading files from a private source. It should also be possible to provide these arguments from the dataset loading script, maybe by passing them to the dl_manager. ### Motivation It is possible to share a dataset hosted on another platform by writing a dataset loading script. It works perfectly for publicly available resources. For resources that require authentication, you can provide a [download_custom](https://huggingface.co/docs/datasets/package_reference/builder_classes#datasets.DownloadManager) method to the download_manager. Unfortunately, this function doesn't work with **dataset streaming**. A solution that would allow dataset streaming from private sources is a more flexible _get_authentication_headers_for_url_ function. ### Your contribution Would you be interested in this improvement? If so, I could provide a PR. I've got something working locally, but it's not very clean; I'd need some guidance regarding integration.
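For illustration, the proposed extension might look roughly like the sketch below. The `custom_endpoint` and `custom_token` parameters are the names used in this request and do not exist in the current `datasets` API:

```python
# Hypothetical sketch of the proposed signature; not the current datasets API.
from typing import Dict, Optional


def get_authentication_headers_for_url(
    url: str,
    use_auth_token: Optional[str] = None,
    custom_endpoint: Optional[str] = None,  # proposed: non-Hub base URL
    custom_token: Optional[str] = None,     # proposed: token for that endpoint
) -> Dict[str, str]:
    if custom_endpoint is not None and url.startswith(custom_endpoint):
        return {"Authorization": f"Bearer {custom_token}"} if custom_token else {}
    # ...otherwise fall back to the existing Hub token handling...
    return {}
```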
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5244/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5243/comments
https://api.github.com/repos/huggingface/datasets/issues/5243/events
https://github.com/huggingface/datasets/issues/5243
1,449,523,962
I_kwDODunzps5WZfr6
5,243
Download only split data
{ "login": "capsabogdan", "id": 48530104, "node_id": "MDQ6VXNlcjQ4NTMwMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/48530104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capsabogdan", "html_url": "https://github.com/capsabogdan", "followers_url": "https://api.github.com/users/capsabogdan/followers", "following_url": "https://api.github.com/users/capsabogdan/following{/other_user}", "gists_url": "https://api.github.com/users/capsabogdan/gists{/gist_id}", "starred_url": "https://api.github.com/users/capsabogdan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capsabogdan/subscriptions", "organizations_url": "https://api.github.com/users/capsabogdan/orgs", "repos_url": "https://api.github.com/users/capsabogdan/repos", "events_url": "https://api.github.com/users/capsabogdan/events{/privacy}", "received_events_url": "https://api.github.com/users/capsabogdan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
4
2022-11-15T10:15:54
2023-01-05T09:01:07
null
NONE
null
### Feature request Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space, as it seems to download the entire dataset instead of only the part needed. common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", cache_dir="cache/path...", use_auth_token=True, download_config=DownloadConfig(delete_extracted='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') ) ### Motivation efficiency improvement ### Your contribution n/a
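If disk space is the main constraint, streaming mode avoids materializing the full download in the cache and only fetches what is actually iterated over. A minimal sketch:

```python
# Streaming-based sketch: nothing is written to the local cache up front.
from datasets import load_dataset

test_stream = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "en",
    split="test",
    streaming=True,
    use_auth_token=True,
)
for example in test_stream.take(5):
    print(example["sentence"])
```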
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5243/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5242/comments
https://api.github.com/repos/huggingface/datasets/issues/5242/events
https://github.com/huggingface/datasets/issues/5242
1,449,069,382
I_kwDODunzps5WXwtG
5,242
Failed Data Processing upon upload with zip file full of images
{ "login": "scrambled2", "id": 82735473, "node_id": "MDQ6VXNlcjgyNzM1NDcz", "avatar_url": "https://avatars.githubusercontent.com/u/82735473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scrambled2", "html_url": "https://github.com/scrambled2", "followers_url": "https://api.github.com/users/scrambled2/followers", "following_url": "https://api.github.com/users/scrambled2/following{/other_user}", "gists_url": "https://api.github.com/users/scrambled2/gists{/gist_id}", "starred_url": "https://api.github.com/users/scrambled2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scrambled2/subscriptions", "organizations_url": "https://api.github.com/users/scrambled2/orgs", "repos_url": "https://api.github.com/users/scrambled2/repos", "events_url": "https://api.github.com/users/scrambled2/events{/privacy}", "received_events_url": "https://api.github.com/users/scrambled2/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-11-15T02:47:52
2022-11-15T17:59:23
null
NONE
null
I went to AutoTrain and, under image classification, arrived where it was time to prepare my dataset. Screenshot below: ![image](https://user-images.githubusercontent.com/82735473/201814099-3cc5ff8a-88dc-4f5f-8140-f19560641d83.png) I chose the method 2 option. I have a CSV file with two columns and ~23,000 files. I uploaded this and chose the image_relpath and target columns. The image uploader said that I could only upload 10,000 individual images at a time, so the second option was to zip the images up and upload a zip archive, which I did. That all uploaded. Now I have the message below. Does the zip archive just get uncompressed on the Hugging Face end? What am I missing here? ![image](https://user-images.githubusercontent.com/82735473/201813838-b50dbbbc-34e8-4d73-9c07-12f9e41c62eb.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5242/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5232/comments
https://api.github.com/repos/huggingface/datasets/issues/5232/events
https://github.com/huggingface/datasets/issues/5232
1,446,294,165
I_kwDODunzps5WNLKV
5,232
Incompatible dill versions in datasets 2.6.1
{ "login": "vinaykakade", "id": 10574123, "node_id": "MDQ6VXNlcjEwNTc0MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/10574123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinaykakade", "html_url": "https://github.com/vinaykakade", "followers_url": "https://api.github.com/users/vinaykakade/followers", "following_url": "https://api.github.com/users/vinaykakade/following{/other_user}", "gists_url": "https://api.github.com/users/vinaykakade/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinaykakade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinaykakade/subscriptions", "organizations_url": "https://api.github.com/users/vinaykakade/orgs", "repos_url": "https://api.github.com/users/vinaykakade/repos", "events_url": "https://api.github.com/users/vinaykakade/events{/privacy}", "received_events_url": "https://api.github.com/users/vinaykakade/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-11-12T06:46:23
2022-11-14T08:24:43
2022-11-14T08:07:59
NONE
null
### Describe the bug datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6, which is required by the multiprocess dependency of datasets 2.6.1. This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but has not yet been released. Please release a new version of the datasets library to fix this. ### Steps to reproduce the bug 1. Create requirements.in with datasets (or datasets[s3]) as the only dependency 2. Run pip-compile 3. The output is as follows: ``` Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1)) Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6 Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1 There are incompatible versions in the resolved dependencies: dill<0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1)) dill>=0.3.6 (from multiprocess==0.70.14->datasets[s3]==2.6.1->-r requirements.in (line 1)) ``` ### Expected behavior pip-compile produces requirements.txt without any conflicts ### Environment info datasets version 2.6.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5232/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5231/comments
https://api.github.com/repos/huggingface/datasets/issues/5231/events
https://github.com/huggingface/datasets/issues/5231
1,445,883,267
I_kwDODunzps5WLm2D
5,231
Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly
{ "login": "plamb-viso", "id": 99206017, "node_id": "U_kgDOBenDgQ", "avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/plamb-viso", "html_url": "https://github.com/plamb-viso", "followers_url": "https://api.github.com/users/plamb-viso/followers", "following_url": "https://api.github.com/users/plamb-viso/following{/other_user}", "gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}", "starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions", "organizations_url": "https://api.github.com/users/plamb-viso/orgs", "repos_url": "https://api.github.com/users/plamb-viso/repos", "events_url": "https://api.github.com/users/plamb-viso/events{/privacy}", "received_events_url": "https://api.github.com/users/plamb-viso/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-11-11T18:54:36
2022-11-11T20:42:29
2022-11-11T18:59:50
NONE
null
I have a Dataset with two Features defined as follows: ``` 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'bbox': Array2D(dtype="int64", shape=(512, 4)), ``` On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of (batch_size, 3, 224, 224), for example. However, if I `dataset.set_format(type='torch', columns=['image', 'bbox'])`, these columns are cast to lists of tensors and the batch dimension is lost completely (the list length is the 3 from the first dimension instead). I'm currently digging through the datasets formatting code to try to find out why, but was curious whether someone knew an immediate solution for this.
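A small self-contained repro sketch (with smaller shapes than above) that contrasts the two `set_format` calls; the printed comments describe the behavior reported for datasets 2.6.1:

```python
# Repro sketch: compare formatting with and without an explicit column list.
import datasets
import numpy as np

features = datasets.Features(
    {
        "image": datasets.Array3D(dtype="int64", shape=(3, 4, 4)),
        "bbox": datasets.Array2D(dtype="int64", shape=(5, 4)),
    }
)
ds = datasets.Dataset.from_dict(
    {
        "image": [np.zeros((3, 4, 4), dtype="int64").tolist()] * 2,
        "bbox": [np.zeros((5, 4), dtype="int64").tolist()] * 2,
    },
    features=features,
)

ds.set_format(type="torch")
print(type(ds[0]["image"]))   # torch.Tensor of shape (3, 4, 4)

ds.set_format(type="torch", columns=["image", "bbox"])
print(type(ds[0]["image"]))   # list of tensors on datasets 2.6.1 (the bug)
```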
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5231/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5230/comments
https://api.github.com/repos/huggingface/datasets/issues/5230/events
https://github.com/huggingface/datasets/issues/5230
1,445,507,580
I_kwDODunzps5WKLH8
5,230
dataclasses error when importing the library in python 3.11
{ "login": "yonikremer", "id": 76044840, "node_id": "MDQ6VXNlcjc2MDQ0ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/76044840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yonikremer", "html_url": "https://github.com/yonikremer", "followers_url": "https://api.github.com/users/yonikremer/followers", "following_url": "https://api.github.com/users/yonikremer/following{/other_user}", "gists_url": "https://api.github.com/users/yonikremer/gists{/gist_id}", "starred_url": "https://api.github.com/users/yonikremer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonikremer/subscriptions", "organizations_url": "https://api.github.com/users/yonikremer/orgs", "repos_url": "https://api.github.com/users/yonikremer/repos", "events_url": "https://api.github.com/users/yonikremer/events{/privacy}", "received_events_url": "https://api.github.com/users/yonikremer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
2
2022-11-11T13:53:49
2022-11-14T20:51:44
2022-11-14T15:27:37
NONE
null
### Describe the bug When I import datasets using python 3.11 the dataclasses standard library raises the following error: `ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory` When I tried to import the library using the following jupyter notebook: ``` %%bash # create python 3.11 conda env conda create --yes --quiet -n myenv -c conda-forge python=3.11 # activate is source activate myenv # install pyarrow /opt/conda/envs/myenv/bin/python -m pip install --quiet --extra-index-url https://pypi.fury.io/arrow-nightlies/ \ --prefer-binary --pre pyarrow # install datasets /opt/conda/envs/myenv/bin/python -m pip install --quiet datasets ``` ``` # create a python file that only imports datasets with open("import_datasets.py", 'w') as f: f.write("import datasets") # run it with the env !/opt/conda/envs/myenv/bin/python import_datasets.py ``` I get the following error: ``` Traceback (most recent call last): File "/kaggle/working/import_datasets.py", line 1, in <module> import datasets File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/__init__.py", line 45, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/builder.py", line 91, in <module> @dataclass ^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1221, in dataclass return wrap(cls) ^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1211, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 959, in _process_class cls_fields.append(_get_field(cls, name, type, kw_only)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 816, in _get_field raise ValueError(f'mutable default {type(f.default)} for field ' ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory ``` This is probably due to one of the following changes in the [dataclasses standard library](https://docs.python.org/3/library/dataclasses.html) in version 3.11: 1. Changed in version 3.11: Instead of looking for and disallowing objects of type list, dict, or set, unhashable objects are now not allowed as default values. Unhashability is used to approximate mutability. 2. fields may optionally specify a default value, using normal Python syntax: ``` @dataclass class C: a: int # 'a' has no default value b: int = 0 # assign a default value for 'b' In this example, both a and b will be included in the added __init__() method, which will be defined as: def __init__(self, a: int, b: int = 0): ``` 3. Changed in version 3.11: If a field name is already included in the __slots__ of a base class, it will not be included in the generated __slots__ to prevent [overriding them](https://docs.python.org/3/reference/datamodel.html#datamodel-note-slots). Therefore, do not use __slots__ to retrieve the field names of a dataclass. Use [fields()](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields) instead. To be able to determine inherited slots, base class __slots__ may be any iterable, but not an iterator. 4. weakref_slot: If true (the default is False), add a slot named “__weakref__”, which is required to make an instance weakref-able. 
It is an error to specify weakref_slot=True without also specifying slots=True. [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) will be raised if a field without a default value follows a field with a default value. This is true whether this occurs in a single class, or as a result of class inheritance. ### Steps to reproduce the bug Steps to reproduce the behavior: 1. Go to [the notebook in Kaggle](https://www.kaggle.com/yonikremer/repreducing-issue) 2. Run both of the cells ### Expected behavior I'm expecting no issues. This error should not occur. ### Environment info Kaggle kernels, with default settings: pin to original environment, no accelerator.
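The change can be reproduced without `datasets` at all; the sketch below uses a hypothetical unhashable `Version` stand-in to trigger the same `ValueError` on Python 3.11 and shows the `default_factory` spelling that avoids it:

```python
# Standalone illustration of the Python 3.11 dataclasses change.
from dataclasses import dataclass, field


class Version:
    """Unhashable stand-in for datasets.utils.version.Version."""
    __hash__ = None  # unhashable, like a list or dict

    def __init__(self, version_str: str = "0.0.0"):
        self.version_str = version_str


# On Python 3.11 the following raises:
#   ValueError: mutable default <class 'Version'> for field version is not
#   allowed: use default_factory
#
# @dataclass
# class BuilderConfig:
#     version: Version = Version("0.0.0")

# The spelling accepted on 3.11:
@dataclass
class BuilderConfig:
    version: Version = field(default_factory=lambda: Version("0.0.0"))


print(BuilderConfig().version.version_str)  # -> 0.0.0
```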
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5230/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5230/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5229/comments
https://api.github.com/repos/huggingface/datasets/issues/5229/events
https://github.com/huggingface/datasets/issues/5229
1,445,121,028
I_kwDODunzps5WIswE
5,229
Type error when calling `map` over dataset containing 0-d tensors
{ "login": "phipsgabler", "id": 7878215, "node_id": "MDQ6VXNlcjc4NzgyMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7878215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phipsgabler", "html_url": "https://github.com/phipsgabler", "followers_url": "https://api.github.com/users/phipsgabler/followers", "following_url": "https://api.github.com/users/phipsgabler/following{/other_user}", "gists_url": "https://api.github.com/users/phipsgabler/gists{/gist_id}", "starred_url": "https://api.github.com/users/phipsgabler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phipsgabler/subscriptions", "organizations_url": "https://api.github.com/users/phipsgabler/orgs", "repos_url": "https://api.github.com/users/phipsgabler/repos", "events_url": "https://api.github.com/users/phipsgabler/events{/privacy}", "received_events_url": "https://api.github.com/users/phipsgabler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-11-11T08:27:28
2023-01-13T16:00:53
2023-01-13T16:00:53
NONE
null
### Describe the bug 0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset. ### Steps to reproduce the bug ``` ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_format("torch") ds.map(None) ``` ### Expected behavior Getting back `ds` without errors. ### Environment info Python 3.10.8 datasets 2.6. torch 1.13.0
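The error message itself comes from iterating over a zero-dimensional array or tensor, which is what `.with_format("torch")` yields for scalar columns; a minimal sketch of the underlying failure, independent of `datasets` (the exact internal code path is an assumption):

```python
import numpy as np
import torch

scalar_array = np.array(1)        # 0-d numpy array
scalar_tensor = torch.tensor(1)   # 0-d torch tensor, like a formatted scalar column value

for value in (scalar_array, scalar_tensor):
    try:
        iter(value)
    except TypeError as err:
        print(type(value).__name__, "->", err)
# ndarray -> iteration over a 0-d array
# Tensor -> iteration over a 0-d tensor
```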
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5229/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5228/comments
https://api.github.com/repos/huggingface/datasets/issues/5228/events
https://github.com/huggingface/datasets/issues/5228
1,444,763,105
I_kwDODunzps5WHVXh
5,228
Loading a dataset from the hub fails if you happen to have a folder of the same name
{ "login": "dakinggg", "id": 43149077, "node_id": "MDQ6VXNlcjQzMTQ5MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dakinggg", "html_url": "https://github.com/dakinggg", "followers_url": "https://api.github.com/users/dakinggg/followers", "following_url": "https://api.github.com/users/dakinggg/following{/other_user}", "gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}", "starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions", "organizations_url": "https://api.github.com/users/dakinggg/orgs", "repos_url": "https://api.github.com/users/dakinggg/repos", "events_url": "https://api.github.com/users/dakinggg/events{/privacy}", "received_events_url": "https://api.github.com/users/dakinggg/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-11-11T00:51:54
2022-11-14T18:17:34
null
NONE
null
### Describe the bug I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and then training on them. Additionally, they were writing their checkpoints to a folder called `glue`. This meant that once one worker had created the `glue` folder to write checkpoints to, the next worker to try to load a glue dataset would fail as shown in the minimal repro below. I'm not sure what the solution would be since I'm not super familiar with the `datasets` code, but I would expect `load_dataset` to not crash just because I have a local folder with the same name as a dataset from the hub. ### Steps to reproduce the bug ``` In [1]: import datasets In [2]: rte = datasets.load_dataset('glue', 'rte') Downloading and preparing dataset glue/rte to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad... Downloading data: 100%|██████████| 697k/697k [00:00<00:00, 6.08MB/s] Dataset glue downloaded and prepared to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. 100%|██████████| 3/3 [00:00<00:00, 773.81it/s] In [3]: import os In [4]: os.mkdir('glue') In [5]: rte = datasets.load_dataset('glue', 'rte') --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) <ipython-input-5-0d6b9ad8bbd0> in <cell line: 1>() ----> 1 rte = datasets.load_dataset('glue', 'rte') ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1717 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1495 download_config = download_config.copy() if download_config else DownloadConfig() 1496 download_config.use_auth_token = use_auth_token -> 1497 dataset_module = dataset_module_factory( 1498 path, 1499 revision=revision, ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1152 ).get_module() 1153 elif os.path.isdir(path): -> 1154 return LocalDatasetModuleFactoryWithoutScript( 1155 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode 1156 ).get_module() ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in get_module(self) 624 base_path = os.path.join(self.path, self.data_dir) if self.data_dir else self.path 625 patterns = ( --> 626 sanitize_patterns(self.data_files) if self.data_files is not None else get_data_patterns_locally(base_path) 627 ) 628 data_files = DataFilesDict.from_local_or_remote( ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/data_files.py in get_data_patterns_locally(base_path) 458 return _get_data_files_patterns(resolver) 459 except FileNotFoundError: --> 460 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 461 462 EmptyDatasetError: The directory at glue doesn't contain any data files ``` ### Expected behavior Dataset is still able to be loaded from the hub even if I have a local folder with the same name. ### Environment info datasets version: 2.6.1
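Pending a fix, one possible workaround, sketched under the assumption that the relative-path `os.path.isdir(path)` check shown in the traceback is what triggers local resolution, is to call `load_dataset` from a directory that does not contain a folder with the dataset's name:

```python
import contextlib
import os
import tempfile

import datasets


@contextlib.contextmanager
def run_outside_cwd():
    """Temporarily switch to an empty directory so a local folder such as ./glue
    cannot shadow the hub dataset of the same name. Workaround sketch, not an official API."""
    original_dir = os.getcwd()
    with tempfile.TemporaryDirectory() as tmp_dir:
        os.chdir(tmp_dir)
        try:
            yield
        finally:
            os.chdir(original_dir)


with run_outside_cwd():
    rte = datasets.load_dataset("glue", "rte")  # resolves on the hub, not the local ./glue folder
```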
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5228/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5227/comments
https://api.github.com/repos/huggingface/datasets/issues/5227/events
https://github.com/huggingface/datasets/issues/5227
1,444,620,094
I_kwDODunzps5WGyc-
5,227
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
{ "login": "ScottM-wizard", "id": 102275116, "node_id": "U_kgDOBhiYLA", "avatar_url": "https://avatars.githubusercontent.com/u/102275116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ScottM-wizard", "html_url": "https://github.com/ScottM-wizard", "followers_url": "https://api.github.com/users/ScottM-wizard/followers", "following_url": "https://api.github.com/users/ScottM-wizard/following{/other_user}", "gists_url": "https://api.github.com/users/ScottM-wizard/gists{/gist_id}", "starred_url": "https://api.github.com/users/ScottM-wizard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ScottM-wizard/subscriptions", "organizations_url": "https://api.github.com/users/ScottM-wizard/orgs", "repos_url": "https://api.github.com/users/ScottM-wizard/repos", "events_url": "https://api.github.com/users/ScottM-wizard/events{/privacy}", "received_events_url": "https://api.github.com/users/ScottM-wizard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-11-10T21:57:06
2022-11-10T22:05:43
2022-11-10T22:05:43
NONE
null
### Describe the bug From these lines: `from datasets import list_datasets, load_dataset` and `dataset = load_dataset("wikisql", "binary")` I get the error message: `datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files`. And yet 'wikisql' is reported to exist via `list_datasets()`. Any help appreciated. ### Steps to reproduce the bug Run the same two lines: `from datasets import list_datasets, load_dataset` followed by `dataset = load_dataset("wikisql", "binary")`; the same `EmptyDatasetError` is raised. ### Expected behavior The dataset should load. This same code used to work. ### Environment info macOS
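Given that the error message says "The directory at wikisql doesn't contain any data files", one quick thing to check (an assumption, based on the similar local-folder issue above) is whether a local folder named `wikisql` in the working directory is shadowing the hub dataset:

```python
import os

# If this prints True, load_dataset("wikisql", ...) is resolving the local folder
# instead of the hub dataset, which would explain the EmptyDatasetError.
print(os.path.isdir("wikisql"))
```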
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5227/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5226/comments
https://api.github.com/repos/huggingface/datasets/issues/5226/events
https://github.com/huggingface/datasets/issues/5226
1,444,385,148
I_kwDODunzps5WF5F8
5,226
Q: Memory release when removing the column?
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2022-11-10T18:35:27
2022-11-29T15:10:10
2022-11-29T15:10:10
NONE
null
### Describe the bug How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks? ```python from datasets import load_dataset common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True) # check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670 common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train']) common_voice.clear() # check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670 ``` I tried `gc.collect()` but it did not help. ### Steps to reproduce the bug 1. load the dataset 2. remove all the columns 3. check whether memory is reduced [link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567) ### Expected behavior Memory is released when I remove the columns. ### Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
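For reference, a minimal sketch of one way such a memory check could be done with `psutil` (the exact measurement used in the notebook is an assumption). Note that `datasets` memory-maps Arrow cache files from disk, so the loaded table itself typically contributes little to the process RSS, which may be why removing columns does not change the reported number:

```python
import gc

import psutil
from datasets import load_dataset


def rss_gb() -> float:
    """Resident set size of the current process, in GiB."""
    return psutil.Process().memory_info().rss / 1024**3


print(f"before load:               {rss_gb():.3f} GiB")
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
print(f"after load:                {rss_gb():.3f} GiB")

common_voice = common_voice.remove_columns(common_voice.column_names["train"])
gc.collect()
print(f"after remove_columns + gc: {rss_gb():.3f} GiB")
```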
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5226/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5225/comments
https://api.github.com/repos/huggingface/datasets/issues/5225/events
https://github.com/huggingface/datasets/issues/5225
1,444,305,183
I_kwDODunzps5WFlkf
5,225
Add video feature
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
null
[]
null
7
2022-11-10T17:36:11
2022-12-02T15:13:15
null
CONTRIBUTOR
null
### Feature request Add a `Video` feature to the library so folks can include videos in their datasets. ### Motivation Being able to load video data would be quite helpful. However, there are some challenges when it comes to videos: 1. Videos, unlike images, can end up being extremely large files 2. Often, when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference 3. Videos have an additional audio stream, which must be accounted for 4. The feature needs to be able to encode/decode videos (with the right video settings) from bytes. ### Your contribution I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dependency. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though. Would love to use this issue as a place to: - brainstorm ideas on how to do this right - list ways/examples to work around it for now (one possible sketch follows below) CC @sayakpaul @mariosasko @fcakyon
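As one example of a way to work around this for now (an illustration, not a committed design: the helper name and the `{"bytes": ...}` column layout are assumptions), raw video bytes can be decoded with PyAV, which accepts file-like objects:

```python
import io

import av  # PyAV
import numpy as np


def decode_video_bytes(video_bytes: bytes, max_frames: int = 16) -> np.ndarray:
    """Decode up to `max_frames` RGB frames from raw video bytes.

    Sketch of a possible interim workaround while no Video feature exists;
    a real feature would also need to handle audio streams and clip sampling.
    """
    with av.open(io.BytesIO(video_bytes)) as container:
        frames = []
        for i, frame in enumerate(container.decode(video=0)):
            if i >= max_frames:
                break
            frames.append(frame.to_ndarray(format="rgb24"))
    return np.stack(frames)  # shape: (num_frames, height, width, 3)


# Hypothetical usage with a dataset column storing raw bytes, e.g. {"video": {"bytes": ...}}:
# clip = decode_video_bytes(example["video"]["bytes"])
```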
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5225/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5225/timeline
null
null
null
null
false