| Column | Type | Range / values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 48–51 |
| id | int64 | 600M–1.08B |
| node_id | stringlengths | 18–24 |
| number | int64 | 2–3.45k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,640B |
| updated_at | int64 | 1,588B–1,640B |
| closed_at | int64 | 1,588B–1,640B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| draft | null | |
| pull_request | null | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/2129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2129/comments
https://api.github.com/repos/huggingface/datasets/issues/2129/events
https://github.com/huggingface/datasets/issues/2129
843,033,656
MDU6SXNzdWU4NDMwMzM2NTY=
2,129
How to train BERT model with next sentence prediction?
{ "login": "jnishi", "id": 836541, "node_id": "MDQ6VXNlcjgzNjU0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jnishi", "html_url": "https://github.com/jnishi", "followers_url": "https://api.github.com/users/jnishi/followers", "following_url": "https://api.github.com/users/jnishi/following{/other_user}", "gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnishi/subscriptions", "organizations_url": "https://api.github.com/users/jnishi/orgs", "repos_url": "https://api.github.com/users/jnishi/repos", "events_url": "https://api.github.com/users/jnishi/events{/privacy}", "received_events_url": "https://api.github.com/users/jnishi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.", "Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction.create_exapmles_from_document` can be applied to dataset object other than `TextDatasetForNextSentencePrediction` e.g. a `Dataset` object which is loaded by `datasets.load_dataset`?", "It would probably require a bit of tweaking, but you can apply it to a dataset, yes.\r\nThis should give you a new dataset with sentence pairs you can train a model on.\r\n\r\nYou can find the documentation about dataset processing here:\r\nhttps://huggingface.co/docs/datasets/processing.html#processing-data-with-map", "Thank you for detail information.\r\n\r\nI'll try to apply `create_examples_from_document` to `Dataset` object.\r\n" ]
1,617,000,483,000
1,617,253,120,000
1,617,253,120,000
NONE
null
Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2129/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2128/comments
https://api.github.com/repos/huggingface/datasets/issues/2128/events
https://github.com/huggingface/datasets/issues/2128
843,023,910
MDU6SXNzdWU4NDMwMjM5MTA=
2,128
Dialogue action slot name and value are reversed in MultiWoZ 2.2
{ "login": "adamlin120", "id": 31605305, "node_id": "MDQ6VXNlcjMxNjA1MzA1", "avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamlin120", "html_url": "https://github.com/adamlin120", "followers_url": "https://api.github.com/users/adamlin120/followers", "following_url": "https://api.github.com/users/adamlin120/following{/other_user}", "gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}", "starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions", "organizations_url": "https://api.github.com/users/adamlin120/orgs", "repos_url": "https://api.github.com/users/adamlin120/repos", "events_url": "https://api.github.com/users/adamlin120/events{/privacy}", "received_events_url": "https://api.github.com/users/adamlin120/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) " ]
1,616,999,642,000
1,617,194,881,000
1,617,194,881,000
CONTRIBUTOR
null
Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/2128/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2125/comments
https://api.github.com/repos/huggingface/datasets/issues/2125/events
https://github.com/huggingface/datasets/issues/2125
842,690,570
MDU6SXNzdWU4NDI2OTA1NzA=
2,125
Is dataset timit_asr broken?
{ "login": "kosuke-kitahara", "id": 42398050, "node_id": "MDQ6VXNlcjQyMzk4MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kosuke-kitahara", "html_url": "https://github.com/kosuke-kitahara", "followers_url": "https://api.github.com/users/kosuke-kitahara/followers", "following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}", "gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}", "starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions", "organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs", "repos_url": "https://api.github.com/users/kosuke-kitahara/repos", "events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}", "received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ", "@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem." ]
1,616,920,218,000
1,616,934,565,000
1,616,934,565,000
NONE
null
Using `timit_asr` dataset, I saw all records are the same. ``` python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]), num_examples=20) ``` `output` <img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png"> I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem. <img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2125/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2124/comments
https://api.github.com/repos/huggingface/datasets/issues/2124/events
https://github.com/huggingface/datasets/issues/2124
842,627,729
MDU6SXNzdWU4NDI2Mjc3Mjk=
2,124
Adding ScaNN library to do MIPS?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I haven't played with it (yet) but it sounds really cool !\r\n" ]
1,616,890,020,000
1,617,024,223,000
null
NONE
null
@lhoestq Hi I am thinking of adding this new google library to do the MIPS similar to **add_faiss_idex**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors. https://github.com/google-research/google-research/tree/master/scann ![image](https://user-images.githubusercontent.com/16892570/112738294-78ec9800-8fc6-11eb-9a5f-3d7ee5818e76.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2124/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2123/comments
https://api.github.com/repos/huggingface/datasets/issues/2123/events
https://github.com/huggingface/datasets/issues/2123
842,577,285
MDU6SXNzdWU4NDI1NzcyODU=
2,123
Problem downloading GEM wiki_auto_asset_turk dataset
{ "login": "mille-s", "id": 29705940, "node_id": "MDQ6VXNlcjI5NzA1OTQw", "avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mille-s", "html_url": "https://github.com/mille-s", "followers_url": "https://api.github.com/users/mille-s/followers", "following_url": "https://api.github.com/users/mille-s/following{/other_user}", "gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}", "starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mille-s/subscriptions", "organizations_url": "https://api.github.com/users/mille-s/orgs", "repos_url": "https://api.github.com/users/mille-s/repos", "events_url": "https://api.github.com/users/mille-s/events{/privacy}", "received_events_url": "https://api.github.com/users/mille-s/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n``` ", "Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.", "Is there an error message ?\r\nWhat stacktrace do you get if you interrupt the execution of the program while downloading ?", "Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!", "Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again" ]
1,616,870,488,000
1,620,836,118,000
1,620,836,117,000
NONE
null
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') dataset = load_dataset('gem', 'wiki_auto_asset_turk') ``` **Expected behavior:** I expect the dataset to start downloading (download bar appears and progresses toward 100%) **Actual behavior:** Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more: Downloading: 36.6kB [00:00, 37.2MB/s] Downloading: 41.7kB [00:00, ?B/s] Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d... ### Is this a regression? No, it was the first time I was trying to download this dataset (same for the other ones). ### Debug info - Python version: Python 3.8.2 - OS version: Windows 10 Family
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2123/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2120/comments
https://api.github.com/repos/huggingface/datasets/issues/2120/events
https://github.com/huggingface/datasets/issues/2120
841,954,521
MDU6SXNzdWU4NDE5NTQ1MjE=
2,120
dataset viewer does not work anymore
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for reporting :) We're looking into it", "Back up. " ]
1,616,764,933,000
1,616,773,942,000
1,616,773,942,000
NONE
null
Hi I normally use this link to see all datasets and how I can load them https://huggingface.co/datasets/viewer/ Now I am getting 502 Bad Gateway nginx/1.18.0 (Ubuntu) could you bring this webpage back ? this was very helpful @lhoestq thanks for your help
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2120/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2117/comments
https://api.github.com/repos/huggingface/datasets/issues/2117/events
https://github.com/huggingface/datasets/issues/2117
841,535,283
MDU6SXNzdWU4NDE1MzUyODM=
2,117
load_metric from local "glue.py" meet error 'NoneType' object is not callable
{ "login": "Frankie123421", "id": 54012361, "node_id": "MDQ6VXNlcjU0MDEyMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Frankie123421", "html_url": "https://github.com/Frankie123421", "followers_url": "https://api.github.com/users/Frankie123421/followers", "following_url": "https://api.github.com/users/Frankie123421/following{/other_user}", "gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}", "starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions", "organizations_url": "https://api.github.com/users/Frankie123421/orgs", "repos_url": "https://api.github.com/users/Frankie123421/repos", "events_url": "https://api.github.com/users/Frankie123421/events{/privacy}", "received_events_url": "https://api.github.com/users/Frankie123421/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@Frankie123421 what was the resolution to this?", "> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric", "thank you!" ]
1,616,726,122,000
1,629,927,845,000
1,616,726,426,000
NONE
null
actual_task = "mnli" if task == "mnli-mm" else task dataset = load_dataset(path='/home/glue.py', name=actual_task) metric = load_metric(path='/home/glue.py', name=actual_task) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-7ab77a465d81> in <module> 1 actual_task = "mnli" if task == "mnli-mm" else task 2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task) ----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task) ~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) 508 keep_in_memory=keep_in_memory, 509 experiment_id=experiment_id, --> 510 **metric_init_kwargs, 511 ) 512 TypeError: 'NoneType' object is not callable Please help
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2117/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2116/comments
https://api.github.com/repos/huggingface/datasets/issues/2116/events
https://github.com/huggingface/datasets/issues/2116
841,481,292
MDU6SXNzdWU4NDE0ODEyOTI=
2,116
Creating custom dataset results in error while calling the map() function
{ "login": "GeetDsa", "id": 13940397, "node_id": "MDQ6VXNlcjEzOTQwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GeetDsa", "html_url": "https://github.com/GeetDsa", "followers_url": "https://api.github.com/users/GeetDsa/followers", "following_url": "https://api.github.com/users/GeetDsa/following{/other_user}", "gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}", "starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions", "organizations_url": "https://api.github.com/users/GeetDsa/orgs", "repos_url": "https://api.github.com/users/GeetDsa/repos", "events_url": "https://api.github.com/users/GeetDsa/events{/privacy}", "received_events_url": "https://api.github.com/users/GeetDsa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over inheritance\" approach with a simple wrapper class that delegates calls to a wrapped `Dataset` (map, etc.). Btw, the library offers the `datasets.Dataset.from_pandas` class method to directly create a `datasets.Dataset` from the dataframe." ]
1,616,719,066,000
1,617,201,032,000
1,617,201,032,000
NONE
null
calling `map()` of `datasets` library results into an error while defining a Custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" self.samples = sentences def __len__(self): "Denotes the total number of samples" return len(self.samples) def __getitem__(self, index): "Generates one sample of data" # Select sample # Load data and get label samples = self.samples[index] return samples def preprocess_function_train(examples): inputs = examples labels = [example+tokenizer.eos_token for example in examples ] inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True) labels = tokenizer(labels, max_length=30, padding=True, truncation=True) model_inputs = inputs model_inputs["labels"] = labels["input_ids"] print("about to return") return model_inputs ##train["sentence"] is dataframe column train_dataset = MyDataset(train['sentence'].values.tolist()) train_dataset = train_dataset.map( preprocess_function, batched = True, batch_size=32 ) ``` Stack trace of error: ``` Traceback (most recent call last): File "dir/train_generate.py", line 362, in <module> main() File "dir/train_generate.py", line 245, in main train_dataset = train_dataset.map( File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map return self._map_single( File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper unformatted_columns = set(self.column_names) - set(self._format_columns or []) File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names return self._data.column_names AttributeError: 'MyDataset' object has no attribute '_data' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2116/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2115/comments
https://api.github.com/repos/huggingface/datasets/issues/2115/events
https://github.com/huggingface/datasets/issues/2115
841,283,974
MDU6SXNzdWU4NDEyODM5NzQ=
2,115
The datasets.map() implementation modifies the datatype of os.environ object
{ "login": "leleamol", "id": 19983848, "node_id": "MDQ6VXNlcjE5OTgzODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leleamol", "html_url": "https://github.com/leleamol", "followers_url": "https://api.github.com/users/leleamol/followers", "following_url": "https://api.github.com/users/leleamol/following{/other_user}", "gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}", "starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leleamol/subscriptions", "organizations_url": "https://api.github.com/users/leleamol/orgs", "repos_url": "https://api.github.com/users/leleamol/repos", "events_url": "https://api.github.com/users/leleamol/events{/privacy}", "received_events_url": "https://api.github.com/users/leleamol/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,616,704,159,000
1,616,771,632,000
1,616,771,632,000
NONE
null
In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'. This causes following function calls to fail as follows: ` x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) TypeError: get() takes no keyword arguments ` It looks like the following line in datasets.map implementation introduced this functionality. https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421 Here is the test script to reproduce this error. ``` from datasets import load_dataset from transformers import AutoTokenizer import os def test_train(): model_checkpoint = "distilgpt2" datasets = load_dataset('wikitext', 'wikitext-2-raw-v1') tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) tokenizer.pad_token = tokenizer.eos_token def tokenize_function(examples): y = tokenizer(examples['text'], truncation=True, max_length=64) return y x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None) print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}") print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}") datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"]) print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}") x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}") if __name__ == "__main__": test_train() ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2115/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2108/comments
https://api.github.com/repos/huggingface/datasets/issues/2108/events
https://github.com/huggingface/datasets/issues/2108
840,181,055
MDU6SXNzdWU4NDAxODEwNTU=
2,108
Is there a way to use a GPU only when training an Index in the process of add_faisis_index?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[]
1,616,621,536,000
1,616,653,903,000
null
NONE
null
Motivation - Some FAISS indexes like IVF consist of the training step that clusters the dataset into a given number of indexes. It would be nice if we can use a GPU to do the training step and covert the index back to CPU as mention in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2108/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2106/comments
https://api.github.com/repos/huggingface/datasets/issues/2106/events
https://github.com/huggingface/datasets/issues/2106
839,084,264
MDU6SXNzdWU4MzkwODQyNjQ=
2,106
WMT19 Dataset for Kazakh-English is not formatted correctly
{ "login": "trina731", "id": 22580542, "node_id": "MDQ6VXNlcjIyNTgwNTQy", "avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trina731", "html_url": "https://github.com/trina731", "followers_url": "https://api.github.com/users/trina731/followers", "following_url": "https://api.github.com/users/trina731/following{/other_user}", "gists_url": "https://api.github.com/users/trina731/gists{/gist_id}", "starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trina731/subscriptions", "organizations_url": "https://api.github.com/users/trina731/orgs", "repos_url": "https://api.github.com/users/trina731/repos", "events_url": "https://api.github.com/users/trina731/events{/privacy}", "received_events_url": "https://api.github.com/users/trina731/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is only `kk` text and must be appended at the end of the `kk` text of the **previous** line\r\n- L1247 and L1248 are only `kk` texts and must be inserted at the **beginning** of the `kk` text of the next line\r\n- (and there are many others)\r\n\r\nIt would be nice to have a corrected version of this file ! The file is available in the `wmt/news-commentary` repository on the Datasets Hub here:\r\nhttps://huggingface.co/datasets/wmt/news-commentary/tree/main/v14/training\r\n\r\nThen maybe we can notify the WMT authors and host the corrected version somewhere" ]
1,616,530,487,000
1,616,708,180,000
null
NONE
null
In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here: > Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді. > > Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды. > > Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды. As you can see, line 95 has only the Kazakh translation which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be one off rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Huggingface. By running this code ``` import datasets from datasets import load_dataset dataset = load_dataset('wmt19', 'kk-en') for key in dataset['train']['translation']: if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']: print(key['en']) print(key['kk']) break ``` we get: > 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды. > The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one. Please let me know if there's you have any ideas to fix this one-off error from the dataset or if this can be fixed by Huggingface.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2106/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2105/comments
https://api.github.com/repos/huggingface/datasets/issues/2105/events
https://github.com/huggingface/datasets/issues/2105
839,059,226
MDU6SXNzdWU4MzkwNTkyMjY=
2,105
Request to remove S2ORC dataset
{ "login": "kyleclo", "id": 13603748, "node_id": "MDQ6VXNlcjEzNjAzNzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyleclo", "html_url": "https://github.com/kyleclo", "followers_url": "https://api.github.com/users/kyleclo/followers", "following_url": "https://api.github.com/users/kyleclo/following{/other_user}", "gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions", "organizations_url": "https://api.github.com/users/kyleclo/orgs", "repos_url": "https://api.github.com/users/kyleclo/repos", "events_url": "https://api.github.com/users/kyleclo/events{/privacy}", "received_events_url": "https://api.github.com/users/kyleclo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?", "Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there.\r\n\r\nIs it OK? Are you planning to eventually delete it? Thank you.", "Hi! Sorry I missed @yjernite 's previous message, thanks for responding! \r\n\r\nIs there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it? " ]
1,616,528,586,000
1,628,104,682,000
null
NONE
null
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2105/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/2105/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2104/comments
https://api.github.com/repos/huggingface/datasets/issues/2104/events
https://github.com/huggingface/datasets/issues/2104
839,027,834
MDU6SXNzdWU4MzkwMjc4MzQ=
2,104
Trouble loading wiki_movies
{ "login": "adityaarunsinghal", "id": 35391599, "node_id": "MDQ6VXNlcjM1MzkxNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/35391599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adityaarunsinghal", "html_url": "https://github.com/adityaarunsinghal", "followers_url": "https://api.github.com/users/adityaarunsinghal/followers", "following_url": "https://api.github.com/users/adityaarunsinghal/following{/other_user}", "gists_url": "https://api.github.com/users/adityaarunsinghal/gists{/gist_id}", "starred_url": "https://api.github.com/users/adityaarunsinghal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adityaarunsinghal/subscriptions", "organizations_url": "https://api.github.com/users/adityaarunsinghal/orgs", "repos_url": "https://api.github.com/users/adityaarunsinghal/repos", "events_url": "https://api.github.com/users/adityaarunsinghal/events{/privacy}", "received_events_url": "https://api.github.com/users/adityaarunsinghal/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```", "Thanks a lot! That solved it and I was able to upload a model trained on it as well :)" ]
1,616,525,994,000
1,617,664,646,000
null
NONE
null
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py` Trying to do `python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wiki_movies \` also gives the same error. Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2104/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2103/comments
https://api.github.com/repos/huggingface/datasets/issues/2103/events
https://github.com/huggingface/datasets/issues/2103
838,946,916
MDU6SXNzdWU4Mzg5NDY5MTY=
2,103
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
{ "login": "samsontmr", "id": 15007950, "node_id": "MDQ6VXNlcjE1MDA3OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samsontmr", "html_url": "https://github.com/samsontmr", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "repos_url": "https://api.github.com/users/samsontmr/repos", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease comment if you'd like to improve this and open a PR :)" ]
1,616,519,889,000
1,617,719,999,000
1,617,719,999,000
NONE
null
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ``` "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n ``` @lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2103/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2099/comments
https://api.github.com/repos/huggingface/datasets/issues/2099/events
https://github.com/huggingface/datasets/issues/2099
838,523,819
MDU6SXNzdWU4Mzg1MjM4MTk=
2,099
load_from_disk takes a long time to load local dataset
{ "login": "samsontmr", "id": 15007950, "node_id": "MDQ6VXNlcjE1MDA3OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samsontmr", "html_url": "https://github.com/samsontmr", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "repos_url": "https://api.github.com/users/samsontmr/repos", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?", "It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization.\r\n\r\n```\r\ndef add_len_and_seq(example):\r\n end_idx = example['input_ids'].index(SEP)\r\n example['actual_len'] = end_idx-1\r\n seq_len = len(example['input_ids'])\r\n \r\n\r\n example['seq'] = [PAD_ID] + [np.uint8(example['some_integer'])]*(end_idx-1) + [PAD_ID]*(seq_len-end_idx)\r\n \r\n return example\r\n```\r\n", "Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type.\r\nDoes this work if you remove the `np.uint8` and use python integers instead ?", "yup I casted it to `np.uint8` outside the function where it was defined. It was originally using python integers.", "Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`.\r\n\r\nUpdate: I tried creating lists of `int8`s and got the same result.", "Yes this is a known issue: #625 \r\nWe're working on making the precision kept for numpy :)\r\nTo specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`", "Do you know what step is taking forever in the code ?\r\nWhat happens if you interrupt the execution of the dataset loading ?", "After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_proc` for smaller cache files :)\r\n\r\nMaybe this can be highlighted somewhere in the docs." ]
1,616,491,717,000
1,616,519,536,000
1,616,519,536,000
NONE
null
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though). Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers? Tagging @lhoestq since you seem to be working on these issues and PRs :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2099/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2098/comments
https://api.github.com/repos/huggingface/datasets/issues/2098/events
https://github.com/huggingface/datasets/issues/2098
838,447,959
MDU6SXNzdWU4Mzg0NDc5NTk=
2,098
SQuAD version
{ "login": "h-peng17", "id": 39556019, "node_id": "MDQ6VXNlcjM5NTU2MDE5", "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h-peng17", "html_url": "https://github.com/h-peng17", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "repos_url": "https://api.github.com/users/h-peng17/repos", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55", "Got it. Thank you~" ]
1,616,485,674,000
1,616,752,134,000
1,616,752,134,000
NONE
null
Hi~ I want train on squad dataset. What's the version of the squad? Is it 1.1 or 1.0? I'm new in QA, I don't find some descriptions about it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2098/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2096/comments
https://api.github.com/repos/huggingface/datasets/issues/2096/events
https://github.com/huggingface/datasets/issues/2096
838,038,379
MDU6SXNzdWU4MzgwMzgzNzk=
2,096
CoNLL 2003 dataset not including German
{ "login": "rxian", "id": 8406802, "node_id": "MDQ6VXNlcjg0MDY4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rxian", "html_url": "https://github.com/rxian", "followers_url": "https://api.github.com/users/rxian/followers", "following_url": "https://api.github.com/users/rxian/following{/other_user}", "gists_url": "https://api.github.com/users/rxian/gists{/gist_id}", "starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rxian/subscriptions", "organizations_url": "https://api.github.com/users/rxian/orgs", "repos_url": "https://api.github.com/users/rxian/repos", "events_url": "https://api.github.com/users/rxian/events{/privacy}", "received_events_url": "https://api.github.com/users/rxian/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,616,441,036,000
1,617,097,535,000
null
NONE
null
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with! I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it could be found in some places on the internet such as GitHub? I could help adding the German data to the hub, unless there are some copyright issues that I am unaware of... This is considering that many work use the union of CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`. E.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf). ## Adding a Dataset - **Name:** CoNLL 2003 German - **Paper:** https://www.aclweb.org/anthology/W03-0419/ - **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2096/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2092/comments
https://api.github.com/repos/huggingface/datasets/issues/2092/events
https://github.com/huggingface/datasets/issues/2092
836,984,043
MDU6SXNzdWU4MzY5ODQwNDM=
2,092
How to disable making arrow tables in load_dataset ?
{ "login": "Jeevesh8", "id": 48825663, "node_id": "MDQ6VXNlcjQ4ODI1NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jeevesh8", "html_url": "https://github.com/Jeevesh8", "followers_url": "https://api.github.com/users/Jeevesh8/followers", "following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}", "gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions", "organizations_url": "https://api.github.com/users/Jeevesh8/orgs", "repos_url": "https://api.github.com/users/Jeevesh8/repos", "events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}", "received_events_url": "https://api.github.com/users/Jeevesh8/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !", "People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n", "@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?", "Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.", "@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?", "We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub" ]
1,616,302,207,000
1,616,783,860,000
null
NONE
null
Is there a way to disable the construction of arrow tables, or to build them on the fly as the dataset is being used?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2092/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2089/comments
https://api.github.com/repos/huggingface/datasets/issues/2089/events
https://github.com/huggingface/datasets/issues/2089
836,788,019
MDU6SXNzdWU4MzY3ODgwMTk=
2,089
Add documentation for dataset README.md files
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)", "@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.", "We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them", "@lhoestq what is the status on this? Did you add documentation?", "Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources", "@lhoestq is there something like this form Models?", "I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this" ]
1,616,240,678,000
1,626,111,700,000
null
CONTRIBUTOR
null
Hi, the dataset README files have special headers. Somehow, documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered, imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which values should licenses have? What do I say when it is a custom license? Should I add a link? - how should I choose size_categories? What are valid ranges? - what are valid task_categories? Thanks Philip
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2089/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2084/comments
https://api.github.com/repos/huggingface/datasets/issues/2084/events
https://github.com/huggingface/datasets/issues/2084
835,750,671
MDU6SXNzdWU4MzU3NTA2NzE=
2,084
CUAD - Contract Understanding Atticus Dataset
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "+1 on this request" ]
1,616,146,063,000
1,618,563,044,000
1,618,563,044,000
CONTRIBUTOR
null
## Adding a Dataset - **Name:** CUAD - Contract Understanding Atticus Dataset - **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community. - **Paper:** https://arxiv.org/abs/2103.06268 - **Data:** https://github.com/TheAtticusProject/cuad/ - **Motivation:** good domain specific datasets are valuable Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2084/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2083/comments
https://api.github.com/repos/huggingface/datasets/issues/2083/events
https://github.com/huggingface/datasets/issues/2083
835,695,425
MDU6SXNzdWU4MzU2OTU0MjU=
2,083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\r\n\r\n``` \r\nThe order is important because the resulting dataset inherits the schema metadata of the first dataset passed to the `concatenate_datasets(...)` function (`pa.concat_tables` [docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html)). I'll try to fix this ASAP." ]
1,616,142,588,000
1,617,960,333,000
1,617,960,333,000
MEMBER
null
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it should not be, IMO. Here is a Google Colab notebook to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2083/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2080/comments
https://api.github.com/repos/huggingface/datasets/issues/2080/events
https://github.com/huggingface/datasets/issues/2080
835,023,000
MDU6SXNzdWU4MzUwMjMwMDA=
2,080
Multidimensional arrays in a Dataset
{ "login": "vermouthmjl", "id": 3142085, "node_id": "MDQ6VXNlcjMxNDIwODU=", "avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vermouthmjl", "html_url": "https://github.com/vermouthmjl", "followers_url": "https://api.github.com/users/vermouthmjl/followers", "following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}", "gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}", "starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions", "organizations_url": "https://api.github.com/users/vermouthmjl/orgs", "repos_url": "https://api.github.com/users/vermouthmjl/repos", "events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}", "received_events_url": "https://api.github.com/users/vermouthmjl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset)\r\n```\r\n\r\nThis will work but to use it with the torch formatter you must specify the `Array2D` feature type in order to tell the shape:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset, features=Features({\r\n \"bbox\": Array2D(shape=(3, 4), dtype=\"int64\"),\r\n \"input_ids\": Value(\"int64\")\r\n}))\r\ndataset.set_format(\"torch\")\r\nprint(dataset[0]['bbox'])\r\n# tensor([[1, 2, 3, 4],\r\n# [1, 2, 3, 4],\r\n# [1, 2, 3, 4]])\r\n```\r\nIf you don't specify the `Array2D` feature type, then the inferred type will be Sequence(Sequence(Value(\"int64\"))) and therefore the torch formatter will return list of tensors", "Thanks for the explanation. \r\nWith my original DataFrame, I did\r\n```\r\ndataset = dataset.to_dict(\"list\")\r\n```\r\nand then the rest of the transformation from dictionary works just fine." ]
1,616,084,954,000
1,616,676,413,000
1,616,676,413,000
NONE
null
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM, which is available in `transformers`. This model requires as input the bounding boxes of each token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. The following code results in a conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`) ``` from datasets import Dataset import pandas as pd import numpy as np dataset = pd.DataFrame({ 'bbox': [ np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]) ], 'input_ids': [1, 2, 3, 4] }) dataset = Dataset.from_pandas(dataset) ``` Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put a column of 2-D pytorch tensors in a formatted dataset, but I can only have a list of 1-D tensors, or a list of arrays, or a list of lists. ``` import torch from datasets import Dataset import pandas as pd dataset = pd.DataFrame({ 'bbox': [ [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]] ], 'input_ids': [1, 2, 3, 4] }) dataset = Dataset.from_pandas(dataset) def test(examples): return {'bbbox': torch.Tensor(examples['bbox'])} dataset = dataset.map(test) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) def test2(examples): return {'bbbox': torch.stack(examples['bbox'])} dataset = dataset.map(test2) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) ``` Is it possible to support n-D arrays/tensors in datasets? It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2080/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2078/comments
https://api.github.com/repos/huggingface/datasets/issues/2078/events
https://github.com/huggingface/datasets/issues/2078
834,694,819
MDU6SXNzdWU4MzQ2OTQ4MTk=
2,078
MemoryError when computing WER metric
{ "login": "diego-fustes", "id": 5707233, "node_id": "MDQ6VXNlcjU3MDcyMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/diego-fustes", "html_url": "https://github.com/diego-fustes", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "repos_url": "https://api.github.com/users/diego-fustes/repos", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compute the WER is defined here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/metrics/wer/wer.py#L93-L94", "Hi,\r\n\r\nI've just pushed a pull request that is related to this issue https://github.com/huggingface/datasets/pull/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at the end. ", "I see, this was solved by other thread. Ok, let me know if you want to switch the implementation for any reason :)", "Thanks for diving into this anyway ^^'\r\nAs you said this actually got solved a few days ago", "Someone created an issue https://github.com/jitsi/jiwer/issues/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs of out memory because it's trying to compute the WER over (too many) test samples?", "Hi !\r\n\r\nIt's computed iteratively so not sure what could go wrong\r\n\r\nhttps://github.com/huggingface/datasets/blob/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8/metrics/wer/wer.py#L100-L106\r\n\r\n@NiklasHoltmeyer what version of `datasets` are you running ?\r\n", "One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`?\r\n\r\nAs current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each. \r\n\r\nThis could be the case, for example, with a single string with all sentences:\r\n```python\r\nresult[\"predicted\"] = \"One sentence. Other sentence.\"\r\n```\r\nor with a __double__ nested list of sentence lists\r\n```python\r\nresult[\"predicted\"] = [[ [\"One sentence.\"], [\"Other sentence\"] ]]\r\n```\r\n\r\nThe user should check the dimensions of the data structure passed to `predictions` and `references`.", "Hi all,\r\n\r\nin my case I was using and older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise. I think that with the latest implementation of datasets, or by using the alternative WER function that I've contributed on this [pull request](https://github.com/huggingface/datasets/pull/2169) there shouldn't be memory errors.", "@lhoestq i was using Datasets==1.5.0 with 1.6.1 it worked (atleast the first run) but 1.5.0 is not compatible with my preprocessing. 
i cant save my dataset to a parquet file while using the latest datasets version\r\n\r\n-> \r\n```\r\n File \"../preprocess_dataset.py\", line 132, in <module>\r\n pq.write_table(train_dataset.data, f'{resampled_data_dir}/{data_args.dataset_config_name}.train.parquet')\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 1674, in write_table\r\n writer.write_table(table, row_group_size=row_group_size)\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 588, in write_table\r\n self.writer.write_table(table, row_group_size=row_group_size)\r\nTypeError: Argument 'table' has incorrect type (expected pyarrow.lib.Table, got ConcatenationTable)\r\n``` \r\n\r\nif i do \r\n```\r\nimport pyarrow.parquet as pq\r\n...\r\n...\r\npq.write_table(train_dataset.data, 'train.parquet')\r\npq.write_table(eval_dataset.data, 'eval.parquet')\r\n```\r\n\r\nwhile using 1.6.1. and its working with 1.5.0\r\n", "Hi ! You can pass dataset.data.table instead of dataset.data to pq.write_table", "This seems to be working so far! Thanks!" ]
1,616,067,005,000
1,619,857,909,000
1,617,693,643,000
NONE
null
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module> print(wer.compute(predictions=result["predicted"], references=result["target"])) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute return wer(references, predictions) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer truth, hypothesis, truth_transform, hypothesis_transform, **kwargs File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures H, S, D, I = _get_operation_counts(truth, hypothesis) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts editops = Levenshtein.editops(source_string, destination_string) MemoryError` My system has more than 10GB of available RAM. Looking at the code, I think that it could be related to the way jiwer does the calculation, as it is pasting all the sentences in a single string before calling Levenshtein editops function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2078/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2076/comments
https://api.github.com/repos/huggingface/datasets/issues/2076/events
https://github.com/huggingface/datasets/issues/2076
834,445,296
MDU6SXNzdWU4MzQ0NDUyOTY=
2,076
Issue: Dataset download error
{ "login": "XuhuiZhou", "id": 20436061, "node_id": "MDQ6VXNlcjIwNDM2MDYx", "avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuhuiZhou", "html_url": "https://github.com/XuhuiZhou", "followers_url": "https://api.github.com/users/XuhuiZhou/followers", "following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}", "gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions", "organizations_url": "https://api.github.com/users/XuhuiZhou/orgs", "repos_url": "https://api.github.com/users/XuhuiZhou/repos", "events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}", "received_events_url": "https://api.github.com/users/XuhuiZhou/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.", "It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and then update the dataset_infos.json file with\r\n```\r\ndatasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications\r\n```", "Is this a command to update my local files or fix the file Github repo in general? (I am not so familiar with the datasets-cli command here)\r\n\r\nI also took a brief look at the **Sharing your dataset** section, looks like I could fix that locally and push it to the repo? I guess we are \"canonical\" category?", "This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :)\r\nAnd yes you are right, it is a \"canonical\" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)", "Hi, thanks for the answer. \r\n\r\nI gave a try to the problem today. But I encountered an upload error: \r\n\r\n```\r\ngit push -u origin fix_link_iwslt\r\nEnter passphrase for key '/home2/xuhuizh/.ssh/id_rsa': \r\nERROR: Permission to huggingface/datasets.git denied to XuhuiZhou.\r\nfatal: Could not read from remote repository.\r\n\r\nPlease make sure you have the correct access rights\r\nand the repository exists.\r\n```\r\n\r\nAny insight here? \r\n\r\nBy the way, when I run the datasets-cli command, it shows the following error, but does not seem to be the error coming from `iwslt.py`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home2/xuhuizh/anaconda3/envs/UMT/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/datasets_cli.py\", line 35, in main\r\n service.run()\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/test.py\", line 141, in run\r\n try_from_hf_gcs=False,\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 579, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 639, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/utils/info_utils.py\", line 32, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz'}\r\n```", "Hi ! To create a PR on this repo your must fork it and create a branch on your fork. See how to fork the repo [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment).\r\nAnd to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need to use the `--ignore_verifications` flag.", "Hi @XuhuiZhou,\r\n\r\nAs @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. 
This is so because at HuggingFace Datasets we follow a development model called \"Fork and Pull Model\". You can find more information here:\r\n- [Understanding the GitHub flow](https://guides.github.com/introduction/flow/)\r\n- [Forking Projects](https://guides.github.com/activities/forking/)\r\n\r\nAlternatively, if you find all these steps too complicated, you can use the GitHub official command line tool: [GitHub CLI](https://cli.github.com/). Once installed, in order to create a Pull Request, you only need to use this command:\r\n```shell\r\ngh pr create --web\r\n```\r\nThis utility will automatically create the fork, push your changes and open a Pull Request, under the hood." ]
1,616,049,366,000
1,616,413,951,000
null
NONE
null
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2076/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2075/comments
https://api.github.com/repos/huggingface/datasets/issues/2075/events
https://github.com/huggingface/datasets/issues/2075
834,301,246
MDU6SXNzdWU4MzQzMDEyNDY=
2,075
ConnectionError: Couldn't reach common_voice.py
{ "login": "LifaSun", "id": 6188893, "node_id": "MDQ6VXNlcjYxODg4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LifaSun", "html_url": "https://github.com/LifaSun", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "repos_url": "https://api.github.com/users/LifaSun/repos", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?", "@albertvillanova Thanks! It works well now. " ]
1,616,030,346,000
1,616,236,181,000
1,616,236,181,000
NONE
null
When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py Version: 1.4.1 Thanks! @lhoestq @LysandreJik @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2075/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2071/comments
https://api.github.com/repos/huggingface/datasets/issues/2071/events
https://github.com/huggingface/datasets/issues/2071
833,950,824
MDU6SXNzdWU4MzM5NTA4MjQ=
2,071
Multiprocessing is slower than single process
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "dupe of #1992" ]
1,615,997,338,000
1,616,058,623,000
1,616,058,623,000
CONTRIBUTOR
null
```python # benchmark_filter.py import logging import sys import time from datasets import load_dataset, set_caching_enabled if __name__ == "__main__": set_caching_enabled(False) logging.basicConfig(level=logging.DEBUG) bc = load_dataset("bookcorpus") now = time.time() try: bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1])) except Exception as e: print(f"cancelled: {e}") elapsed = time.time() - now print(elapsed) ``` Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2071/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2070/comments
https://api.github.com/repos/huggingface/datasets/issues/2070/events
https://github.com/huggingface/datasets/issues/2070
833,799,035
MDU6SXNzdWU4MzM3OTkwMzU=
2,070
ArrowInvalid issue for squad v2 dataset
{ "login": "MichaelYxWang", "id": 29818977, "node_id": "MDQ6VXNlcjI5ODE4OTc3", "avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelYxWang", "html_url": "https://github.com/MichaelYxWang", "followers_url": "https://api.github.com/users/MichaelYxWang/followers", "following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions", "organizations_url": "https://api.github.com/users/MichaelYxWang/orgs", "repos_url": "https://api.github.com/users/MichaelYxWang/repos", "events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelYxWang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch.\r\n\r\nHowever it seems like `tokenized_examples` doesn't have the same number of elements in each field. One field seems to have `1180` elements while `candidate_attention_mask` only has `1178`." ]
1,615,989,109,000
1,628,099,836,000
1,628,099,836,000
NONE
null
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called candidate_input_ids, candidate_attention_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I got the following error: `ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178` My code is as follows: ``` def generate_candidate_questions(examples): val_questions = examples["question"] candidate_questions = random.sample(datasets["train"]["question"], len(val_questions)) candidate_questions = [x[:max_length] for x in candidate_questions] return candidate_questions def prepare_validation_features(examples, use_mixing=False): pad_on_right = tokenizer.padding_side == "right" tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) if use_mixing: candidate_questions = generate_candidate_questions(examples) tokenized_candidates = tokenizer( candidate_questions if pad_on_right else examples["context"], examples["context"] if pad_on_right else candidate_questions, truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") tokenized_examples["example_id"] = [] if use_mixing: tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"] tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"] tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"] for i in range(len(tokenized_examples["input_ids"])): sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples validation_features = datasets["validation"].map( lambda xs: prepare_validation_features(xs, True), batched=True, remove_columns=datasets["validation"].column_names ) ``` I guess this might happen because of batched=True. I see similar issues in this repo related to the arrow table length mismatch error, but in their cases the numbers vary a lot. In my case, this error always happens when the expected length and the actual length are very close. Thanks for the help!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2070/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2068/comments
https://api.github.com/repos/huggingface/datasets/issues/2068/events
https://github.com/huggingface/datasets/issues/2068
833,602,832
MDU6SXNzdWU4MzM2MDI4MzI=
2,068
PyTorch not available error on SageMaker GPU docker though it is installed
{ "login": "sivakhno", "id": 1651457, "node_id": "MDQ6VXNlcjE2NTE0NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sivakhno", "html_url": "https://github.com/sivakhno", "followers_url": "https://api.github.com/users/sivakhno/followers", "following_url": "https://api.github.com/users/sivakhno/following{/other_user}", "gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}", "starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions", "organizations_url": "https://api.github.com/users/sivakhno/orgs", "repos_url": "https://api.github.com/users/sivakhno/repos", "events_url": "https://api.github.com/users/sivakhno/events{/privacy}", "received_events_url": "https://api.github.com/users/sivakhno/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @philschmid ", "Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`", "Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ", "Could paste the code you use the start your training job and the fine-tuning script you run? ", "@sivakhno this should be now fixed in `datasets>=1.5.0`. ", "@philschmid Recently released tensorflow-macos seems to be missing. ", "I've created a PR to add this. " ]
1,615,975,467,000
1,623,646,050,000
1,623,646,050,000
NONE
null
I get an error when running data loading using the SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*args, **kwargs) File "/opt/ml/code/data_module.py", line 103, in setup self.dataset[split].set_format(type="torch", columns=self.columns) File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format _ = get_formatter(type, **format_kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type] ValueError: PyTorch needs to be installed to be able to return PyTorch tensors. ``` when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically the lines ``` self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns] self.dataset[split].set_format(type="torch", columns=self.columns) ``` The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3. By running the container interactively I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`. Also, as the first lines in the data loading module I have ``` import os os.environ["USE_TF"] = "0" os.environ["USE_TORCH"] = "1" ``` But unfortunately the error still persists. Any suggestions would be appreciated as I am stuck. Many thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2068/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2067/comments
https://api.github.com/repos/huggingface/datasets/issues/2067/events
https://github.com/huggingface/datasets/issues/2067
833,559,940
MDU6SXNzdWU4MzM1NTk5NDA=
2,067
Multiprocessing windows error
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..", "```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\n\r\nupdated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n\r\n```", "\r\n\r\n\r\n\r\n\r\nI was able to copy some of the shell \r\nThis is repeating every half second\r\nWin 10, Anaconda with python 3.8, datasets installed from main branche\r\n```\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n exitcode = _main(fd, parent_sentinel)\r\n raise RuntimeError('''\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n\r\n The \"freeze_support()\" line can be omitted if the program\r\n is not going to be frozen to produce an executable. 
return _run_module_code(code, init_globals, run_name,\r\n prepare(preparation_data)\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 327, in _Popen\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n return Popen(process_obj)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\popen_spawn_win32.py\", line 45, in __init__\r\n return 
self._repopulate_pool_static(self._ctx, self.Process,\r\n prep_data = spawn.get_preparation_data(process_obj._name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 154, in get_preparation_data\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n raise RuntimeError('''\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n```", "Thanks this is really helpful !\r\nI'll try to reproduce on my side and come back to you", "if __name__ == '__main__':\r\n\r\n\r\nThis line before calling the map function stops the error but the script still repeats endless", "Indeed you needed `if __name__ == '__main__'` since accoding to [this stackoverflow post](https://stackoverflow.com/a/18205006):\r\n\r\n> On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.\r\n\r\nRegarding the hanging issue, can you try to update `dill` and `multiprocess` ?", "It's already on the newest version", "```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 791, in move\r\n os.rename(src, real_dst)\r\nFileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\tmpx9fl_jg8' -> 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File 
\"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\cvtrain.py\", line 243, in <module>\r\n common_voice_train = common_voice_train.map(remove_special_characters, remove_columns=[\"sentence\"])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1339, in map\r\n return self._map_single(\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 203, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1646, in _map_single\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 805, in move\r\n copy_function(src, real_dst)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 435, in copy2\r\n copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n 0%| | 0/27771 [00:00<?, ?ex/s] \r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:\r\nOSError: [Errno 22] Invalid argument: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n```\r\n\r\nI was adding freeze support before calling the mapping function like this\r\nif __name__ == '__main__':\r\n freeze_support()\r\n dataset.map(....)", "Usually OSError of an arrow file on windows means that the file is currently opened as a dataset object, so you can't overwrite it until the dataset object falls out of scope.\r\nCan you make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file ?", "Now I understand\r\nThe error occures because the script got restarted in another thread, so the object is already loaded.\r\nStill don't have an idea why a new thread starts the whole script again" ]
1,615,972,348,000
1,628,099,948,000
1,628,099,948,000
NONE
null
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2: when using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log gets stuck in a loop.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2067/timeline
null
null
null
false
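A minimal sketch of the fix discussed in the comments above: on Windows, any `datasets` `map` call with `num_proc > 1` has to live inside an `if __name__ == '__main__'` guard, otherwise the spawned workers re-import and re-run the whole script. The use of the `glue/mrpc` dataset and the `sentence1` column is only an assumption taken from the reproduction snippet.

```python
from multiprocessing import freeze_support

from datasets import load_dataset


def add_prefix(example):
    # Same transformation as in the reproduction snippet, but as a module-level function.
    return {"sentence1": "My sentence: " + example["sentence1"]}


def main():
    # All multiprocessing work must happen inside the guarded entry point on Windows,
    # because child processes are started with "spawn" and re-import this module.
    dataset = load_dataset("glue", "mrpc", split="train")  # assumed example dataset
    updated_dataset = dataset.map(add_prefix, num_proc=4)
    print(updated_dataset[0]["sentence1"])


if __name__ == "__main__":
    freeze_support()  # only strictly needed for frozen executables, harmless otherwise
    main()
```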
https://api.github.com/repos/huggingface/datasets/issues/2065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
https://api.github.com/repos/huggingface/datasets/issues/2065/events
https://github.com/huggingface/datasets/issues/2065
833,291,432
MDU6SXNzdWU4MzMyOTE0MzI=
2,065
Only user permission of saved cache files, not group
{ "login": "lorr1", "id": 57237365, "node_id": "MDQ6VXNlcjU3MjM3MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorr1", "html_url": "https://github.com/lorr1", "followers_url": "https://api.github.com/users/lorr1/followers", "following_url": "https://api.github.com/users/lorr1/following{/other_user}", "gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}", "starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorr1/subscriptions", "organizations_url": "https://api.github.com/users/lorr1/orgs", "repos_url": "https://api.github.com/users/lorr1/repos", "events_url": "https://api.github.com/users/lorr1/events{/privacy}", "received_events_url": "https://api.github.com/users/lorr1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646))\r\n\r\nThat means it keeps the permissions specified by the `tempfile.NamedTemporaryFile` object, i.e. `-rw-------` instead of `-rw-r--r--`. Improving this could be a nice first contribution to the library :)", "Hi @lhoestq,\r\nI looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1871) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1590), post creation to 0644 inorder for group and others to read it?", "Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646) actually.\r\nApparently they set the default 0600 for temporary files for security reasons, so let's update the umask only after the file has been moved", "Would it be possible to actually set the umask based on a user provided argument? For example, a popular usecase my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix. ", "Note that you can get the cache files of a dataset with the `cache_files` attributes.\r\nThen you can `chmod` those files and all the other cache files in the same directory.\r\n\r\nMoreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_dataset` for example, and then all the new transformed cached files will have the same permissions.\r\nWhat do you think ?", "This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?", "You can just check the permission of `dataset.cache_files[0]` imo", "> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions", "Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?", "Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?", "Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? 
Or use the value of `os.umask` ?", "Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)", "I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!", "Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?", "Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\n", "You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.", "FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.", "Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?", "That sounds very right to me, @bhavitvyamalik " ]
1,615,940,422,000
1,620,629,129,000
1,620,629,129,000
NONE
null
Hello, It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets only the user's permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you know of any way around this, or a way to correctly set the permissions?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
null
null
null
false
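A minimal sketch of the workaround discussed above (before the umask-based fix landed): load or process the dataset, then chmod its cache files so the group can read them. The `wikitext` dataset and the `0o644` mode are assumptions; any dataset and any group-readable mode work the same way.

```python
import os

from datasets import load_dataset


def open_cache_to_group(dataset, mode=0o644):
    # Cache arrow files are written via NamedTemporaryFile with mode 0o600 (user-only),
    # so widen them explicitly for teammates sharing the same cache directory.
    for cache_file in dataset.cache_files:
        os.chmod(cache_file["filename"], mode)


dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")  # assumed example dataset
open_cache_to_group(dataset)

dataset = dataset.map(lambda example: {"text": example["text"].lower()})
open_cache_to_group(dataset)  # the map call writes a new cache-<fingerprint>.arrow file
```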
https://api.github.com/repos/huggingface/datasets/issues/2061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
https://api.github.com/repos/huggingface/datasets/issues/2061/events
https://github.com/huggingface/datasets/issues/2061
832,596,228
MDU6SXNzdWU4MzI1OTYyMjg=
2,061
Cannot load udpos subsets from xtreme dataset using load_dataset()
{ "login": "adzcodez", "id": 55791365, "node_id": "MDQ6VXNlcjU1NzkxMzY1", "avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adzcodez", "html_url": "https://github.com/adzcodez", "followers_url": "https://api.github.com/users/adzcodez/followers", "following_url": "https://api.github.com/users/adzcodez/following{/other_user}", "gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}", "starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions", "organizations_url": "https://api.github.com/users/adzcodez/orgs", "repos_url": "https://api.github.com/users/adzcodez/repos", "events_url": "https://api.github.com/users/adzcodez/events{/privacy}", "received_events_url": "https://api.github.com/users/adzcodez/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.", "Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n", "@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme", "I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ", "Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204) if you decide to work on this). ", "Closed by #2466." ]
1,615,887,133,000
1,624,017,251,000
1,624,017,250,000
NONE
null
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error. Reprex is: `from datasets import load_dataset ` `dataset = load_dataset('xtreme', 'udpos.English')` The error is: `KeyError: '_'` The full traceback is: KeyError Traceback (most recent call last) <ipython-input-5-7181359ea09d> in <module> 1 from datasets import load_dataset ----> 2 dataset = load_dataset('xtreme', 'udpos.English') ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 738 739 # Download and prepare data --> 740 builder_instance.download_and_prepare( 741 download_config=download_config, 742 download_mode=download_mode, ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 576 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 577 if not downloaded_from_gcs: --> 578 self._download_and_prepare( 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 654 try: 655 # Prepare split will record examples associated to the split --> 656 self._prepare_split(split_generator, **prepare_split_kwargs) 657 except OSError as e: 658 raise OSError( ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator) 977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 978 ): --> 979 example = self.info.features.encode_example(record) 980 writer.write(example) 981 finally: ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example) 946 def encode_example(self, example): 947 example = cast_to_python_objects(example) --> 948 return encode_nested_example(self, example) 949 950 def encode_batch(self, batch): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 840 # Nested structures: we allow dict, list/tuples, sequences 841 if isinstance(schema, dict): --> 842 return { 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0) 841 if isinstance(schema, dict): 842 return { --> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } 845 elif isinstance(schema, (list, tuple)): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 870 return schema.encode_example(obj) 871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 872 return obj 
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data) 647 # If a string is given, convert to associated integer 648 if isinstance(example_data, str): --> 649 example_data = self.str2int(example_data) 650 651 # Allowing -1 to mean no label. ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values) 605 if value not in self._str2int: 606 value = value.strip() --> 607 output.append(self._str2int[str(value)]) 608 else: 609 # No names provided, try to integerize KeyError: '_'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
https://api.github.com/repos/huggingface/datasets/issues/2059/events
https://github.com/huggingface/datasets/issues/2059
832,579,156
MDU6SXNzdWU4MzI1NzkxNTY=
2,059
Error while following docs to load the `ted_talks_iwslt` dataset
{ "login": "ekdnam", "id": 40426312, "node_id": "MDQ6VXNlcjQwNDI2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekdnam", "html_url": "https://github.com/ekdnam", "followers_url": "https://api.github.com/users/ekdnam/followers", "following_url": "https://api.github.com/users/ekdnam/following{/other_user}", "gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions", "organizations_url": "https://api.github.com/users/ekdnam/orgs", "repos_url": "https://api.github.com/users/ekdnam/repos", "events_url": "https://api.github.com/users/ekdnam/events{/privacy}", "received_events_url": "https://api.github.com/users/ekdnam/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "@skyprince999 as you authored the PR for this dataset, any comments?", "This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)" ]
1,615,885,939,000
1,615,917,631,000
1,615,917,607,000
NONE
null
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error attached below. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-7dcc67154ef9> in <module>() ----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") 4 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 730 hash=hash, 731 features=features, --> 732 **config_kwargs, 733 ) 734 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs) 927 928 def __init__(self, *args, writer_batch_size=None, **kwargs): --> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) 930 # Batch size used by the ArrowWriter 931 # It defines the number of samples that are kept in memory before writing them /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs) 241 name, 242 custom_features=features, --> 243 **config_kwargs, 244 ) 245 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 338 config_kwargs["version"] = self.VERSION --> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 340 341 # otherwise use the config_kwargs to overwrite the attributes /root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs) 219 description=description, 220 version=datasets.Version("1.1.0", ""), --> 221 **kwargs, 222 ) 223 TypeError: __init__() got multiple values for keyword argument 'version' ``` How to resolve this? PS: Thanks a lot @huggingface team for creating this great library!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
https://api.github.com/repos/huggingface/datasets/issues/2058/events
https://github.com/huggingface/datasets/issues/2058
832,159,844
MDU6SXNzdWU4MzIxNTk4NDQ=
2,058
Is it possible to convert a `tfds` to HuggingFace `dataset`?
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "repos_url": "https://api.github.com/users/abarbosa94/repos", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,615,839,527,000
1,615,839,527,000
null
CONTRIBUTOR
null
I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :) I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
null
null
null
false
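The question above is left open in the thread; one possible route (an assumption, not an official conversion utility) is to materialize a `tfds` split as plain Python objects and rebuild it with `datasets.Dataset.from_dict`. The `imdb_reviews` dataset is used here only because it is small; the same pattern should apply to `c4`, at the cost of memory and time.

```python
import tensorflow_datasets as tfds
from datasets import Dataset

# Load a small slice of a tfds dataset and iterate it as numpy/python objects.
tf_split = tfds.load("imdb_reviews", split="train[:1%]")  # assumed example dataset

texts, labels = [], []
for example in tfds.as_numpy(tf_split):
    texts.append(example["text"].decode("utf-8"))  # tfds yields bytes for text features
    labels.append(int(example["label"]))

# Rebuild the split as an Arrow-backed Hugging Face Dataset.
hf_dataset = Dataset.from_dict({"text": texts, "label": labels})
print(hf_dataset)
```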
https://api.github.com/repos/huggingface/datasets/issues/2056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
https://api.github.com/repos/huggingface/datasets/issues/2056/events
https://github.com/huggingface/datasets/issues/2056
831,718,397
MDU6SXNzdWU4MzE3MTgzOTc=
2,056
issue with opus100/en-fr dataset
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ", "Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```", "as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one." ]
1,615,807,962,000
1,615,909,740,000
1,615,909,739,000
NONE
null
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace 63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s] Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 412, in main in zip(data_args.dataset_name, data_args.dataset_config_name)] File "run_mlm.py", line 411, in <listcomp> logger) for dataset_name, dataset_config_name\ File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset load_from_cache_file=not data_args.overwrite_cache, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp> for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map update_data=update_data, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function return tokenizer(examples[text_column_name], return_special_tokens_mask=True) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__ **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus is_pretokenized=is_split_into_words, pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617 `
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
https://api.github.com/repos/huggingface/datasets/issues/2055/events
https://github.com/huggingface/datasets/issues/2055
831,684,312
MDU6SXNzdWU4MzE2ODQzMTI=
2,055
is there a way to override a dataset object saved with save_to_disk?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi\r\nYou can rename the arrow file and update the name in `state.json`", "I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ", "I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.", "Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?" ]
1,615,805,453,000
1,616,385,977,000
1,616,385,977,000
NONE
null
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
https://api.github.com/repos/huggingface/datasets/issues/2054/events
https://github.com/huggingface/datasets/issues/2054
831,597,665
MDU6SXNzdWU4MzE1OTc2NjU=
2,054
Could not find file for ZEST dataset
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.", "This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)", "Thanks @lhoestq and @matt-peters ", "I am closing this issue since its fixed!" ]
1,615,799,518,000
1,620,034,224,000
1,620,034,224,000
CONTRIBUTOR
null
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca... --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-6-18dbbc1a4b8a> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("zest") 9 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 612 ) 613 elif response is not None and response.status_code == 404: --> 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 616 raise ConnectionError("Couldn't reach {}".format(url)) FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
https://api.github.com/repos/huggingface/datasets/issues/2052/events
https://github.com/huggingface/datasets/issues/2052
831,135,704
MDU6SXNzdWU4MzExMzU3MDQ=
2,052
Timit_asr dataset repeats examples
{ "login": "fermaat", "id": 7583522, "node_id": "MDQ6VXNlcjc1ODM1MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fermaat", "html_url": "https://github.com/fermaat", "followers_url": "https://api.github.com/users/fermaat/followers", "following_url": "https://api.github.com/users/fermaat/following{/other_user}", "gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}", "starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fermaat/subscriptions", "organizations_url": "https://api.github.com/users/fermaat/orgs", "repos_url": "https://api.github.com/users/fermaat/repos", "events_url": "https://api.github.com/users/fermaat/events{/privacy}", "received_events_url": "https://api.github.com/users/fermaat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```", "Ty!" ]
1,615,722,223,000
1,615,804,636,000
1,615,804,636,000
NONE
null
Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") timit['train']['text'] #['Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', ``` The same behavior happens for other columns Expected behavior: Different info on the actual timit_asr dataset Actual behavior: When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different Debug info Streamlit version: (get it with $ streamlit version) Python version: Python 3.6.12 Using Conda? PipEnv? PyEnv? Pex? Using pip OS version: Centos-release-7-9.2009.1.el7.centos.x86_64 Additional information You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
https://api.github.com/repos/huggingface/datasets/issues/2050/events
https://github.com/huggingface/datasets/issues/2050
831,006,551
MDU6SXNzdWU4MzEwMDY1NTE=
2,050
Build custom dataset to fine-tune Wav2Vec2
{ "login": "Omarnabk", "id": 72882909, "node_id": "MDQ6VXNlcjcyODgyOTA5", "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Omarnabk", "html_url": "https://github.com/Omarnabk", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "repos_url": "https://api.github.com/users/Omarnabk/repos", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "@lhoestq - We could simply use the \"general\" json dataset for this no? ", "Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.", "Many thanks! that was what I was looking for. " ]
1,615,672,870,000
1,615,800,448,000
1,615,800,448,000
NONE
null
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
null
null
null
false
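A minimal sketch expanding on the reply above, assuming a JSON-lines manifest whose records contain `path` and `text` fields (those field names, and the audio root directory, are assumptions about the user's manifest, not a fixed schema).

```python
from datasets import load_dataset

data_files = {"train": "path/to/train_manifest.json", "test": "path/to/test_manifest.json"}
train_dataset = load_dataset("json", data_files=data_files, split="train")
test_dataset = load_dataset("json", data_files=data_files, split="test")

# If the manifest stores file names relative to an audio root, .map() can turn them
# into absolute paths before running the feature extractor / tokenizer.
AUDIO_ROOT = "/data/audio"  # assumed location of the wav files
train_dataset = train_dataset.map(lambda ex: {"path": f"{AUDIO_ROOT}/{ex['path']}"})
```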
https://api.github.com/repos/huggingface/datasets/issues/2048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
https://api.github.com/repos/huggingface/datasets/issues/2048/events
https://github.com/huggingface/datasets/issues/2048
830,953,431
MDU6SXNzdWU4MzA5NTM0MzE=
2,048
github is not always available - probably need a back up
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,615,658,612,000
1,615,658,612,000
null
CONTRIBUTOR
null
Yesterday morning GitHub wasn't working: ``` :/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py --2021-03-12 18:35:59-- https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 500 Internal Server Error 2021-03-12 18:36:11 ERROR 500: Internal Server Error. ``` Suggestion: have a failover system that replicates the data on another system and is reached if gh isn't reachable? Perhaps gh can be the master and the replica a slave - so there is only one true source.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
https://api.github.com/repos/huggingface/datasets/issues/2046/events
https://github.com/huggingface/datasets/issues/2046
830,423,033
MDU6SXNzdWU4MzA0MjMwMzM=
2,046
add_faiss_index gets very slow when doing it iteratively
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?", "Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it in every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural right? \r\n \r\n \r\n at the moment it uses around 40 cores of a 96 core machine (I am fine-tuning the entire process). ", "Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls", "Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hrs and 30 mins. If there is any way to faster the process, an end-to-end rag will be perfect. So I will also try out with different thread numbers too. \r\n\r\n![image](https://user-images.githubusercontent.com/16892570/111453464-798c5f80-8778-11eb-86d0-19d212f58e38.png)\r\n", "@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips", "@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time. \r\n\r\n Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and got actually the same time for RAG training and independat running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in Faiss repostiary](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?", "It's a matter of tradeoffs.\r\nHSNW is fast at query time but takes some time to build.\r\nA flat index is flat to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HSNW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. 
From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).", "@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking what would be a good nlist of parameters for 30 million embeddings?", "When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)", "Thanks a lot. I was lost with calling the index from class and using faiss_index_factory. ", "@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. " ]
1,615,580,838,000
1,616,624,951,000
1,616,624,951,000
NONE
null
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any way to make this process faster? @lhoestq ``` def training_step(self, batch, batch_idx) -> Dict: if (not batch_idx==0) and (batch_idx%5==0): print("******************************************************") ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder model_copy =type(ctx_encoder)(self.config_dpr) # get a new instance #this will be load in the CPU model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff list_of_gpus = ['cuda:2','cuda:3'] c_dir='/custom/cache/dir' kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"],cache_dir=c_dir) print(kb_dataset) n=len(list_of_gpus) #nunber of dedicated GPUs kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)] #kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir') print(self.trainer.global_rank) dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank]) output = [None for _ in list_of_gpus] #self.trainer.accelerator_connector.accelerator.barrier("embedding_process") dist.all_gather_object(output, dataset_shards) #This creation and re-initlaization of the new index if (self.trainer.global_rank==0): #saving will be done in the main process combined_dataset = concatenate_datasets(output) passages_path =self.config.passages_path logger.info("saving the dataset with ") #combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage') combined_dataset.save_to_disk(passages_path) logger.info("Add faiss index to the dataset that consist of embeddings") embedding_dataset=combined_dataset index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT) embedding_dataset.add_faiss_index("embeddings", custom_index=index) embedding_dataset.get_index("embeddings").save(self.config.index_path)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
https://api.github.com/repos/huggingface/datasets/issues/2040/events
https://github.com/huggingface/datasets/issues/2040
830,169,387
MDU6SXNzdWU4MzAxNjkzODc=
2,040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.", "Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'", "In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```", "Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! " ]
1,615,559,220,000
1,628,100,043,000
1,628,100,043,000
NONE
null
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yielding the following error: ```python ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho... `load_from_disk(PATH_DATA_CLS_A)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 785 }) ``` `load_from_disk(PATH_DATA_CLS_B)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 3341 }) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
https://api.github.com/repos/huggingface/datasets/issues/2038/events
https://github.com/huggingface/datasets/issues/2038
830,036,875
MDU6SXNzdWU4MzAwMzY4NzU=
2,038
outdated dataset_infos.json might fail verifications
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```", "Fixed by #2041, thanks again @songfeng !" ]
1,615,549,314,000
1,615,912,060,000
1,615,912,060,000
CONTRIBUTOR
null
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc.. Could you please update this file or point me how to update this file? Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
https://api.github.com/repos/huggingface/datasets/issues/2036/events
https://github.com/huggingface/datasets/issues/2036
829,909,258
MDU6SXNzdWU4Mjk5MDkyNTg=
2,036
Cannot load wikitext
{ "login": "Gpwner", "id": 19349207, "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gpwner", "html_url": "https://github.com/Gpwner", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "repos_url": "https://api.github.com/users/Gpwner/repos", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Solved!" ]
1,615,540,179,000
1,615,797,902,000
1,615,797,884,000
NONE
null
when I execute these codes ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I got an error,any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2035/comments
https://api.github.com/repos/huggingface/datasets/issues/2035/events
https://github.com/huggingface/datasets/issues/2035
829,475,544
MDU6SXNzdWU4Mjk0NzU1NDQ=
2,035
wiki40b/wikipedia for almost all languages cannot be downloaded
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able training the models at scale and I am grateful for your help.\r\n\r\n", "Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\n I also read in error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nWorked perfectly fine after this (Ignore these warnings)\r\n\r\n![image](https://user-images.githubusercontent.com/19718818/110908410-c7e2ce00-8334-11eb-8d10-7354359e9ec3.png)\r\n\r\n", "For wikipedia dataset, looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.\r\n\r\n", "Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to preform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.", "Hi\nI really appreciate if huggingface can kindly provide preprocessed\ndatasets, processing these datasets require sufficiently large resources\nand I do not have unfortunately access to, and perhaps many others too.\nthanks\n\nOn Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hello @dorost1234 <https://github.com/dorost1234>,\n>\n> Indeed, Wikipedia datasets need a lot of preprocessing and this is done\n> using Apache Beam. 
That is the reason why it is required that you install\n> Apache Beam in order to preform this preprocessing.\n>\n> For some specific default parameters (English Wikipedia), Hugging Face has\n> already preprocessed the dataset for you (and it is stored in the cloud).\n> That is the reason why you do not get the error for English: the\n> preprocessing is already done by HF and you just get the preprocessed\n> dataset; Apache Beam is not required in that case.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-797310899>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXACFQZAGMK4VGXRETTDHDI3ANCNFSM4ZA5R2UA>\n> .\n>\n", "Hi everyone\r\nthanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours, \r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Any specific requirements the machine should have? like very large memory or so? @lhoestq \r\n\r\nthanks \r\n\r\n\r\n", "HI @dorost1234, \r\nThe dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here that's why there's no download progress bar.", "Hi\r\nthanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nthe output I see if different also from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\ndo you have any idea why it might get freezed? anything I am missing @lhoestq @bhavitvyamalik. Do I need maybe to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n\r\nOn Tue, Mar 16, 2021 at 9:03 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> HI @dorost1234 <https://github.com/dorost1234>,\r\n> The dataset size is 631.84 MiB so depending on your internet speed it'll\r\n> take some time. You can monitor your internet speed meanwhile to see if\r\n> it's downloading the dataset or not (use nload if you're using linux/mac\r\n> to monitor the same). In my case it took around 3-4 mins. Since they\r\n> haven't used download_and_extract here that's why there's no download\r\n> progress bar.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800044303>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMQIHNNLM2LGG6QKZ73TD4GDJANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n", "I tried this on another machine (followed the same procedure I've mentioned above). 
This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB/s] \r\nDownloading: 1.40kB [00:00, 327kB/s] \r\nDownloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of dataset:\r\n```\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\nDataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```", "Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? maybe\r\ngoogle cloud which seems it is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n\r\nOn Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> I tried this on another machine (followed the same procedure I've\r\n> mentioned above). This is what it shows (during the freeze period) for me:\r\n>\r\n> >>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\n> Downloading: 5.26kB [00:00, 1.23MB/s]\r\n> Downloading: 1.40kB [00:00, 327kB/s]\r\n> Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n> WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. 
See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\n> Connecting anonymously.\r\n> WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n>\r\n> After around 10 minutes, here's the loading of dataset:\r\n>\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\n> Dataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800081772>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMX6A2ZTRZUIIZVFRCDTD4NC3ANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n" ]
1,615,492,494,000
1,615,906,417,000
null
NONE
null
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I really need majority of languages in this dataset to be able to train my models for a deadline and your great scalable super well-written library is my only hope to train the models at scale while being low on resources. thank you very much. ``` (fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f... Traceback (most recent call last): File "test_data.py", line 3, in <module> dataset = load_dataset("wiki40b", "cs") File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare import apache_beam as beam File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module> from apache_beam import io File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module> from apache_beam.io.avroio import * File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module> import avro File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module> File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2035/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
https://api.github.com/repos/huggingface/datasets/issues/2032/events
https://github.com/huggingface/datasets/issues/2032
829,250,912
MDU6SXNzdWU4MjkyNTA5MTI=
2,032
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
null
[]
1,615,475,930,000
1,615,483,257,000
null
MEMBER
null
Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation therefore it's significantly quicker. I think there are two cases: - if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)` - if the dataset an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)` The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table. The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask. Feel free to discuss this idea in this thread :) One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle. cc @theo-m @gchhablani related issues: #1796 #1949
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
https://api.github.com/repos/huggingface/datasets/issues/2031/events
https://github.com/huggingface/datasets/issues/2031
829,122,778
MDU6SXNzdWU4MjkxMjI3Nzg=
2,031
wikipedia.py generator that extracts XML doesn't release memory
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?", "OK! I'll send it later." ]
1,615,467,084,000
1,616,402,032,000
1,616,402,032,000
CONTRIBUTOR
null
I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` intend to clear memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced them with `elem.clear()`, then it seems to work correctly. here is the notebook to reproduce it. https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
https://api.github.com/repos/huggingface/datasets/issues/2029/events
https://github.com/huggingface/datasets/issues/2029
829,097,290
MDU6SXNzdWU4MjkwOTcyOTA=
2,029
Loading a faiss index KeyError
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.", "Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```", "Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)", "> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR" ]
1,615,464,973,000
1,615,508,469,000
1,615,508,469,000
NONE
null
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
https://api.github.com/repos/huggingface/datasets/issues/2026/events
https://github.com/huggingface/datasets/issues/2026
828,194,467
MDU6SXNzdWU4MjgxOTQ0Njc=
2,026
KeyError on using map after renaming a column
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.", "Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?", "I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)" ]
1,615,402,457,000
1,615,473,574,000
1,615,473,520,000
CONTRIBUTOR
null
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside returns this: ```python {'label': tensor([6, 9])} ``` Apparently, both `img` and `image` do not exist after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
https://api.github.com/repos/huggingface/datasets/issues/2022/events
https://github.com/huggingface/datasets/issues/2022
827,435,033
MDU6SXNzdWU4Mjc0MzUwMzM=
2,022
ValueError when rename_column on splitted dataset
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```", "This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues" ]
1,615,369,238,000
1,615,903,568,000
1,615,903,505,000
NONE
null
Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_dataset( path='csv', # use 'text' loading script to load from local txt-files delimiter='\t', # xxx data_files=text_files, # list of paths to local text files split=split, # xxx ) dataset ``` Part of output: ```python DatasetDict({ train: Dataset({ features: ['sentence', 'sentiment'], num_rows: 900 }) test: Dataset({ features: ['sentence', 'sentiment'], num_rows: 100 }) }) ``` Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modelin pipeline. If I run the following code I experience a `ValueError` however: ```python dataset['train'].rename_column('sentence', 'text') ``` ```python /usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name) 353 for split_name in split_names_from_instruction: 354 if not re.match(_split_re, split_name): --> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.") 356 357 def __str__(self): ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('. ``` In particular, these behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? Would assume something in the way I defined the split. Thanks in advance! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
https://api.github.com/repos/huggingface/datasets/issues/2021/events
https://github.com/huggingface/datasets/issues/2021
826,988,016
MDU6SXNzdWU4MjY5ODgwMTY=
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching." ]
1,615,344,514,000
1,615,630,061,000
1,615,630,061,000
NONE
null
dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggiface/datastes ? I have a feeling there is a serious issue with cashing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
https://api.github.com/repos/huggingface/datasets/issues/2012/events
https://github.com/huggingface/datasets/issues/2012
825,634,064
MDU6SXNzdWU4MjU2MzQwNjQ=
2,012
No upstream branch
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L10-L14", "~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 " ]
1,615,283,335,000
1,615,289,611,000
1,615,289,611,000
CONTRIBUTOR
null
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2010/comments
https://api.github.com/repos/huggingface/datasets/issues/2010/events
https://github.com/huggingface/datasets/issues/2010
825,567,635
MDU6SXNzdWU4MjU1Njc2MzU=
2,010
Local testing fails
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?", "```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n \r\n def create_ipython_func(co_filename, returned_obj):\r\n def func():\r\n return returned_obj\r\n \r\n code = func.__code__\r\n> code = CodeType(*[getattr(code, k) if k != \"co_filename\" else co_filename for k in code_args])\r\nE TypeError: an integer is required (got type bytes)\r\n\r\ntests/test_caching.py:152: TypeError\r\n```\r\n\r\nPython 3.8.8 \r\ndill==0.3.1.1\r\n", "I managed to reproduce. This comes from the CodeType init signature that is different in python 3.8.8\r\nI opened a PR to fix this test\r\nThanks !" ]
1,615,280,498,000
1,615,298,763,000
1,615,298,763,000
CONTRIBUTOR
null
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes) 1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04) ``` Seems like a discrepancy with CI, perhaps a lib version that's not controlled? Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2010/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2009/comments
https://api.github.com/repos/huggingface/datasets/issues/2009/events
https://github.com/huggingface/datasets/issues/2009
825,541,366
MDU6SXNzdWU4MjU1NDEzNjY=
2,009
Ambiguous documentation
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir, \"dev.jsonl\"),\r\n \"split\": \"dev\",\r\n },\r\n),\r\n```\r\n\r\nNotice the `gen_kwargs` argument passed to the constructor of `SplitGenerator`: this dict will be unpacked as keyword arguments to pass to the `_generat_examples` method (in this case the `filepath` and `split` arguments).\r\n\r\nLet me know if that helps!", "Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks!" ]
1,615,279,331,000
1,615,561,294,000
1,615,561,294,000
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line confusing: the method parameters don't include `gen_kwargs`, so I'm unclear where they're coming from. Happy to push a PR with a clearer statement once I understand the meaning.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2009/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
https://api.github.com/repos/huggingface/datasets/issues/2007/events
https://github.com/huggingface/datasets/issues/2007
824,518,158
MDU6SXNzdWU4MjQ1MTgxNTg=
2,007
How to not load huggingface datasets into memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ", "The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions" ]
1,615,206,926,000
1,628,100,145,000
1,628,100,145,000
NONE
null
Hi, I am running this example from the transformers library, version 4.3.3 (the full documentation is at https://github.com/huggingface/transformers/issues/8771, but the command below should work out of the box): USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir (the script itself is here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) If I do not pass max_train_samples in the above command and instead load the full dataset, I get a memory issue on a GPU with 24 gigabytes of memory. I need to train a large-scale mt5 model on large-scale Wikipedia datasets (several of them concatenated, or other multilingual datasets such as OPUS). Could you help me avoid loading the full data into memory, so that the script does not depend on the dataset size? In the above example, I was hoping the script could work without relying on the dataset size, so I can still train the model without subsampling the training set. Thank you so much @lhoestq for your great help in advance.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
https://api.github.com/repos/huggingface/datasets/issues/2005/events
https://github.com/huggingface/datasets/issues/2005
824,275,035
MDU6SXNzdWU4MjQyNzUwMzU=
2,005
Setting to torch format not working with torchvision and MNIST
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I get an output like this for the `image`:\r\n\r\n```\r\n[[tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor...\r\n```\r\nFor `label`, it works fine:\r\n```\r\ntensor([7, 6])\r\n```\r\nNote that I didn't specify conversion to torch tensors anywhere.\r\n\r\nBasically, there are two problems here:\r\n1. `dataset.map` doesn't return tensor type objects, even though it uses the transforms, the grayscale conversion in transform was done, but the output was lists only.\r\n2. The `DataLoader` performs its own conversion, which may be not desired.\r\n\r\nI understand that we can't change `DataLoader` because it is a torch functionality, however, is there a way we can handle image data to allow using it with torch `DataLoader` and `torchvision` properly?\r\n\r\nI think if the `image` was a torch tensor (N,H,W,C), or a list of torch tensors (H,W,C), before it is passed to `DataLoader`, then we might not face this issue. ", "What's the feature types of your new dataset after `.map` ?\r\n\r\nCan you try with adding `features=` in the `.map` call in order to set the \"image\" feature type to `Array2D` ?\r\nThe default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet", "Hi @lhoestq\r\n\r\nRaw feature types are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000 #(type, len)\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'int'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nInside the `prepare_feature` method with batch size 100000 , after processing, they are like this:\r\n\r\nInside Prepare Train Features\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter map, the feature type are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\n\r\nAfter dataloader with batch size 2, the batch features are like this:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n<hr>\r\n\r\nWhen I was setting the format of `train_dataset` to 'torch' after mapping - \r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nCorresponding DataLoader batch:\r\n```\r\nFrom DataLoader batch features\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nI will check with features and get back.\r\n\r\n\r\n\r\n", "Hi @lhoestq\r\n\r\n# Using Array3D\r\nI tried this:\r\n```python\r\nfeatures = datasets.Features({\r\n \"image\": 
datasets.Array3D(shape=(1,28,28),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nand it didn't fix the issue.\r\n\r\nDuring the `prepare_train_features:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter the `map`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nFrom the DataLoader batch:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\nIt is the same as before.\r\n\r\n---\r\n\r\nUsing `datasets.Sequence(datasets.Array2D(shape=(28,28),dtype=\"float32\"))` gave an error during `map`:\r\n\r\n```python\r\nArrowNotImplementedError Traceback (most recent call last)\r\n<ipython-input-95-d28e69289084> in <module>()\r\n 3 \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n 4 })\r\n----> 5 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n15 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in <dictcomp>(.0)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1307 fn_kwargs=fn_kwargs,\r\n 1308 new_fingerprint=new_fingerprint,\r\n-> 1309 update_data=update_data,\r\n 1310 )\r\n 1311 else:\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 202 }\r\n 203 # apply actual function\r\n--> 204 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 205 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 206 # re-apply format to the output\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 335 # Call actual function\r\n 336 \r\n--> 337 out = func(self, *args, **kwargs)\r\n 338 \r\n 339 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, 
new_fingerprint, rank, offset, update_data)\r\n 1580 if update_data:\r\n 1581 batch = cast_to_python_objects(batch)\r\n-> 1582 writer.write_batch(batch)\r\n 1583 if update_data:\r\n 1584 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 274 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 275 typed_sequence_examples[col] = typed_sequence\r\n--> 276 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 277 self.write_table(pa_table, writer_batch_size)\r\n 278 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)\r\n 95 out = pa.ExtensionArray.from_storage(type, pa.array(self.data, type.storage_dtype))\r\n 96 else:\r\n---> 97 out = pa.array(self.data, type=type)\r\n 98 if trying_type and out[0].as_py() != self.data[0]:\r\n 99 raise TypeError(\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: extension\r\n```", "# Convert raw tensors to torch format\r\nStrangely, converting to torch tensors works perfectly on `raw_dataset`:\r\n```python\r\nraw_dataset.set_format('torch',columns=['image','label'])\r\n```\r\nTypes:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nUsing this for transforms:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n examples[\"image\"][example_idx].numpy()\r\n ))\r\n else:\r\n images.append(examples[\"image\"][example_idx].numpy())\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n```\r\n\r\nInside `prepare_train_features`:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batch:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n\r\n## Using `torch` 
format:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batches:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n## Using the features - `Array3D`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter DataLoader `batch`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nThe last one works perfectly.\r\n\r\n![image](https://user-images.githubusercontent.com/29076344/110491452-4cf09c00-8117-11eb-8a47-73bf3fc0c3dc.png)\r\n\r\nI wonder why this worked, and others didn't.\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "Concluding, the way it works right now is:\r\n\r\n1. Converting raw dataset to `torch` format.\r\n2. Use the transform and apply using `map`, ensure the returned values are tensors. \r\n3. When mapping, use `features` with `image` being `Array3D` type.", "What the dataset returns depends on the feature type.\r\nFor a feature type that is Sequence(Sequence(Sequence(Value(\"uint8\")))), a dataset formatted as \"torch\" return lists of lists of tensors. This is because the lists lengths may vary.\r\nFor a feature type that is Array3D on the other hand it returns one tensor. This is because the size of the tensor is fixed and defined bu the Array3D type.", "Okay, that makes sense.\r\nRaw images are list of Array2D, hence we get a single tensor when `set_format` is used. But, why should I need to convert the raw images to `torch` format when `map` does this internally?\r\n\r\nUsing `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.", "I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved." ]
1,615,189,091,000
1,615,312,693,000
1,615,312,693,000
CONTRIBUTOR
null
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labels = [] for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform( np.array(examples["image"][example_idx], dtype=np.uint8) )) else: images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8))) labels.append(torch.tensor(examples["label"][example_idx])) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('mnist') train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000) train_dataset.set_format("torch",columns=["image","label"]) ``` After this, I check the type of the following: ```python print(type(train_dataset["train"]["label"])) print(type(train_dataset["train"]["image"][0])) ``` This leads to the following output: ```python <class 'torch.Tensor'> <class 'list'> ``` I use `torch.utils.DataLoader` for batches, the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor, why does the image not get converted? How can I fix this issue? Thanks, Gunjan EDIT: I just checked the shapes, and the types, `batch[image]` is a actually a list of list of tensors. Shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28). EDIT 2: Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, the conversion is working. However, the output of the `map` is a list of list of list of list.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2003/comments
https://api.github.com/repos/huggingface/datasets/issues/2003/events
https://github.com/huggingface/datasets/issues/2003
824,034,678
MDU6SXNzdWU4MjQwMzQ2Nzg=
2,003
Messages are being printed to the `stdout`
{ "login": "mahnerak", "id": 1367529, "node_id": "MDQ6VXNlcjEzNjc1Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mahnerak", "html_url": "https://github.com/mahnerak", "followers_url": "https://api.github.com/users/mahnerak/followers", "following_url": "https://api.github.com/users/mahnerak/following{/other_user}", "gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}", "starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions", "organizations_url": "https://api.github.com/users/mahnerak/orgs", "repos_url": "https://api.github.com/users/mahnerak/repos", "events_url": "https://api.github.com/users/mahnerak/events{/privacy}", "received_events_url": "https://api.github.com/users/mahnerak/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?", "@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you decided to output a message that is always shown. The only problem is that the message is printed to the `stdout`. For example, if the user runs `python run_glue.py > log_file`, it will redirect `stdout` to the file named `log_file`, and the message will not be shown to the user.\r\n\r\nInstead, we should print this message to `stderr`. Even in the case of `python run_glue.py > log_file` only `stdout` is being redirected and so the message is always shown." ]
1,615,154,974,000
1,615,830,467,000
null
NONE
null
In this code segment, we can see that some messages are being printed to `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, this is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`. In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration flag should be provided to explicitly prevent the package from contaminating stdout.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2003/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
https://api.github.com/repos/huggingface/datasets/issues/2001/events
https://github.com/huggingface/datasets/issues/2001
823,946,706
MDU6SXNzdWU4MjM5NDY3MDY=
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
{ "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "repos_url": "https://api.github.com/users/donggyukimc/repos", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,131,695,000
1,615,960,261,000
1,615,960,261,000
NONE
null
In the original KILT benchmark(https://github.com/facebookresearch/KILT), all samples has its evidence document (i.e. wikipedia page id) for prediction. For example, a sample in ELI5 dataset has the format including provenance (=evidence document) like this `{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}` However, KILT ELI5 dataset from huggingface datasets library only contain empty list of provenance. `{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]} ` should i perform other procedure to obtain evidence documents?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2000/comments
https://api.github.com/repos/huggingface/datasets/issues/2000/events
https://github.com/huggingface/datasets/issues/2000
823,899,910
MDU6SXNzdWU4MjM4OTk5MTA=
2,000
Windows Permission Error (most recent version of datasets)
{ "login": "itsLuisa", "id": 73881148, "node_id": "MDQ6VXNlcjczODgxMTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itsLuisa", "html_url": "https://github.com/itsLuisa", "followers_url": "https://api.github.com/users/itsLuisa/followers", "following_url": "https://api.github.com/users/itsLuisa/following{/other_user}", "gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}", "starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions", "organizations_url": "https://api.github.com/users/itsLuisa/orgs", "repos_url": "https://api.github.com/users/itsLuisa/repos", "events_url": "https://api.github.com/users/itsLuisa/events{/privacy}", "received_events_url": "https://api.github.com/users/itsLuisa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ", "Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 537, in incomplete_dir\r\n yield tmp_dir\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 656, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 982, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 297, in finalize\r\n self.write_on_file()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 230, in write_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow\\array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 97, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\\array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\\error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\\error.pxi\", line 107, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Expected bytes, got a 'list' object\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 122, in <module>\r\n main()\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 111, in main\r\n dataset = datasets.load_dataset(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 586, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 543, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File 
\"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 616, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\\\Users\\\\Luisa\\\\.cache\\\\huggingface\\\\datasets\\\\sample\\\\default-20ee7d51a6a9454f\\\\0.0.0\\\\5fc4c3a355ea77ab446bd31fca5082437600b8364d29b2b95264048bd1f398b1.incomplete\\\\sample-train.arrow'\r\n\r\nProcess finished with exit code 1\r\n```", "Hi @itsLuisa, thanks for sharing the Traceback.\r\n\r\nYou are defining the \"id\" field as a `string` feature:\r\n```python\r\nclass Sample(datasets.GeneratorBasedBuilder):\r\n ...\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n # ^^ here\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"pos_tags\": datasets.Sequence(datasets.features.ClassLabel(names=[...])),\r\n[...]\r\n```\r\n\r\nBut in the `_generate_examples`, the \"id\" field is a list:\r\n```python\r\nids = list()\r\n```\r\n\r\nChanging:\r\n```python\r\n\"id\": datasets.Value(\"string\"),\r\n```\r\nInto:\r\n```python\r\n\"id\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\nShould fix your issue.\r\n\r\nLet me know if this helps!", "It seems to be working now, thanks a lot for the help, @SBrandeis !", "Glad to hear it!\r\nI'm closing the issue" ]
1,615,118,128,000
1,615,293,777,000
1,615,293,777,000
NONE
null
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance! Luisa My script: ``` import datasets import csv logger = datasets.logging.get_logger(__name__) class SampleConfig(datasets.BuilderConfig): def __init__(self, **kwargs): super(SampleConfig, self).__init__(**kwargs) class Sample(datasets.GeneratorBasedBuilder): BUILDER_CONFIGS = [ SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"), ] def _info(self): return datasets.DatasetInfo( description="Dataset with words and their POS-Tags", features=datasets.Features( { "id": datasets.Value("string"), "tokens": datasets.Sequence(datasets.Value("string")), "pos_tags": datasets.Sequence( datasets.features.ClassLabel( names=[ "''", ",", "-LRB-", "-RRB-", ".", ":", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WRB", "``" ] ) ), } ), supervised_keys=None, homepage="https://catalog.ldc.upenn.edu/LDC2011T03", citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.", ) def _split_generators(self, dl_manager): loaded_files = dl_manager.download_and_extract(self.config.data_files) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}), datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}), datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]}) ] def _generate_examples(self, filepath): logger.info("generating examples from = %s", filepath) with open(filepath, encoding="cp1252") as f: data = csv.reader(f, delimiter="\t") ids = list() tokens = list() pos_tags = list() for id_, line in enumerate(data): #print(line) if len(line) == 1: if tokens: yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} ids = list() tokens = list() pos_tags = list() else: ids.append(line[0]) tokens.append(line[1]) pos_tags.append(line[2]) # last example yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} def main(): dataset = datasets.load_dataset( "data_loading.py", data_files={ "train": "train.tsv", "test": "test.tsv", "val": "val.tsv" } ) #print(dataset) if __name__=="__main__": main() ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2000/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1997/comments
https://api.github.com/repos/huggingface/datasets/issues/1997/events
https://github.com/huggingface/datasets/issues/1997
823,679,465
MDU6SXNzdWU4MjM2Nzk0NjU=
1,997
from datasets import MoleculeDataset, GEOMDataset
{ "login": "futianfan", "id": 5087210, "node_id": "MDQ6VXNlcjUwODcyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/futianfan", "html_url": "https://github.com/futianfan", "followers_url": "https://api.github.com/users/futianfan/followers", "following_url": "https://api.github.com/users/futianfan/following{/other_user}", "gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}", "starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/futianfan/subscriptions", "organizations_url": "https://api.github.com/users/futianfan/orgs", "repos_url": "https://api.github.com/users/futianfan/repos", "events_url": "https://api.github.com/users/futianfan/events{/privacy}", "received_events_url": "https://api.github.com/users/futianfan/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,615,045,819,000
1,615,047,206,000
1,615,047,206,000
NONE
null
I got the error `ImportError: cannot import name 'MoleculeDataset' from 'datasets'`. Has anyone run into a similar issue? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1997/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1996/comments
https://api.github.com/repos/huggingface/datasets/issues/1996/events
https://github.com/huggingface/datasets/issues/1996
823,573,410
MDU6SXNzdWU4MjM1NzM0MTA=
1,996
Error when exploring `arabic_speech_corpus`
{ "login": "elgeish", "id": 6879673, "node_id": "MDQ6VXNlcjY4Nzk2NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elgeish", "html_url": "https://github.com/elgeish", "followers_url": "https://api.github.com/users/elgeish/followers", "following_url": "https://api.github.com/users/elgeish/following{/other_user}", "gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}", "starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elgeish/subscriptions", "organizations_url": "https://api.github.com/users/elgeish/orgs", "repos_url": "https://api.github.com/users/elgeish/repos", "events_url": "https://api.github.com/users/elgeish/events{/privacy}", "received_events_url": "https://api.github.com/users/elgeish/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Thanks for reporting! We'll fix that as soon as possible", "Actually soundfile is not a dependency of this dataset.\r\nThe error comes from a bug that was fixed in this commit: https://github.com/huggingface/datasets/pull/1767/commits/c304e63629f4453367de2fd42883a78768055532\r\nBasically the library used to consider the `import soundfile` in the docstring as a dependency, while it's just here as a code example.\r\n\r\nUpdating the viewer to the latest version of `datasets` should fix this issue\r\n" ]
1,615,010,120,000
1,615,288,345,000
null
NONE
null
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error: ``` ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance' Traceback: File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 233, in <module> configs = get_confs(option) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs module_path = nlp.load.prepare_module(path, dataset=True File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1996/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1994/comments
https://api.github.com/repos/huggingface/datasets/issues/1994/events
https://github.com/huggingface/datasets/issues/1994
822,871,238
MDU6SXNzdWU4MjI4NzEyMzg=
1,994
not being able to get wikipedia es language
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en/de/fr currently works, but I need all the languages more or less. thanks ", "Hi @dorost1234, I think I can help you a little. I’ve processed some Wikipedia datasets (Spanish inclusive) using the HF/datasets library during recent research.\r\n\r\n@lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\n", "Thank you so much @jonatasgrosman , I greatly appreciate your help with them. \r\nYes, I unfortunately does not have access to a good resource and need it for my\r\nresearch. I greatly appreciate @lhoestq your help with uploading the processed datasets in huggingface datasets. This would be really helpful for some users like me with not access to high-memory GPU resources.\r\n\r\nthank you both so much again.\r\n\r\nOn Sat, Mar 6, 2021 at 12:55 AM Jonatas Grosman <notifications@github.com>\r\nwrote:\r\n\r\n> Hi @dorost1234 <https://github.com/dorost1234>, I think I can help you a\r\n> little. I’ve processed some Wikipedia datasets (Spanish inclusive) using\r\n> the HF/datasets library during recent research.\r\n>\r\n> @lhoestq <https://github.com/lhoestq> Could you help me to upload these\r\n> preprocessed datasets to Huggingface's repositories? To be more precise,\r\n> I've built datasets from the following languages using the 20201201 dumps:\r\n> Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish.\r\n> Process these datasets have high costs that most of the community can't\r\n> afford. I think these preprocessed datasets I have could be helpful for\r\n> someone without access to high-resource machines to process Wikipedia's\r\n> dumps like @dorost1234 <https://github.com/dorost1234>\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1994#issuecomment-791798195>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMWMK5GFJFU3ACCJFUDTCFVNZANCNFSM4YUZIF4A>\r\n> .\r\n>\r\n", "Hi @dorost1234, so sorry, but looking at my files here, I figure out that I've preprocessed files using the HF/datasets for all the languages previously listed by me (Portuguese, Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my tests I've used the [wikicorpus](https://www.cs.upc.edu/~nlp/wikicorpus/) instead).\r\n\r\nOnly with the Spanish Wikipedia's dump, I had the same `KeyError: '000nbsp'` problem already reported here https://github.com/huggingface/datasets/issues/577\r\n\r\nSo nowadays, even with access to a high resource machine, you couldn't be able to get Wikipedia's Spanish data using the HF/datasets :(\r\n\r\n\r\n\r\n\r\n", "Thanks a lot for the information and help. 
This would be great to have\nthese datasets.\n@lhoestq <https://github.com/lhoestq> Do you know a way I could get\nsmaller amount of these data like 1 GBtype of each language to deal with\ncomputatioanl requirements? thanks\n\nOn Sat, Mar 6, 2021 at 5:36 PM Jonatas Grosman <notifications@github.com>\nwrote:\n\n> Hi @dorost1234 <https://github.com/dorost1234>, so sorry, but looking at\n> my files here, I figure out that I've preprocessed files using the\n> HF/datasets for all the languages previously listed by me (Portuguese,\n> Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my\n> tests I've used the wikicorpus <https://www.cs.upc.edu/~nlp/wikicorpus/>\n> instead).\n>\n> Only with the Spanish Wikipedia's dump, I had the same KeyError: '000nbsp'\n> problem already reported here #577\n> <https://github.com/huggingface/datasets/issues/577>\n>\n> So nowadays, even with access to a high resource machine, you couldn't be\n> able to get Wikipedia's Spanish data using the HF/datasets :(\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/1994#issuecomment-791985546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMWMO7WOHWLOROPD6Q3TCJKXPANCNFSM4YUZIF4A>\n> .\n>\n", "Hi ! As mentioned above the Spanish configuration have parsing issues from `mwparserfromhell`. I haven't tested with the latest `mwparserfromhell` >=0.6 though. Which version of `mwparserfromhell` are you using ?\r\n\r\n> @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\nThat would be awesome ! Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\n> Do you know a way I could get smaller amount of these data like 1 GBtype of each language to deal with computatioanl requirements? thanks\r\n\r\nI'd suggest to copy the [wikipedia.py](https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py) to a new script `custom_wikipedia.py` and modify it to only download and process only a subset of the raw data files.\r\nYou can for example replace [this line](https://github.com/huggingface/datasets/blob/64e59fc45ca2134218b3e42e83fddddbe840ff74/datasets/wikipedia/wikipedia.py#L446) by:\r\n```python\r\n if total_bytes >= (1 << 30): # stop if the total amount of data is >= 1GB\r\n break\r\n else:\r\n xml_urls.append(_base_url(lang) + fname)\r\n```\r\n\r\nThen you can load your custom wikipedia dataset with\r\n```python\r\nload_dataset(\"path/to/my/custom_wikipedia.py\", f\"{date}.{language}\")\r\n```", "Hi @lhoestq!\r\n\r\n> Hi ! As mentioned above the Spanish configuration have parsing issues from mwparserfromhell. I haven't tested with the latest mwparserfromhell >=0.6 though. Which version of mwparserfromhell are you using ?\r\n\r\nI'm using the latest mwparserfromhell version (0.6)\r\n\r\n> That would be awesome ! 
Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\nI'll ping you there 👍 ", "Thank you so much @jonatasgrosman and @lhoestq this would be a great help. I am really thankful to you both and to wonderful Huggingface dataset library allowing us to train models at scale." ]
1,614,933,108,000
1,615,495,581,000
null
NONE
null
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset ignore_verifications=ignore_verifications, File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare "\n\t`{}`".format(usage_example) datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')` thanks @lhoestq for any suggestion/help
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1994/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1993/comments
https://api.github.com/repos/huggingface/datasets/issues/1993/events
https://github.com/huggingface/datasets/issues/1993
822,758,387
MDU6SXNzdWU4MjI3NTgzODc=
1,993
How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset", "Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to compute the embeddings in this use **load_from_disk**. \r\n\r\nThen finally save it. You can see the original dataset object (CSV after splitting also will be changed)\r\n\r\nOne more thing- when I save the dataset object with **save_to_disk** it name the arrow file with cache.... rather than using dataset. arrow. Can you add a variable that we can feed a name to save_to_disk function?", "@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. ", "I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.\r\nBut from your last message it looks like save_to_disk isn't the root cause right ?", "ok, one more thing. When we use save_to_disk there are two files other than .arrow. dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right? ", "> Anyways I think load_from_disk uses the arrow files mentioned in state.json right?\r\n\r\nYes exactly", "Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!" ]
1,614,921,950,000
1,616,385,950,000
1,616,385,950,000
NONE
null
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original dataset which is already in the disk also gets updated. I do not want to update it. How to prevent from this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1993/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1992/comments
https://api.github.com/repos/huggingface/datasets/issues/1992/events
https://github.com/huggingface/datasets/issues/1992
822,672,238
MDU6SXNzdWU4MjI2NzIyMzg=
1,992
`datasets.map` multi processing much slower than single processing
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.", "I see that many people are experiencing the same issue. Is this problem considered an \"official\" bug that is worth a closer look? @lhoestq", "Yes this is an official bug. On my side I haven't managed to reproduce it but @theo-m has. We'll investigate this !", "Thank you for the reply! I would be happy to follow the discussions related to the issue.\r\nIf you do not mind, could you also give a little more explanation on my p.s.2? I am having a hard time figuring out why the single processing `map` uses all of my cores.\r\n@lhoestq @theo-m ", "Regarding your ps2: It depends what function you pass to `map`.\r\nFor example, fast tokenizers from `transformers` in Rust tokenize texts and parallelize the tokenization over all the cores.", "I am still experiencing this issue with datasets 1.9.0..\r\nHas there been a further investigation? \r\n<img width=\"442\" alt=\"image\" src=\"https://user-images.githubusercontent.com/29157715/126143387-8b5ddca2-a896-4e18-abf7-4fbf62a48b41.png\">\r\n" ]
1,614,910,202,000
1,626,689,109,000
null
NONE
null
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tokenizer. I noticed that `map` function with `num_proc=mp.cpu_count() //2` takes more than 20 hours to finish the job where as `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126G of RAM. There were no other jobs when `map` function was running. What could be the reason? I would be happy to provide information necessary to spot the reason. p.s. I was experiencing the imbalance issue mentioned in [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when I was using multi processing. p.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work. ![Screen Shot 2021-03-05 at 11 04 59](https://user-images.githubusercontent.com/29157715/110056895-ef6cf000-7da2-11eb-8307-6698e9fb1ad4.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1992/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1990/comments
https://api.github.com/repos/huggingface/datasets/issues/1990/events
https://github.com/huggingface/datasets/issues/1990
822,384,502
MDU6SXNzdWU4MjIzODQ1MDI=
1,990
OSError: Memory mapping file failed: Cannot allocate memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you", "It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load large dataset files without filling up memory.\r\n\r\nWhat dataset did you use to get this error ?\r\nOn what OS are you running ? What's your python and pyarrow version ?", "Dear @lhoestq \r\nthank you so much for coming back to me. Please find info below:\r\n1) Dataset name: I used wikipedia with config 20200501.en\r\n2) I got these pyarrow in my environment:\r\npyarrow 2.0.0 <pip>\r\npyarrow 3.0.0 <pip>\r\n\r\n3) python version 3.7.10\r\n4) OS version \r\n\r\nlsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tDebian\r\nDescription:\tDebian GNU/Linux 10 (buster)\r\nRelease:\t10\r\nCodename:\tbuster\r\n\r\n\r\nIs there a way I could solve the memory issue and if I could run this model, I am using GeForce GTX 108, \r\nthanks \r\n", "I noticed that the error happens when loading the validation dataset.\r\nWhat value of `data_args.validation_split_percentage` did you use ?", "Dear @lhoestq \r\n\r\nthank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid this?\r\n\r\nThank you very much for the great help.\r\n\r\n\r\nOn Mon, Mar 8, 2021 at 11:28 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> I noticed that the error happens when loading the validation dataset.\r\n> What value of data_args.validation_split_percentage did you use ?\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1990#issuecomment-792655644>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMS337ZUJ7HGGVVCCR3TCSREFANCNFSM4YTYAQ2A>\r\n> .\r\n>\r\n", "Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory. \r\nThe only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset[\"my_column_names\"]`.\r\n\r\nBut it's possible that trying to use those methods to build your validation set doesn't fix the issue since, if I understand correctly, the error happens when when the dataset arrow file is opened (just before the 5% percentage is applied).\r\n\r\nDid you try to reproduce this issue in a google colab ? This would be super helpful to investigate why this happened.\r\n\r\nAlso maybe you can try clearing your cache at `~/.cache/huggingface/datasets` and try again. If the arrow file was corrupted somehow, removing it and rebuilding may fix the issue." ]
1,614,882,118,000
1,628,100,265,000
1,628,100,265,000
NONE
null
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128 ``` I am using transformer version: 4.3.2 But I got memory erorr using this dataset, is there a way I could save on memory with dataset library with wikipedia dataset? Specially I need to train a model with multiple of wikipedia datasets concatenated. thank you very much @lhoestq for your help and suggestions: ``` File "run_mlm.py", line 441, in <module> main() File "run_mlm.py", line 233, in main split=f"train[{data_args.validation_split_percentage}%:]", File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset map_tuple=True, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table stream = stream_from(filename) File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1990/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1989/comments
https://api.github.com/repos/huggingface/datasets/issues/1989/events
https://github.com/huggingface/datasets/issues/1989
822,328,147
MDU6SXNzdWU4MjIzMjgxNDc=
1,989
Question/problem with dataset labels
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "It seems that I get parsing errors for various fields in my data. For example now I get this:\r\n```\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 523, in <module>\r\n main()\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 249, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py\", line 572, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py\", line 650, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py\", line 1028, in _prepare_split\r\n writer.write_table(table)\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 292, in write_table\r\n pa_table = pa_table.cast(self._schema)\r\n File \"pyarrow/table.pxi\", line 1311, in pyarrow.lib.Table.cast\r\n File \"pyarrow/table.pxi\", line 265, in pyarrow.lib.ChunkedArray.cast\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py\", line 87, in cast\r\n return call_function(\"cast\", [arr], options)\r\n File \"pyarrow/_compute.pyx\", line 298, in pyarrow._compute.call_function\r\n File \"pyarrow/_compute.pyx\", line 192, in pyarrow._compute.Function.call\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Failed to parse string: https://www.netgalley.com/catalog/book/121872\r\n```", "Not sure if this helps, this is how I load my files (as in the sample scripts on transformers):\r\n\r\n```\r\n if data_args.train_file.endswith(\".csv\"):\r\n # Loading a dataset from local csv files\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n```", "Since this worked out of the box in a few examples before, I wonder if it's some quoting issue or something else. ", "Hi @ioana-blue,\r\nCan you share a sample from your .csv? A dummy where you get this error will also help.\r\n\r\nI tried this csv:\r\n```csv\r\nfeature,label\r\n1.2,not nurse\r\n1.3,nurse\r\n1.5,surgeon\r\n```\r\nand the following snippet:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"csv\",data_files=['test.csv'])\r\n\r\nprint(d)\r\nprint(d['train']['label'])\r\n```\r\nand this works perfectly fine for me:\r\n```sh\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['feature', 'label'],\r\n num_rows: 3\r\n })\r\n})\r\n['not nurse', 'nurse', 'surgeon']\r\n```\r\nI'm sure your csv is more complicated than this one. But it is hard to tell where the issue might be without looking at a sample.", "I've had versions where it worked fain. For this dataset, I had all kind of parsing issues that I couldn't understand. What I ended up doing is strip all the columns that I didn't need and also make the label 0/1. \r\n\r\nI think one line that may have caused a problem was the csv version of this:\r\n\r\n```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz Rose Blakey is an aspiring journalist. 
She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job. ^M ('Rose', '', 'Blakey') journalist F 38 journalist https://www.netgalley.com/catalog/book/121872 _ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```\r\n\r\nThe error I got in this case is this one: https://github.com/huggingface/datasets/issues/1989#issuecomment-790842771\r\n\r\nNote, this line was part of a much larger file and until this line I guess it was working fine. ", "Hi @ioana-blue,\r\n\r\nWhat is the separator you're using for the csv? I see there are only two commas in the given line, but they don't seem like appropriate points. Also, is this a string part of one line, or an entire line? There should also be a label, right?", "Sorry for the confusion, the sample above was from a tsv that was used to derive the csv. Let me construct the csv again (I had remove it). \r\n\r\nThis is the line in the csv - this is the whole line:\r\n```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,\"('Rose', '', 'Blakey')\",journalist,F,38,journalist,https://www.netgalley.com/catalog/book/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```", "Hi,\r\nJust in case you want to use tsv directly, you can use the separator argument while loading the dataset.\r\n```python\r\nd = load_dataset(\"csv\",data_files=['test.csv'],sep=\"\\t\")\r\n```\r\n\r\nAdditionally, I don't face the issues with the following csv (same as the one you provided):\r\n\r\n```sh\r\nlink1,text1,info1,info2,info3,info4,info5,link2,text2,text3\r\ncrawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,\"('Rose', '', 'Blakey')\",journalist,F,38,journalist,https://www.netgalley.com/catalog/book/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.\r\n```\r\nOutput after loading:\r\n```sh\r\n{'link1': 'crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz', 'text1': 'Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. 
Rejection after rejection mean she is stuck in what she sees as a dead', 'info1': \"('Rose', '', 'Blakey')\", 'info2': 'journalist', 'info3': 'F', 'info4': 38, 'info5': 'journalist', 'link2': 'https://www.netgalley.com/catalog/book/121872', 'text2': '_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job.', 'text3': ' She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.'}\r\n```\r\nCan you check once if the tsv works for you directly using the separator argument? The conversion from tsv to csv could create issues, I'm only guessing though.", "thanks for the tip. very strange :/ I'll check my datasets version as well. \r\n\r\nI will have more similar experiments soon so I'll let you know if I manage to get rid of this. ", "No problem at all. I thought I'd be able to solve this but I'm unable to replicate the issue :/" ]
1,614,877,613,000
1,615,455,855,000
null
NONE
null
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module> main() File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main datasets = load_dataset("csv", data_files=data_files) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split writer.write_table(table) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table pa_table = pa_table.cast(self._schema) File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Failed to parse string: not nurse ``` Any ideas how to fix this? For now, I'll probably make them numeric.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1989/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1988/comments
https://api.github.com/repos/huggingface/datasets/issues/1988/events
https://github.com/huggingface/datasets/issues/1988
822,324,605
MDU6SXNzdWU4MjIzMjQ2MDU=
1,988
Readme.md is misleading about kinds of datasets?
{ "login": "surak", "id": 878399, "node_id": "MDQ6VXNlcjg3ODM5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surak", "html_url": "https://github.com/surak", "followers_url": "https://api.github.com/users/surak/followers", "following_url": "https://api.github.com/users/surak/following{/other_user}", "gists_url": "https://api.github.com/users/surak/gists{/gist_id}", "starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surak/subscriptions", "organizations_url": "https://api.github.com/users/surak/orgs", "repos_url": "https://api.github.com/users/surak/repos", "events_url": "https://api.github.com/users/surak/events{/privacy}", "received_events_url": "https://api.github.com/users/surak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)" ]
1,614,877,460,000
1,628,100,323,000
1,628,100,323,000
NONE
null
Hi! At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. " But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 You mention other kinds of datasets, with images and so on. I'm confused. Is it possible to use it to store, say, imagenet locally?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1988/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1987/comments
https://api.github.com/repos/huggingface/datasets/issues/1987/events
https://github.com/huggingface/datasets/issues/1987
822,308,956
MDU6SXNzdWU4MjIzMDg5NTY=
1,987
wmt15 is broken
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,614,876,385,000
1,614,876,385,000
null
CONTRIBUTOR
null
While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken: ``` python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")' Downloading: 2.91kB [00:00, 818kB/s] Downloading: 3.02kB [00:00, 897kB/s] Downloading: 41.1kB [00:00, 19.1MB/s] Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f... Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested mapped = [ File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1987/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1986/comments
https://api.github.com/repos/huggingface/datasets/issues/1986/events
https://github.com/huggingface/datasets/issues/1986
822,176,290
MDU6SXNzdWU4MjIxNzYyOTA=
1,986
wmt datasets fail to load
{ "login": "sabania", "id": 32322564, "node_id": "MDQ6VXNlcjMyMzIyNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/32322564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sabania", "html_url": "https://github.com/sabania", "followers_url": "https://api.github.com/users/sabania/followers", "following_url": "https://api.github.com/users/sabania/following{/other_user}", "gists_url": "https://api.github.com/users/sabania/gists{/gist_id}", "starred_url": "https://api.github.com/users/sabania/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sabania/subscriptions", "organizations_url": "https://api.github.com/users/sabania/orgs", "repos_url": "https://api.github.com/users/sabania/repos", "events_url": "https://api.github.com/users/sabania/events{/privacy}", "received_events_url": "https://api.github.com/users/sabania/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "caching issue, seems to work again.." ]
1,614,867,535,000
1,614,868,267,000
1,614,868,267,000
NONE
null
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager) 758 # Extract manually downloaded files. 759 manual_files = dl_manager.extract(manual_paths_dict) --> 760 extraction_map = dict(downloaded_files, **manual_files) 761 762 for language in self.config.language_pair: TypeError: type object argument after ** must be a mapping, not list
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1986/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1984/comments
https://api.github.com/repos/huggingface/datasets/issues/1984/events
https://github.com/huggingface/datasets/issues/1984
821,816,588
MDU6SXNzdWU4MjE4MTY1ODg=
1,984
Add tests for WMT datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,614,840,402,000
1,614,840,402,000
null
MEMBER
null
As requested in #1981, we need tests for WMT datasets, using dummy data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1984/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1984/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1983/comments
https://api.github.com/repos/huggingface/datasets/issues/1983/events
https://github.com/huggingface/datasets/issues/1983
821,746,008
MDU6SXNzdWU4MjE3NDYwMDg=
1,983
The size of CoNLL-2003 is not consistant with the official release.
{ "login": "h-peng17", "id": 39556019, "node_id": "MDQ6VXNlcjM5NTU2MDE5", "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h-peng17", "html_url": "https://github.com/h-peng17", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "repos_url": "https://api.github.com/users/h-peng17/repos", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi,\r\n\r\nif you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in our implementation.\r\n\r\n@lhoestq What do you think about including these lines? ([Link](https://github.com/flairNLP/flair/issues/1097) to a similar issue in the flairNLP repo)", "We should mention in the Conll2003 dataset card that these lines have been removed indeed.\r\n\r\nIf some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.\r\n\r\nBut IMO the default config should stay the current one (without the `-DOCSTART-` stuff), so that you can directly train NER models without additional preprocessing. Let me know what you think", "@lhoestq Yes, I agree adding a small note should be sufficient.\r\n\r\nCurrently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.", "I added a mention of this in conll2003's dataset card:\r\nhttps://github.com/huggingface/datasets/blob/fc9796920da88486c3b97690969aabf03d6b4088/datasets/conll2003/README.md#conll2003\r\n\r\nEdit: just saw your PR @mariosasko (noticed it too late ^^)\r\nLet me take a look at it :)" ]
1,614,832,894,000
1,615,220,665,000
null
NONE
null
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions. The statistics of conll-2003 in this repo is : \#train 14041 \#dev 3250 \#test 3453 While the official statistics is: \#train 14987 \#dev 3466 \#test 3684 Wish for your reply~
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1983/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1981/comments
https://api.github.com/repos/huggingface/datasets/issues/1981/events
https://github.com/huggingface/datasets/issues/1981
821,411,109
MDU6SXNzdWU4MjE0MTExMDk=
1,981
wmt datasets fail to load
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@stas00 Mea culpa... May I fix this tomorrow morning?", "yes, of course, I reverted to the version before that and it works ;)\r\n\r\nbut since a new release was just made you will probably need to make a hotfix.\r\n\r\nand add the wmt to the tests?", "Sure, I will implement a regression test!", "@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?", "I'll do a patch release for this issue early tomorrow.\r\n\r\nAnd yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)", "still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n758 # Extract manually downloaded files.\r\n759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n761\r\n762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list" ]
1,614,799,299,000
1,614,867,407,000
1,614,811,716,000
CONTRIBUTOR
null
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e... Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators extraction_map = dict(downloaded_files, **manual_files) ``` it worked fine recently. same problem if I try wmt16. git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1981/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1977/comments
https://api.github.com/repos/huggingface/datasets/issues/1977/events
https://github.com/huggingface/datasets/issues/1977
820,312,022
MDU6SXNzdWU4MjAzMTIwMjI=
1,977
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I sometimes also get this error with other languages of the same dataset:\r\n\r\n File \"/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n\r\n@lhoestq \r\n", "Hi ! Thanks for reporting\r\nSome wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data.\r\n\r\nOn the other hand regarding your second issue\r\n```\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```\r\nI've never experienced this, can you open a new issue for this specific error and provide more details please ?\r\nFor example what script did you use to get this, what language did you use, what's your environment details (os, python version, pyarrow version).." ]
1,614,712,888,000
1,614,766,660,000
null
NONE
null
Hi I am trying to run run_mlm.py code [1] of huggingface with following "wikipedia"/ "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_length 256 ` I am getting this error, but as per documentation, huggingface dataset provide processed version of this dataset and users can load it without requiring setup extra settings for apache-beam. could you help me please to load this dataset? Do you think I can run run_ml.py with this dataset? or anyway I could subsample and train the model? I greatly appreciate providing the processed version of all languages for this dataset, which allow the user to use them without setting up apache-beam,. thanks I really appreciate your help. @lhoestq thanks. [1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py error I get: ``` >>> import datasets >>> datasets.load_dataset("wikipedia", "20200501.aa") Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /dara/temp/cache_home_2/datasets/wikipedia/20200501.aa/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 1099, in _download_and_prepare import apache_beam as beam ModuleNotFoundError: No module named 'apache_beam' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1977/timeline
null
null
null
false
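The fix suggested in the thread above amounts to installing the Beam dependencies and letting the builder run the pipeline locally. The sketch below is an illustration under assumptions, not an official recipe: it assumes the `20200501.aa` config has no preprocessed dump and therefore must be built with Apache Beam, and that `mwparserfromhell` is also needed to parse the raw dump.

```python
# Hypothetical workaround for the ModuleNotFoundError above.
# Assumes: pip install apache-beam mwparserfromhell
from datasets import load_dataset

# DirectRunner executes the Beam pipeline on the local machine; large language
# dumps may need a lot of RAM or a distributed runner instead.
wiki = load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
print(wiki["train"][0]["title"])
```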
https://api.github.com/repos/huggingface/datasets/issues/1973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1973/comments
https://api.github.com/repos/huggingface/datasets/issues/1973/events
https://github.com/huggingface/datasets/issues/1973
820,077,312
MDU6SXNzdWU4MjAwNzczMTI=
1,973
Question: what gets stored in the datasets cache and why is it so huge?
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.", "Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.", "Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ", "And to clarify, it's not memory, it's disk space. Thank you!", "Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```", "Thanks for the tip, this is useful. ", "Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.", "Thank you!" ]
1,614,695,753,000
1,617,113,039,000
1,615,887,840,000
NONE
null
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G, which seems really large. What gets stored in there and why is it so large? I don't think I noticed this problem before; it seems to be related to the new version of the datasets library. Any insight? Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1973/timeline
null
null
null
false
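The two options discussed in the thread can be combined in a few lines. This is a minimal sketch assuming datasets >= 1.5; the `imdb` dataset and the derived column are purely illustrative.

```python
import datasets
from datasets import load_dataset

# Option 1: turn off caching so map/filter results are kept in memory
# instead of being written to the on-disk cache.
datasets.set_caching_enabled(False)

ds = load_dataset("imdb", split="train")
ds = ds.map(lambda x: {"n_chars": len(x["text"])})

# Option 2: keep caching on (comment out the line above) and delete the
# intermediate Arrow files once the job is finished.
removed = ds.cleanup_cache_files()
print(f"removed {removed} cache file(s)")
```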
https://api.github.com/repos/huggingface/datasets/issues/1972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1972/comments
https://api.github.com/repos/huggingface/datasets/issues/1972/events
https://github.com/huggingface/datasets/issues/1972
819,752,761
MDU6SXNzdWU4MTk3NTI3NjE=
1,972
'Dataset' object has no attribute 'rename_column'
{ "login": "farooqzaman1", "id": 23195502, "node_id": "MDQ6VXNlcjIzMTk1NTAy", "avatar_url": "https://avatars.githubusercontent.com/u/23195502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/farooqzaman1", "html_url": "https://github.com/farooqzaman1", "followers_url": "https://api.github.com/users/farooqzaman1/followers", "following_url": "https://api.github.com/users/farooqzaman1/following{/other_user}", "gists_url": "https://api.github.com/users/farooqzaman1/gists{/gist_id}", "starred_url": "https://api.github.com/users/farooqzaman1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/farooqzaman1/subscriptions", "organizations_url": "https://api.github.com/users/farooqzaman1/orgs", "repos_url": "https://api.github.com/users/farooqzaman1/repos", "events_url": "https://api.github.com/users/farooqzaman1/events{/privacy}", "received_events_url": "https://api.github.com/users/farooqzaman1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! `rename_column` has been added recently and will be available in the next release" ]
1,614,672,109,000
1,614,690,483,000
null
NONE
null
'Dataset' object has no attribute 'rename_column'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1972/timeline
null
null
null
false
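Since `rename_column` only landed in a later release, a sketch of both routes may help; it assumes `imdb` as a stand-in dataset and that the fallback for older releases goes through `map` with `remove_columns`.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # illustrative dataset

# Releases that already ship the method:
ds_renamed = ds.rename_column("label", "labels")

# Fallback for older releases: copy the column under the new name, then drop the old one.
ds_renamed_old = ds.map(lambda x: {"labels": x["label"]}, remove_columns=["label"])
```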
https://api.github.com/repos/huggingface/datasets/issues/1965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1965/comments
https://api.github.com/repos/huggingface/datasets/issues/1965/events
https://github.com/huggingface/datasets/issues/1965
818,833,460
MDU6SXNzdWU4MTg4MzM0NjA=
1,965
Can we parallelized the add_faiss_index process over dataset shards ?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n", "Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.", "@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. " ]
1,614,602,854,000
1,614,886,856,000
1,614,886,842,000
NONE
null
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them (with dataset.concatenate) before saving the faiss.index file? I feel that, theoretically, this will reduce the accuracy of retrieval since it affects the indexing process. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1965/timeline
null
null
null
false
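Given the conclusion above (faiss already multithreads internally), a single-process `add_faiss_index` is usually enough. The sketch below is illustrative only: it builds a toy embedding column and assumes `faiss-cpu` is installed; the thread count and dimensions are arbitrary.

```python
import numpy as np
import faiss
from datasets import Dataset

# Toy dataset with a 128-dim float32 "embeddings" column.
embeddings = np.random.rand(1000, 128).astype("float32")
ds = Dataset.from_dict({"embeddings": embeddings.tolist()})

# faiss parallelizes index construction internally; this only caps the thread count.
faiss.omp_set_num_threads(8)

# Build the index in the main process and query it with one vector.
ds.add_faiss_index(column="embeddings")
scores, retrieved = ds.get_nearest_examples("embeddings", embeddings[0], k=5)
print(scores)
```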
https://api.github.com/repos/huggingface/datasets/issues/1964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1964/comments
https://api.github.com/repos/huggingface/datasets/issues/1964/events
https://github.com/huggingface/datasets/issues/1964
818,624,864
MDU6SXNzdWU4MTg2MjQ4NjQ=
1,964
Datasets.py function load_dataset does not match squad dataset
{ "login": "LeopoldACC", "id": 44536699, "node_id": "MDQ6VXNlcjQ0NTM2Njk5", "avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeopoldACC", "html_url": "https://github.com/LeopoldACC", "followers_url": "https://api.github.com/users/LeopoldACC/followers", "following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}", "gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions", "organizations_url": "https://api.github.com/users/LeopoldACC/orgs", "repos_url": "https://api.github.com/users/LeopoldACC/repos", "events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}", "received_events_url": "https://api.github.com/users/LeopoldACC/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi !\r\n\r\nTo fix 1, an you try to run this code ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"squad\", download_mode=\"force_redownload\")\r\n```\r\nMaybe the file your downloaded was corrupted, in this case redownloading this way should fix your issue 1.\r\n\r\nRegarding your 2nd point, you're right that loading the raw json this way doesn't give you a dataset with the column \"context\", \"question\" and \"answers\". Indeed the squad format is a very nested format so you have to preprocess the data. You can do it this way:\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n out = {\"context\": [], \"question\": [], \"answers\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n return out\r\n\r\ndatasets = load_dataset(extension, data_files=data_files, field=\"data\")\r\ncolumn_names = datasets[\"train\"].column_names\r\n\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n```\r\n\r\nHope that helps :)", "Thks for quickly answering!\r\n### 1 I try the first way,but seems not work \r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 503, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 218, in main\r\n datasets = load_dataset(data_args.dataset_name, download_mode=\"force_redownload\")\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 633, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']\r\n```\r\n### 2 I try the second way,and run the examples/question-answering/run_qa.py,it lead to another bug orz..\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 523, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 379, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1120, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File 
\"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1091, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"examples/question-answering/run_qa.py\", line 339, in prepare_train_features\r\n if len(answers[\"answer_start\"]) == 0:\r\nTypeError: list indices must be integers or slices, not str\r\n```\r\n## may be the function prepare_train_features in run_qa.py need to fix,I think is that the prep\r\n```python\r\nfor i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n print(examples,answers)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n``` ", "## I have fixed it, @lhoestq \r\n### the first section change as you said and add [\"id\"]\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n # print(examples)\r\n out = {\"context\": [], \"question\": [], \"answers\":[],\"id\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n 
out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n out[\"id\"].append(qa[\"id\"]) \r\n return out\r\ncolumn_names = datasets[\"train\"].column_names if training_args.do_train else datasets[\"validation\"].column_names\r\n# print(datasets[\"train\"].column_names)\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n# Preprocessing the datasets.\r\n# Preprocessing is slighlty different for training and evaluation.\r\nif training_args.do_train:\r\n column_names = datasets[\"train\"].column_names\r\nelse:\r\n column_names = datasets[\"validation\"].column_names\r\n# print(column_names)\r\nquestion_column_name = \"question\" if \"question\" in column_names else column_names[0]\r\ncontext_column_name = \"context\" if \"context\" in column_names else column_names[1]\r\nanswer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\n```\r\n### the second section\r\n```python\r\ndef prepare_train_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[question_column_name if pad_on_right else context_column_name],\r\n examples[context_column_name if pad_on_right else question_column_name],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=data_args.max_seq_length,\r\n stride=data_args.doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\" if data_args.pad_to_max_length else False,\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position in the original context. 
This will\r\n # help us compute the start_positions and end_positions.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n # Let's label those examples!\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n # print(examples,answers,offset_mapping,tokenized_examples)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers) == 0:#len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[0][\"answer_start\"]\r\n end_char = start_char + len(answers[0][\"text\"])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n return tokenized_examples\r\n```", "I'm glad you managed to fix run_qa.py for your case :)\r\n\r\nRegarding the checksum error, I'm not able to reproduce on my side.\r\nThis errors says that the downloaded file doesn't match the expected file.\r\n\r\nCould you try running this and let me know if you get the same output as me ?\r\n```python\r\nfrom datasets.utils.info_utils import get_size_checksum_dict\r\nfrom datasets import cached_path\r\n\r\nget_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\n# {'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```", "I run the code,and it show below:\r\n```\r\n>>> from datasets.utils.info_utils import get_size_checksum_dict\r\n>>> from datasets import cached_path\r\n>>> 
get_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\nDownloading: 30.3MB [04:13, 120kB/s]\r\n{'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```", "Alright ! So in this case redownloading the file with `download_mode=\"force_redownload\"` should fix it. Can you try using `download_mode=\"force_redownload\"` again ?\r\n\r\nNot sure why it didn't work for you the first time though :/" ]
1,614,588,091,000
1,614,870,566,000
null
NONE
null
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad ``` the bug is that: ``` Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7... Traceback (most recent call last): File "examples/question-answering/run_qa.py", line 501, in <module> main() File "examples/question-answering/run_qa.py", line 217, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json'] ``` And I try to find the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json) ,is the problem plain_text do not have a checksum? ### 2 When I try to train lxmert,and use local dataset: ``` python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad ``` The bug is that ``` ['title', 'paragraphs'] Traceback (most recent call last): File "examples/question-answering/run_qa.py", line 501, in <module> main() File "examples/question-answering/run_qa.py", line 273, in main answer_column_name = "answers" if "answers" in column_names else column_names[2] IndexError: list index out of range ``` I print the answer_column_name and find that local squad dataset need the package datasets to preprocessing so that the code below can work: ``` if training_args.do_train: column_names = datasets["train"].column_names else: column_names = datasets["validation"].column_names print(datasets["train"].column_names) question_column_name = "question" if "question" in column_names else column_names[0] context_column_name = "context" if "context" in column_names else column_names[1] answer_column_name = "answers" if "answers" in column_names else column_names[2] ``` ## Please tell me how to fix the bug,thks a lot!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1964/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1963/comments
https://api.github.com/repos/huggingface/datasets/issues/1963/events
https://github.com/huggingface/datasets/issues/1963
818,289,967
MDU6SXNzdWU4MTgyODk5Njc=
1,963
bug in SNLI dataset
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset.\r\nFeel free to remove these examples if you don't need them by using\r\n```python\r\ndata = data.filter(lambda x: x[\"label\"] != -1)\r\n```" ]
1,614,540,980,000
1,614,600,089,000
null
NONE
null
Hi, there are labels of -1 in the train set of the SNLI dataset; please find the code below: ``` import numpy as np import datasets data = datasets.load_dataset("snli")["train"] labels = [] for d in data: labels.append(d["label"]) print(np.unique(labels)) ``` and the results: `[-1 0 1 2]` version of datasets used: `datasets 1.2.1 <pip> ` thanks for your help. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1963/timeline
null
null
null
false
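Putting the snippet from the issue together with the suggested filter gives a short end-to-end check; the expected output is an assumption based on the explanation above (-1 marks examples without a gold label).

```python
import numpy as np
from datasets import load_dataset

data = load_dataset("snli", split="train")

# Drop the unlabeled examples (label == -1) before training.
data = data.filter(lambda x: x["label"] != -1)

print(np.unique(data["label"]))  # expected: [0 1 2]
```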
https://api.github.com/repos/huggingface/datasets/issues/1959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1959/comments
https://api.github.com/repos/huggingface/datasets/issues/1959/events
https://github.com/huggingface/datasets/issues/1959
818,055,644
MDU6SXNzdWU4MTgwNTU2NDQ=
1,959
Bug in skip_rows argument of load_dataset function ?
{ "login": "LedaguenelArthur", "id": 73159756, "node_id": "MDQ6VXNlcjczMTU5NzU2", "avatar_url": "https://avatars.githubusercontent.com/u/73159756?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LedaguenelArthur", "html_url": "https://github.com/LedaguenelArthur", "followers_url": "https://api.github.com/users/LedaguenelArthur/followers", "following_url": "https://api.github.com/users/LedaguenelArthur/following{/other_user}", "gists_url": "https://api.github.com/users/LedaguenelArthur/gists{/gist_id}", "starred_url": "https://api.github.com/users/LedaguenelArthur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LedaguenelArthur/subscriptions", "organizations_url": "https://api.github.com/users/LedaguenelArthur/orgs", "repos_url": "https://api.github.com/users/LedaguenelArthur/repos", "events_url": "https://api.github.com/users/LedaguenelArthur/events{/privacy}", "received_events_url": "https://api.github.com/users/LedaguenelArthur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\ntry `skiprows` instead. This part is not properly documented in the docs it seems.\r\n\r\n@lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs." ]
1,614,468,774,000
1,615,285,292,000
1,615,285,292,000
NONE
null
Hello everyone, I'm quite new to Git so sorry in advance if I'm breaking some ground rules of issue posting... :/ I tried to use the load_dataset function from the Hugging Face datasets library on a csv file, using the skip_rows argument described on the Hugging Face page to skip the first row containing column names: `test_dataset = load_dataset('csv', data_files=['test_wLabel.tsv'], delimiter='\t', column_names=["id", "sentence", "label"], skip_rows=1)` But I got the following error message: `__init__() got an unexpected keyword argument 'skip_rows'` Have I used the wrong argument? Am I missing something or is this a bug? Thank you very much for your time, Best regards, Arthur
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1959/timeline
null
null
null
false
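Following the answer above, the call would look like the sketch below; it assumes a datasets release whose CSV builder forwards pandas-style keyword arguments (hence `skiprows` without an underscore), and reuses the file name from the issue.

```python
from datasets import load_dataset

test_dataset = load_dataset(
    "csv",
    data_files=["test_wLabel.tsv"],
    delimiter="\t",
    column_names=["id", "sentence", "label"],
    skiprows=1,  # pandas-style argument name, not skip_rows
)
```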
https://api.github.com/repos/huggingface/datasets/issues/1958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1958/comments
https://api.github.com/repos/huggingface/datasets/issues/1958/events
https://github.com/huggingface/datasets/issues/1958
818,037,548
MDU6SXNzdWU4MTgwMzc1NDg=
1,958
XSum dataset download link broken
{ "login": "himat", "id": 1156974, "node_id": "MDQ6VXNlcjExNTY5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1156974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/himat", "html_url": "https://github.com/himat", "followers_url": "https://api.github.com/users/himat/followers", "following_url": "https://api.github.com/users/himat/following{/other_user}", "gists_url": "https://api.github.com/users/himat/gists{/gist_id}", "starred_url": "https://api.github.com/users/himat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/himat/subscriptions", "organizations_url": "https://api.github.com/users/himat/orgs", "repos_url": "https://api.github.com/users/himat/repos", "events_url": "https://api.github.com/users/himat/events{/privacy}", "received_events_url": "https://api.github.com/users/himat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Never mind, I ran it again and it worked this time. Strange." ]
1,614,462,476,000
1,614,462,616,000
1,614,462,616,000
NONE
null
I did ``` from datasets import load_dataset dataset = load_dataset("xsum") ``` This returns `ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1958/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1957/comments
https://api.github.com/repos/huggingface/datasets/issues/1957/events
https://github.com/huggingface/datasets/issues/1957
818,014,624
MDU6SXNzdWU4MTgwMTQ2MjQ=
1,957
[request] make load_metric api intuitive
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,614,458,634,000
1,614,464,470,000
null
CONTRIBUTOR
null
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` May I suggest that `num_process` is confusing as it's singular yet expects a plural value and either * be deprecated in favor of `num_processes` which is more intuitive since it's plural as its expected value * or even better why not mimic the established dist environment convention for that purpose, which uses `world_size`. Same for `process_id` - why reinvent the naming and needing to explain that this is **NOT** `PID`, when we have `rank` already. That is: ``` metric = load_metric('glue', 'mrpc', world_size=world_size, rank=rank) ``` This then fits like a glove into the pytorch DDP and alike envs. and we just need to call: * `dist.get_world_size()` * `dist.get_rank()` So it'd be as simple as: ``` metric = load_metric('glue', 'mrpc', world_size=dist.get_world_size(), rank=dist.get_rank()) ``` From: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group * `world_size (int, optional)` – Number of processes participating in the job. Required if store is specified. * `rank (int, optional)` – Rank of the current process. Required if store is specified. And may be an example would be useful, so that the user doesn't even need to think about where to get `dist`: ``` import torch.distributed as dist if dist.is_initialized(): metric = load_metric(metric_name, world_size=dist.get_world_size(), rank=dist.get_rank()) else: metric = load_metric(metric_name) ``` I'm aware this is pytorch-centric, but it's better than no examples, IMHO. Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1957/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1957/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1956/comments
https://api.github.com/repos/huggingface/datasets/issues/1956/events
https://github.com/huggingface/datasets/issues/1956
818,013,741
MDU6SXNzdWU4MTgwMTM3NDE=
1,956
[distributed env] potentially unsafe parallel execution
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.\r\nMaybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ?", "Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. Thank you!" ]
1,614,458,325,000
1,614,619,482,000
1,614,619,482,000
CONTRIBUTOR
null
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issues/1942 (but for a different reason). That's why dist environments use some unique to a group identifier so that each group is dealt with separately. e.g. the env-way of pytorch dist syncing is done with a unique per set `MASTER_ADDRESS+MASTER_PORT` So ideally this interface should ask for a shared secret to do the right thing. I'm not reporting an immediate need, but am only flagging that this will hit someone down the road. This problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate different groups of processes. and this secret should be part of the file lock name and the experiment. Thank you
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1956/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1956/timeline
null
null
null
false
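A sketch of the `experiment_id` workaround described above; the choice of environment variables is an assumption borrowed from the usual PyTorch distributed setup, and any string shared within one run (and unique across concurrent runs) would do.

```python
import os
from datasets import load_metric

# Shared by all ranks of one run, different across concurrent runs.
experiment_id = f"{os.environ.get('MASTER_ADDR', 'local')}-{os.environ.get('MASTER_PORT', '0')}"

metric = load_metric(
    "glue",
    "mrpc",
    num_process=int(os.environ.get("WORLD_SIZE", 1)),
    process_id=int(os.environ.get("RANK", 0)),
    experiment_id=experiment_id,
)
```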
https://api.github.com/repos/huggingface/datasets/issues/1954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1954/comments
https://api.github.com/repos/huggingface/datasets/issues/1954/events
https://github.com/huggingface/datasets/issues/1954
817,565,563
MDU6SXNzdWU4MTc1NjU1NjM=
1,954
add a new column
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ", "Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188\r\n\r\nIn the future we'll add support for a more native way of adding a new column ;)" ]
1,614,363,447,000
1,619,707,843,000
1,619,707,843,000
NONE
null
Hi, I'd need to add a new column to the dataset. I was wondering how this can be done? Thanks @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1954/timeline
null
null
null
false
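The `map`-based approach pointed to above can be sketched as follows; the dataset name and the new column values are illustrative assumptions.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # illustrative dataset

new_values = list(range(len(ds)))  # one value per row

# Returning a dict with an unseen key from the mapped function adds it as a column.
ds = ds.map(lambda example, idx: {"new_column": new_values[idx]}, with_indices=True)
print(ds.column_names)
```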
https://api.github.com/repos/huggingface/datasets/issues/1949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1949/comments
https://api.github.com/repos/huggingface/datasets/issues/1949/events
https://github.com/huggingface/datasets/issues/1949
816,986,936
MDU6SXNzdWU4MTY5ODY5MzY=
1,949
Enable Fast Filtering using Arrow Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @gchhablani :)\r\nThanks for proposing your help !\r\n\r\nI'll be doing a refactor of some parts related to filtering in the scope of https://github.com/huggingface/datasets/issues/1877\r\nSo I would first wait for this refactor to be done before working on the filtering. In particular because I plan to make things simpler to manipulate.\r\n\r\nYour feedback on this refactor would also be appreciated since it also aims at making the core code more accessible (basically my goal is that no one's ever \"having troubles getting started\" ^^)\r\n\r\nThis will be available in a few days, I will be able to give you more details at that time if you don't mind waiting a bit !", "Sure! I don't mind waiting. I'll check the refactor and try to understand what you're trying to do :)" ]
1,614,308,017,000
1,614,367,109,000
null
CONTRIBUTOR
null
Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble getting started ;-; Any help would be appreciated. Thanks, Gunjan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1949/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1948/comments
https://api.github.com/repos/huggingface/datasets/issues/1948/events
https://github.com/huggingface/datasets/issues/1948
816,689,329
MDU6SXNzdWU4MTY2ODkzMjk=
1,948
dataset loading logger level
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "These warnings are showed when there's a call to `.map` to say to the user that a dataset is reloaded from the cache instead of being recomputed.\r\nThey are warnings since we want to make sure the users know that it's not recomputed.", "Thank you for explaining the intention, @lhoestq \r\n\r\n1. Could it be then made more human-friendly? Currently the hex gibberish tells me nothing of what's really going on. e.g. the following is instructive, IMHO:\r\n\r\n```\r\nWARNING: wmt16/ro-en/train dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16/ro-en/validation dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16/ro-en/test dataset was loaded from cache instead of being recomputed\r\n```\r\nnote that it removes the not so useful hex info and tells the user instead which split it's referring to - but probably no harm in keeping the path if it helps the debug. But the key is that now the warning is telling me what it is it's warning me about.\r\n```\r\nWarning:Loading cache path\r\n```\r\non the other hand isn't telling what it is warning about.\r\n\r\nAnd I still suggest this is INFO level, otherwise you need to turn all 'using cache' statements to WARNING to be consistent. The user is most likely well aware the cache is used for models, etc. So this feels very similar.\r\n\r\n2. Should there be a way for a user to void warranty by having a flag - `I know I'm expecting the cached version to load if it's available - please do not warn me about it=True`\r\n\r\nTo explain the need: Warnings are a problem, they constantly take attention away because they could be the harbinger of a problem. Therefore I prefer not to have any warnings in the log, and if I get any I usually try to deal with those so that my log is clean. \r\n\r\nIt's less of an issue for somebody doing long runs. It's a huge issue for someone who does a new run every few minutes and on the lookout for any potential problems which is what I have been doing a lot of integrating DeepSpeed and other things. And since there are already problems to deal with during the integration it's nice to have a clean log to start with. \r\n\r\nI hope my need is not unreasonable and I was able to explain it adequately. \r\n\r\nThank you." ]
1,614,278,017,000
1,614,302,824,000
null
CONTRIBUTOR
null
on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-ac3bebaf4f91f776.arrow WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-810c3e61259d73a9.arrow ``` why are those WARNINGs? Should be INFO, no? warnings should only be used when a user needs to pay attention to something, this is just informative - I'd even say it should be DEBUG, but definitely not WARNING. Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1948/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1945/comments
https://api.github.com/repos/huggingface/datasets/issues/1945/events
https://github.com/huggingface/datasets/issues/1945
816,421,966
MDU6SXNzdWU4MTY0MjE5NjY=
1,945
AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "sorry my mistake, datasets were overwritten closing now, thanks a lot" ]
1,614,258,585,000
1,614,259,235,000
1,614,259,226,000
NONE
null
Hi, I am trying to concatenate a list of huggingface datasets as: ` train_dataset = datasets.concatenate_datasets(train_datasets) ` Here is the `train_datasets` when I print: ``` [Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 120361 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2670 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 6944 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 38140 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 173711 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 1655 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 4274 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2019 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2109 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 11963 })] ``` I am getting the following error: `AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' ` I was wondering if you could help me with this issue, thanks a lot
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1945/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1942/comments
https://api.github.com/repos/huggingface/datasets/issues/1942/events
https://github.com/huggingface/datasets/issues/1942
816,037,520
MDU6SXNzdWU4MTYwMzc1MjA=
1,942
[experiment] missing default_experiment-1-0.arrow
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi !\r\n\r\nThe cache at `~/.cache/huggingface/metrics` stores the users data for metrics computations (hence the arrow files).\r\n\r\nHowever python modules (i.e. dataset scripts, metric scripts) are stored in `~/.cache/huggingface/modules/datasets_modules`.\r\n\r\nIn particular the metrics are cached in `~/.cache/huggingface/modules/datasets_modules/metrics/`\r\n\r\nFeel free to take a look at your cache and let me know if you find any issue that would help explaining why you had an issue with `rouge` with no connection. I'm doing some tests on my side to try to reproduce the issue you have\r\n", "Thank you for clarifying that the metrics files are to be found elsewhere, @lhoestq \r\n\r\n> The cache at ~/.cache/huggingface/metrics stores the users data for metrics computations (hence the arrow files).\r\n\r\ncould it be renamed to reflect that? otherwise it misleadingly suggests that it's the metrics. Perhaps `~/.cache/huggingface/metrics-user-data`?\r\n\r\nAnd there are so many `.lock` files w/o corresponding files under `~/.cache/huggingface/metrics/`. Why are they there? \r\n\r\nfor example after I wipe out the dir completely and do one training I end up with:\r\n```\r\n~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock\r\n```\r\nwhat is that lock file locking when nothing is running?", "The lock files come from an issue with filelock (see comment in the code [here](https://github.com/benediktschmitt/py-filelock/blob/master/filelock.py#L394-L398)). Basically on unix there're always .lock files left behind. I haven't dove into this issue", "are you sure you need an external lock file? if it's a single purpose locking in the same scope you can lock the caller `__file__` instead, e.g. here is how one can `flock` the script file itself to ensure atomic printing:\r\n\r\n```\r\nimport fcntl\r\ndef printflock(*msgs):\r\n \"\"\" print in multiprocess env so that the outputs from different processes don't get interleaved \"\"\"\r\n with open(__file__, \"r\") as fh:\r\n fcntl.flock(fh, fcntl.LOCK_EX)\r\n try:\r\n print(*msgs)\r\n finally:\r\n fcntl.flock(fh, fcntl.LOCK_UN)\r\n```\r\n", "OK, this issue is not about caching but some internal conflict/race condition it seems, I have just run into it on my normal env:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 356, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: 
[Errno 2] Failed to open local file '/home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_seq2seq.py\", line 655, in <module>\r\n main()\r\n File \"examples/seq2seq/run_seq2seq.py\", line 619, in main\r\n test_results = trainer.predict(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py\", line 121, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1706, in predict\r\n output = self.prediction_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1813, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples/seq2seq/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 388, in compute\r\n self._finalize()\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 358, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\nI'm just running `run_seq2seq.py` under DeepSpeed:\r\n\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \" --deepspeed examples/tests/deepspeed/ds_config.json\r\n```\r\n\r\nIt finished the evaluation OK and crashed on the prediction part of the Trainer. But the eval / predict parts no longer run under Deepspeed, it's just plain ddp.\r\n\r\nIs this some kind of race condition? It happens intermittently - there is nothing else running at the same time.\r\n\r\nBut if 2 independent instances of the same script were to run at the same time it's clear to see that this problem would happen. Perhaps it'd help to create a unique hash which is shared between all processes in the group and use that as the default experiment id?\r\n", "When you're using metrics in a distributed setup, there are two cases:\r\n1. you're doing two completely different experiments (two evaluations) and the 2 metrics jobs have nothing to do with each other\r\n2. you're doing one experiment (one evaluation) but use multiple processes to feed the data to the metric.\r\n\r\nIn case 1. 
you just need to provide two different `experiment_id` so that the metrics don't collide.\r\nIn case 2. they must have the same experiment_id (or use the default one), but in this case you also need to provide the `num_processes` and `process_id`\r\n\r\nIf understand correctly you're in situation 2.\r\n\r\nIf so, you make sure that you instantiate the metrics with both the right `num_processes` and `process_id` parameters ?\r\n\r\nIf they're not set, then the cache files of the two metrics collide it can cause issues. For example if one metric finishes before the other, then the cache file is deleted and the other metric gets a FileNotFoundError\r\nThere's more information in the [documentation](https://huggingface.co/docs/datasets/loading_metrics.html#distributed-setups) if you want\r\n\r\nHope that helps !", "Thank you for explaining that in a great way, @lhoestq \r\n\r\nSo the bottom line is that the `transformers` examples are broken since they don't do any of that. At least `run_seq2seq.py` just does `metric = load_metric(metric_name)`\r\n\r\nWhat test would you recommend to reliably reproduce this bug in `examples/seq2seq/run_seq2seq.py`?", "To give more context, we are just using the metrics for the `comput_metric` function and nothing else. Is there something else we can use that just applies the function to the full arrays of predictions and labels? Because that's all we need, all the gathering has already been done because the datasets Metric multiprocessing relies on file storage and thus does not work in a multi-node distributed setup (whereas the Trainer does).\r\n\r\nOtherwise, we'll have to switch to something else to compute the metrics :-(", "OK, it definitely leads to a race condition in how it's used right now. Here is how you can reproduce it - by injecting a random sleep time different for each process before the locks are acquired. 
\r\n```\r\n--- a/src/datasets/metric.py\r\n+++ b/src/datasets/metric.py\r\n@@ -348,6 +348,16 @@ class Metric(MetricInfoMixin):\r\n\r\n elif self.process_id == 0:\r\n # Let's acquire a lock on each node files to be sure they are finished writing\r\n+\r\n+ import time\r\n+ import random\r\n+ import os\r\n+ pid = os.getpid()\r\n+ random.seed(pid)\r\n+ secs = random.randint(1, 15)\r\n+ time.sleep(secs)\r\n+ print(f\"sleeping {secs}\")\r\n+\r\n file_paths, filelocks = self._get_all_cache_files()\r\n\r\n # Read the predictions and references\r\n@@ -385,7 +395,10 @@ class Metric(MetricInfoMixin):\r\n\r\n if predictions is not None:\r\n self.add_batch(predictions=predictions, references=references)\r\n+ print(\"FINALIZE START\")\r\n+\r\n self._finalize()\r\n+ print(\"FINALIZE END\")\r\n\r\n self.cache_file_name = None\r\n self.filelock = None\r\n```\r\n\r\nthen run with 2 procs: `python -m torch.distributed.launch --nproc_per_node=2`\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 10 --max_val_samples 10 --max_test_samples 10 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \"\r\n```\r\n\r\n```\r\n***** Running Evaluation *****\r\n Num examples = 10\r\n Batch size = 16\r\n 0%| | 0/1 [00:00<?, ?it/s]FINALIZE START\r\nFINALIZE START\r\nsleeping 11\r\nFINALIZE END\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.06s/it]\r\nsleeping 11\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 368, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow'. 
Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_seq2seq.py\", line 645, in <module>\r\n main()\r\n File \"examples/seq2seq/run_seq2seq.py\", line 601, in main\r\n metrics = trainer.evaluate(\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer_seq2seq.py\", line 74, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1703, in evaluate\r\n output = self.prediction_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1876, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples/seq2seq/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 402, in compute\r\n self._finalize()\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 370, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```", "I tried to adjust `run_seq2seq.py` and trainer to use the suggested dist env:\r\n```\r\n import torch.distributed as dist\r\n metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n```\r\nand in `trainer.py` added to call just for rank 0:\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\nand then the process hangs in a deadlock. 
\r\n\r\nHere is the tb:\r\n```\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/filelock.py\", line 275 in acquire\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 306 in _check_all_processes_locks\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 501 in _init_writer\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 440 in add_batch\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 397 in compute\r\n File \"examples/seq2seq/run_seq2seq.py\", line 558 in compute_metrics\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1876 in prediction_loop\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1703 in evaluate\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer_seq2seq.py\", line 74 in evaluate\r\n File \"examples/seq2seq/run_seq2seq.py\", line 603 in main\r\n File \"examples/seq2seq/run_seq2seq.py\", line 651 in <module>\r\n```\r\n\r\nBut this sounds right, since in the above diff I set up a distributed metric and only called one process - so it's blocking on waiting for other processes to do the same.\r\n\r\nSo one working solution is to leave:\r\n\r\n```\r\n metric = load_metric(metric_name)\r\n```\r\nalone, and only call `compute_metrics` from rank 0\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\n\r\nso we now no longer use the distributed env as far as `datasets` is concerned, it's just a single process.\r\n\r\nAre there any repercussions/side-effects to this proposed change in Trainer? If it always gathers all inputs on rank 0 then this is how it should have been done in first place - i.e. only run for rank 0. It appears that currently it was re-calculating the metrics on all processes on the same data just to throw the results away other than for rank 0. Unless I missed something.\r\n", "But no, since \r\n`\r\n metric = load_metric(metric_name)\r\n`\r\nis called for each process, the race condition is still there. So still getting:\r\n\r\n```\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\ni.e. the only way to fix this is to `load_metric` only for rank 0, but this requires huge changes in the code and all end users' code.\r\n", "OK, here is a workaround that works. 
The onus here is absolutely on the user:\r\n\r\n```\r\ndiff --git a/examples/seq2seq/run_seq2seq.py b/examples/seq2seq/run_seq2seq.py\r\nindex 2a060dac5..c82fd83ea 100755\r\n--- a/examples/seq2seq/run_seq2seq.py\r\n+++ b/examples/seq2seq/run_seq2seq.py\r\n@@ -520,7 +520,11 @@ def main():\r\n\r\n # Metric\r\n metric_name = \"rouge\" if data_args.task.startswith(\"summarization\") else \"sacrebleu\"\r\n- metric = load_metric(metric_name)\r\n+ import torch.distributed as dist\r\n+ if dist.is_initialized():\r\n+ metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n+ else:\r\n+ metric = load_metric(metric_name)\r\n\r\n def postprocess_text(preds, labels):\r\n preds = [pred.strip() for pred in preds]\r\n@@ -548,12 +552,17 @@ def main():\r\n # Some simple post-processing\r\n decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)\r\n\r\n+ kwargs = dict(predictions=decoded_preds, references=decoded_labels)\r\n+ if metric_name == \"rouge\":\r\n+ kwargs.update(use_stemmer=True)\r\n+ result = metric.compute(**kwargs) # must call for all processes\r\n+ if result is None: # only process with rank-0 will return metrics, others None\r\n+ return {}\r\n+\r\n if metric_name == \"rouge\":\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n # Extract a few results from ROUGE\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n else:\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n result = {\"bleu\": result[\"score\"]}\r\n\r\n prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]\r\n```\r\n\r\nThis is not user-friendly to say the least. And it's still wasteful as we don't need other processes to do anything.\r\n\r\nBut it solves the current race condition.\r\n\r\nClearly this calls for a design discussion as it's the responsibility of the Trainer to handle this and not user's. Perhaps in the `transformers` land?", "I don't see how this could be the responsibility of `Trainer`, who hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results, there is nothing there. That computation is done on all processes \r\n\r\nThe fact a `datasets.Metric` object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in `datasets`. Especially since, as I mentioned before, the multiprocessing part of `datasets.Metric` has a deep flaw since it can't work in a multinode environment. So you actually need to do the job of gather predictions and labels yourself.\r\n\r\nThe changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels `number_of_processes` times I believe, which is not going to make the metric computation any faster.\r\n\r\n", "Right, to clarify, I meant it'd be good to have it sorted on the library side and not requiring the user to figure it out. 
This is too complex and error-prone and if not coded correctly the bug will be intermittent which is even worse.\r\n\r\nOh I guess I wasn't clear in my message - in no way I'm proposing that we use this workaround code - I was just showing what I had to do to make it work.\r\n\r\nWe are on the same page.\r\n\r\n> The changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels number_of_processes times I believe, which is not going to make the metric computation any faster.\r\n\r\nAnd yes, this is another problem that my workaround introduces. Thank you for pointing it out, @sgugger \r\n", "> The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets\r\n\r\nYes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :)\r\nMy guess is that at one point the metric isn't using the right file name. It's supposed to use one with a unique uuid in order to avoid the collisions.", "I just opened #1966 to fix this :)\r\n@stas00 if have a chance feel free to try it !", "Thank you, @lhoestq - I will experiment and report back. \r\n\r\nedit: It works! Thank you" ]
1,614,222,135,000
1,614,623,611,000
null
CONTRIBUTOR
null
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/.cache/huggingface/metrics` - there are many `*.arrow.lock` files but zero metrics files. w/o the network I get: ``` FileNotFoundError: [Errno 2] No such file or directory: '~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow ``` there is just `~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock` I did run the same `run_seq2seq.py` script on the instance with network and it worked just fine, but only the lock file was left behind. this is with master. Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1942/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1941/comments
https://api.github.com/repos/huggingface/datasets/issues/1941/events
https://github.com/huggingface/datasets/issues/1941
815,985,167
MDU6SXNzdWU4MTU5ODUxNjc=
1,941
Loading of FAISS index fails for index_name = 'exact'
{ "login": "mkserge", "id": 2992022, "node_id": "MDQ6VXNlcjI5OTIwMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mkserge", "html_url": "https://github.com/mkserge", "followers_url": "https://api.github.com/users/mkserge/followers", "following_url": "https://api.github.com/users/mkserge/following{/other_user}", "gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}", "starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mkserge/subscriptions", "organizations_url": "https://api.github.com/users/mkserge/orgs", "repos_url": "https://api.github.com/users/mkserge/repos", "events_url": "https://api.github.com/users/mkserge/events{/privacy}", "received_events_url": "https://api.github.com/users/mkserge/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! I'm taking a look", "Index training was missing, I fixed it here: https://github.com/huggingface/datasets/commit/f5986c46323583989f6ed1dabaf267854424a521\r\n\r\nCan you try again please ?", "Works great 👍 I just put a minor comment on the commit, I think you meant to pass the `train_size` from the one obtained from the config.\r\n\r\nThanks for a quick response!" ]
1,614,216,654,000
1,614,263,326,000
1,614,263,326,000
CONTRIBUTOR
null
Hi, It looks like loading of FAISS index now fails when using index_name = 'exact'. For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage). Running `transformers==4.3.2` and datasets installed from source on latest `master` branch. ```bash (venv) sergey_mkrtchyan datasets (master) $ python Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration >>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") >>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb) Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4 Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb) 0%| | 0/10 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained return cls( File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__ self.init_retrieval() File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval self.index.init_index() File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index self.dataset = load_dataset( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset datasets = utils.map_nested( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset post_processed = self._post_process(ds, resources_paths) File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process dataset.add_faiss_index("embeddings", custom_index=index) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index super().add_faiss_index( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose) File 
"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors self.faiss_index.add(vecs) File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add self.add_c(n, swig_ptr(x)) File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add return _swigfaiss.IndexHNSW_add(self, n, x) RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed >>> ``` The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1941/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1940/comments
https://api.github.com/repos/huggingface/datasets/issues/1940/events
https://github.com/huggingface/datasets/issues/1940
815,770,012
MDU6SXNzdWU4MTU3NzAwMTI=
1,940
Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Thanks for the report !\r\n\r\nCurrently we don't have a way to let the user easily disable this behavior.\r\nHowever I agree that we should support stateful processing functions, ideally by removing `does_function_return_dict`.\r\n\r\nWe needed this function in order to know whether the `map` functions needs to write data or not. if `does_function_return_dict` returns False then we don't write anything.\r\n\r\nInstead of checking the output of the processing function outside of the for loop that iterates through the dataset to process it, we can check the output of the first processed example and at that point decide if we need to write data or not.\r\n\r\nTherefore it's definitely possible to fix this unwanted behavior, any contribution going into this direction is welcome :)", "Thanks @mariosasko for the PR!" ]
1,614,194,336,000
1,616,513,209,000
1,616,513,209,000
CONTRIBUTOR
null
Hi there! In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes an extra argument to maintain a counter of the number of dataset rows/examples already selected per class, which are the ones I want to keep in the end: ```python def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter): label = int(example['label']) current_counter = counter.get(label, 0) if current_counter < per_class_limit: counter[label] = current_counter + 1 return True return False ``` At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this: ```python ... kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()} datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs) ... ``` The problem is that passing a stateful container (the counter) provokes a side effect in the new filtered dataset obtained. This is due to the fact that at some point in `filter()`, `map()`'s function `does_function_return_dict` is invoked in line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290). When this occurs, the state of the counter is initially modified by the effects of the function call on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (which are marked as `test_inputs` & `test_indices` respectively). This happens out of the control of the user (who for example can't reset the state of the counter before continuing the execution), provoking in the end an undesired side effect in the results obtained. In my case, the resulting dataset -although the counter results are ok- lacks an instance of the classes 0 and 1 (which happen to be the classes of the first two examples of my dataset). The rest of the classes I have in my dataset contain the right number of examples as they were not affected by the effects of the `does_function_return_dict` call. I've debugged my code extensively and made a workaround myself hardcoding the necessary stuff (basically putting `update_data=True` in line 1290), and then I obtain the results I expected without the side effect. Is there a way to avoid that call to `does_function_return_dict` in `map()`'s line 1290? (e.g. extracting the required information that `does_function_return_dict` returns without making the testing calls to the user function on dataset rows 0 & 1) Thanks in advance, Francisco Perez-Sorrosal
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1940/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1939/comments
https://api.github.com/repos/huggingface/datasets/issues/1939/events
https://github.com/huggingface/datasets/issues/1939
815,680,510
MDU6SXNzdWU4MTU2ODA1MTA=
1,939
[firewalled env] OFFLINE mode
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting and for all the details and suggestions.\r\n\r\nI'm totally in favor of having a HF_DATASETS_OFFLINE env variable to disable manually all the connection checks, remove retries etc.\r\n\r\nMoreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you already can:\r\n- first load datasets and metrics from an instance with internet connection\r\n- then be able to reload datasets and metrics from another instance without connection (as long as the filesystem is shared)\r\n\r\nThis is already implemented, but currently it only works if the requests return a `ConnectionError` (or any error actually). Not sure why it would hang instead of returning an error.\r\n\r\nMaybe this is just a issue with the timeout value being not set or too high ?\r\nIs there a way I can have access to one of the instances on which there's this issue (we can discuss this offline) ?\r\n", "I'm on master, so using all the available bells and whistles already.\r\n\r\nIf you look at the common issues - it for example tries to look up files if they appear in `_PACKAGED_DATASETS_MODULES` which it shouldn't do.\r\n\r\n--------------\r\n\r\nYes, there is a nuance to it. As I mentioned it's firewalled - that is it has a network but making any calls outside - it just hangs in:\r\n\r\n```\r\nsin_addr=inet_addr(\"xx.xx.xx.xx\")}, [28->16]) = 0\r\nclose(5) = 0\r\nsocket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 5\r\nconnect(5, {sa_family=AF_INET, sin_port=htons(3128), sin_addr=inet_addr(\"yy.yy.yy.yy\")}, 16^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)\r\n```\r\nuntil it times out.\r\n\r\nThat's why we need to be able to tell the software that there is no network to rely on even if there is one (good for testing too).\r\n\r\nSo what I'm thinking is that this is a simple matter of pre-ambling any network call wrappers with:\r\n\r\n```\r\nif HF_DATASETS_OFFLINE:\r\n assert \"Attempting to make a network call under Offline mode\"\r\n```\r\n\r\nand then fixing up if there is anything else to fix to make it work.\r\n\r\n--------------\r\n\r\nOtherwise I think the only other problem I encountered is that we need to find a way to pre-cache metrics, for some reason it's not caching it and wanting to fetch it from online.\r\n\r\nWhich is extra strange since it already has those files in the `datasets` repo itself that is on the filesystem.\r\n\r\nThe workaround I had to do is to copy `rouge/rouge.py` (with the parent folder) from the datasets repo to the current dir - and then it proceeded.", "Ok understand better the hanging issue.\r\nI guess catching connection errors is not enough, we should also avoid all the hangings.\r\nCurrently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.\r\n\r\nI'll also take a look at why you had to do this for `rouge`.\r\n", "FWIW, I think instant failure on the behalf of a network call is the simplest solution to correctly represent the environment and having the caller to sort it out is the next thing to do, since here it is the case of having no functional network, it's just that the software doesn't know this is the case, because there is some network. So we just need to help it to bail out instantly rather than hang waiting for it to time out. 
And afterwards everything else you said.", "Update on this: \r\n\r\nI managed to create a mock environment for tests that makes the connections hang until timeout.\r\nI managed to reproduce the issue you're having in this environment.\r\n\r\nI'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it's needed in the code. This should cover the _automatic_ section you mentioned.", "Fabulous! I'm glad you were able to reproduce the issues, @lhoestq!", "I lost access to the firewalled setup, but I emulated it with:\r\n\r\n```\r\nsudo ufw enable\r\nsudo ufw default deny outgoing\r\n```\r\n(thanks @mfuntowicz)\r\n\r\nI was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.\r\n\r\nThank you!" ]
1,614,186,822,000
1,614,920,994,000
1,614,920,994,000
CONTRIBUTOR
null
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls. I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways of going about it. ## 1. Manual Manually prepare data and metrics files, that is, transfer to the firewalled instance the dataset and the metrics and run: ``` DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ... ``` `datasets` must not make any network calls and if there is a logic to do that and something is missing it should assert that this or that action requires network and therefore it can't proceed. ## 2. Automatic In some clouds one can prepare a data storage ahead of time with a normal networked environment but which doesn't have gpus and then one switches to the gpu instance which is firewalled, but it can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice: 1. on the non-firewalled instance: ``` run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ... ``` which should download and cache everything. 2. and then immediately after on the firewalled instance, which shares the same filesystem ``` DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ... ``` and the metrics and datasets should be cached by invocation number 1 and any network calls be skipped and if the logic is missing data it should assert and not try to fetch any data from online. ## Common Issues 1. for example currently `datasets` tries to look up online datasets if the files contain json or csv, despite the paths already provided ``` if dataset and path in _PACKAGED_DATASETS_MODULES: ``` 2. it has an issue with metrics. e.g. I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current dir - or it was hanging. I had to comment out `head_hf_s3(...)` calls to make things work. So all those `try: head_hf_s3(...)` shouldn't be tried with `DATASETS_OFFLINE=1` Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379 Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1939/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1937/comments
https://api.github.com/repos/huggingface/datasets/issues/1937/events
https://github.com/huggingface/datasets/issues/1937
815,163,943
MDU6SXNzdWU4MTUxNjM5NDM=
1,937
CommonGen dataset page shows an error OSError: [Errno 28] No space left on device
{ "login": "yuchenlin", "id": 10104354, "node_id": "MDQ6VXNlcjEwMTA0MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuchenlin", "html_url": "https://github.com/yuchenlin", "followers_url": "https://api.github.com/users/yuchenlin/followers", "following_url": "https://api.github.com/users/yuchenlin/following{/other_user}", "gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions", "organizations_url": "https://api.github.com/users/yuchenlin/orgs", "repos_url": "https://api.github.com/users/yuchenlin/repos", "events_url": "https://api.github.com/users/yuchenlin/events{/privacy}", "received_events_url": "https://api.github.com/users/yuchenlin/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Facing the same issue for [Squad](https://huggingface.co/datasets/viewer/?dataset=squad) and [TriviaQA](https://huggingface.co/datasets/viewer/?dataset=trivia_qa) datasets as well.", "We just fixed the issue, thanks for reporting !" ]
1,614,149,253,000
1,614,337,806,000
1,614,337,806,000
CONTRIBUTOR
null
The page of the CommonGen dataset https://huggingface.co/datasets/viewer/?dataset=common_gen shows the following error: ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1937/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1934/comments
https://api.github.com/repos/huggingface/datasets/issues/1934/events
https://github.com/huggingface/datasets/issues/1934
814,437,190
MDU6SXNzdWU4MTQ0MzcxOTA=
1,934
Add Stanford Sentiment Treebank (SST)
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Dataset added in release [1.5.0](https://github.com/huggingface/datasets/releases/tag/1.5.0), I think I can close this." ]
1,614,084,796,000
1,616,089,904,000
1,616,089,904,000
CONTRIBUTOR
null
I am going to add SST: - **Name:** The Stanford Sentiment Treebank - **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - **Data:** https://nlp.stanford.edu/sentiment/index.html - **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification What's the difference from the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where: - the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1} - the labels of the *sub-sentences* were included only in the training set - the labels in the test set are obfuscated So there is a lot more information in the original SST. The tricky bit is that the data is scattered across many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually fix all the è, ë, ç and so on in a `utf-8` copy of the text file. I uploaded the result to my Dropbox and I am using that as the main repo for the dataset. Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous. I plan to divide the dataset into 2 configs: one with just whole sentences and their labels, the other with sentences _and their sub-sentences_ with their labels. Each config will be split into train, validation and test. Hopefully this makes sense; we can discuss it in the PR I'm going to submit.
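A hedged sketch of how the two planned configurations might eventually be consumed. The config names below are placeholders invented for illustration; the final names will be settled in the PR.

```python
from datasets import load_dataset

# Placeholder config names for the two planned variants described above.
sentences_only = load_dataset("sst", "sentences")         # whole sentences with labels
with_subsentences = load_dataset("sst", "subsentences")   # sentences plus labelled sub-sentences
```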
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1934/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1924/comments
https://api.github.com/repos/huggingface/datasets/issues/1924/events
https://github.com/huggingface/datasets/issues/1924
813,599,733
MDU6SXNzdWU4MTM1OTk3MzM=
1,924
Anonymous Dataset Addition (i.e Anonymous PR?)
{ "login": "PierreColombo", "id": 22492839, "node_id": "MDQ6VXNlcjIyNDkyODM5", "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PierreColombo", "html_url": "https://github.com/PierreColombo", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "repos_url": "https://api.github.com/users/PierreColombo/repos", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok", "Hello,\r\nI would prefer to do the reverse: adding a link to an anonymous paper without the people names/institution in the PR. Would it be conceivable ?\r\nCheers\r\n", "Sure, I think it's ok on our side", "Yup, sounds good!" ]
1,614,007,350,000
1,614,104,890,000
null
CONTRIBUTOR
null
Hello, Thanks a lot for your library. We plan to submit a paper on OpenReview using the anonymous setting. Is it possible to add a new dataset without breaking anonymity, with a link to the paper? Cheers @eusip
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1924/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1922/comments
https://api.github.com/repos/huggingface/datasets/issues/1922/events
https://github.com/huggingface/datasets/issues/1922
813,140,806
MDU6SXNzdWU4MTMxNDA4MDY=
1,922
How to update the "wino_bias" dataset
{ "login": "JieyuZhao", "id": 22306304, "node_id": "MDQ6VXNlcjIyMzA2MzA0", "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JieyuZhao", "html_url": "https://github.com/JieyuZhao", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias\r\nAlso the homepage url is also mentioned in the wino_bias.py so feel free to update it there as well.\r\n\r\nYou can create a Pull Request directly from the github interface by editing the files you want and submit a PR, or from a local clone of the repository.\r\n\r\nThanks for noticing !" ]
1,613,972,379,000
1,613,990,159,000
null
CONTRIBUTOR
null
Hi all, Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1922/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1919/comments
https://api.github.com/repos/huggingface/datasets/issues/1919/events
https://github.com/huggingface/datasets/issues/1919
812,626,872
MDU6SXNzdWU4MTI2MjY4NzI=
1,919
Failure to save with save_to_disk
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !", "Closing since this has been fixed by #1923" ]
1,613,830,690,000
1,614,793,227,000
1,614,793,227,000
CONTRIBUTOR
null
When I try to save a dataset locally using the `save_to_disk` method I get the error: ```bash FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow' ``` To replicate: 1. Install `datasets` from master 2. Run this code: ```python from datasets import load_dataset squad = load_dataset("squad") # or any other dataset squad.save_to_disk("squad") # error here ``` The problem is that the method is not creating the directory named `dataset_path` to save the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directories manually, the problem is resolved. I'll open a PR soon that does this and links to this issue.
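Until that PR lands, a hedged workaround sketch consistent with the observation above: pre-create the per-split directories yourself before calling `save_to_disk`. The directory names simply mirror the splits of the example.

```python
import os

from datasets import load_dataset

squad = load_dataset("squad")
# Workaround: create the split directories that save_to_disk currently
# fails to create itself, then save as usual.
for split_name in squad:
    os.makedirs(os.path.join("squad", split_name), exist_ok=True)
squad.save_to_disk("squad")
```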
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1919/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1917/comments
https://api.github.com/repos/huggingface/datasets/issues/1917/events
https://github.com/huggingface/datasets/issues/1917
812,390,178
MDU6SXNzdWU4MTIzOTAxNzg=
1,917
UnicodeDecodeError: windows 10 machine
{ "login": "yosiasz", "id": 900951, "node_id": "MDQ6VXNlcjkwMDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yosiasz", "html_url": "https://github.com/yosiasz", "followers_url": "https://api.github.com/users/yosiasz/followers", "following_url": "https://api.github.com/users/yosiasz/following{/other_user}", "gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}", "starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions", "organizations_url": "https://api.github.com/users/yosiasz/orgs", "repos_url": "https://api.github.com/users/yosiasz/repos", "events_url": "https://api.github.com/users/yosiasz/events{/privacy}", "received_events_url": "https://api.github.com/users/yosiasz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "upgraded to php 3.9.2 and it works!" ]
1,613,772,785,000
1,613,774,471,000
1,613,774,428,000
NONE
null
Windows 10, Python 3.6.8. When running ``` import datasets oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am") print(oscar_am["train"][0]) ``` I get the following error: ``` File "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined> ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1917/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1915/comments
https://api.github.com/repos/huggingface/datasets/issues/1915/events
https://github.com/huggingface/datasets/issues/1915
812,229,654
MDU6SXNzdWU4MTIyMjk2NTQ=
1,915
Unable to download `wiki_dpr`
{ "login": "nitarakad", "id": 18504534, "node_id": "MDQ6VXNlcjE4NTA0NTM0", "avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nitarakad", "html_url": "https://github.com/nitarakad", "followers_url": "https://api.github.com/users/nitarakad/followers", "following_url": "https://api.github.com/users/nitarakad/following{/other_user}", "gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}", "starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions", "organizations_url": "https://api.github.com/users/nitarakad/orgs", "repos_url": "https://api.github.com/users/nitarakad/repos", "events_url": "https://api.github.com/users/nitarakad/events{/privacy}", "received_events_url": "https://api.github.com/users/nitarakad/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix", "I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !", "Closing since this has been fixed by #1925" ]
1,613,758,292,000
1,614,793,248,000
1,614,793,248,000
NONE
null
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran: `curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")` However, I got the following error: `datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}` I tried adding in flags `with_embeddings=False` and `with_index=False`: `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")` But I got the following error: `raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}` Is there anything else I need to set to download the dataset? **UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
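A hedged workaround sketch in line with the maintainers' suggestion in the comments of this issue: skip the checksum verification that raises the error until the patched release is out. The keyword arguments mirror the ones used in the report; `ignore_verifications` was the `load_dataset` flag for this at the time.

```python
from datasets import load_dataset

# Assumed stop-gap until the fix is released: skip the checksum/record
# verification step that raises the UnexpectedDownloadedFile error above.
curr_dataset = load_dataset(
    "wiki_dpr",
    with_embeddings=False,
    with_index=False,
    ignore_verifications=True,
)
```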
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1915/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1911/comments
https://api.github.com/repos/huggingface/datasets/issues/1911/events
https://github.com/huggingface/datasets/issues/1911
812,009,956
MDU6SXNzdWU4MTIwMDk5NTY=
1,911
Saving processed dataset running infinitely
{ "login": "ayubSubhaniya", "id": 20911334, "node_id": "MDQ6VXNlcjIwOTExMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayubSubhaniya", "html_url": "https://github.com/ayubSubhaniya", "followers_url": "https://api.github.com/users/ayubSubhaniya/followers", "following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}", "gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions", "organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs", "repos_url": "https://api.github.com/users/ayubSubhaniya/repos", "events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}", "received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@thomwolf @lhoestq can you guys please take a look and recommend some solution.", "am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSystem` or any implementation of ``fsspec.spec.AbstractFileSystem``.\r\n\r\n Args:\r\n dataset_path (``str``): path (e.g. ``dataset/train``) or remote uri (e.g. ``s3://my-bucket/dataset/train``) of the dataset directory where the dataset will be saved to\r\n fs (Optional[:class:`datasets.filesystem.S3FileSystem`,``fsspec.spec.AbstractFileSystem``], `optional`, defaults ``None``): instance of :class:`datasets.filesystem.S3FileSystem` or ``fsspec.spec.AbstractFileSystem`` used to download the files from remote filesystem.\r\n \"\"\"\r\n assert (\r\n not self.list_indexes()\r\n ), \"please remove all the indexes using `dataset.drop_index` before saving a dataset\"\r\n self = pickle.loads(pickle.dumps(self))\r\n ```", "It's been 24 hours and sadly it's still running. With not a single byte written", "Tried finding the root cause but was unsuccessful.\r\nI am using lazy tokenization with `dataset.set_transform()`, it works like a charm with almost same performance as pre-compute.", "Hi ! This very probably comes from the hack you used.\r\n\r\nThe pickling line was added an a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you altered the pyarrow table.\r\nI would suggest you to rebuild a valid Dataset object from your new pyarrow table. To do so you must first save your new table to a file, and then make a new Dataset object from that arrow file.\r\n\r\nYou can save the raw arrow table (without all the `datasets.Datasets` metadata) by calling `map` with `cache_file_name=\"path/to/outut.arrow\"` and `function=None`. Having `function=None` makes the `map` write your dataset on disk with no data transformation.\r\n\r\nOnce you have your new arrow file, load it with `datasets.Dataset.from_file` to have a brand new Dataset object :)\r\n\r\nIn the future we'll have a better support for the fast filtering method from pyarrow so you don't have to do this very unpractical workaround. Since it breaks somes assumptions regarding the core behavior of Dataset objects, this is very discouraged.", "Thanks, @lhoestq for your response. Will try your solution and let you know." ]
1,613,740,159,000
1,614,065,684,000
null
NONE
null
I have a text dataset of size 220M. For pre-processing, I need to tokenize it and filter out rows with overly long sequences. My tokenization took roughly 3 hrs. I used map() with batch size 1024 and multi-processing with 96 processes. The filter() function was way too slow, so I used a hack with the pyarrow table filter function, which is very fast, as mentioned [here](https://github.com/huggingface/datasets/issues/1796): ```dataset._data = dataset._data.filter(...)``` The filter took 1 hr. Then I used `save_to_disk()` on the processed dataset and it is running forever. I have been waiting for 8 hrs and it has not written a single byte. In fact, it has actually read more than 100GB from disk; the screenshot below shows the stats using `iotop` (the second process is the one). <img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png"> I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the filter() function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1911/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1907/comments
https://api.github.com/repos/huggingface/datasets/issues/1907/events
https://github.com/huggingface/datasets/issues/1907
811,520,569
MDU6SXNzdWU4MTE1MjA1Njk=
1,907
DBPedia14 Dataset Checksum bug?
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.", "Thanks @lhoestq! Yes, it seems back to normal after a couple of days." ]
1,613,687,148,000
1,614,036,125,000
1,614,036,124,000
CONTRIBUTOR
null
Hi there!!! I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I get this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, in <module> main() File "./conditional_classification/basic_pipeline.py", line 128, in main corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class, File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data datasets = load_dataset(self.name, split=dataset_split) File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset builder_instance.download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare self._download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare verify_checksums( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k'] ``` I've seen this has happened before with other datasets, as reported in #537. I've tried clearing my cache and calling `load_dataset` again, but it still doesn't work. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check whether there's a problem with the checksums, or whether this is related to something else? I've also seen that the cache path for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this perhaps a bug introduced recently? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1907/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1906/comments
https://api.github.com/repos/huggingface/datasets/issues/1906/events
https://github.com/huggingface/datasets/issues/1906
811,405,274
MDU6SXNzdWU4MTE0MDUyNzQ=
1,906
Feature Request: Support for Pandas `Categorical`
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corresponds to `pa.int64()` in pyarrow and `dtype('int64')` in pandas (so the label names are lost during conversions).\r\n\r\nWhat do you think ?", "Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?\r\n\r\nOther thoughts:\r\n\r\n- changing the resulting patype on ClassLabel might be backward-incompatible? I'm not totally sure if users of the `datasets` library tend to directly access the `patype` attribute (I don't think we really do, but we haven't been using it for very long yet).\r\n- would ClassLabel's dtype change to `dict[int64, string]`? It seems like in practice a ClassLabel (when not explicitly specified) would be constructed from the DictionaryType branch of `generate_from_arrow_type`, so it's not totally clear to me that anyone ever actually accesses/uses that dtype?\r\n- I don't quite know how `.int2str` and `.str2int` are used in practice - would those be kept? Perhaps the implementation might actually be substantially smaller if we can just delegate to pyarrow's dict methods?\r\n\r\nAnother idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nIn practice, I don't think this would be backward-incompatible in a way anyone would care about since the current behavior just throws an exception, and this way, we could support *reading* a pandas Categorical into a `Dataset` as a ClassLabel. I *think* from there, while it would require some custom glue it wouldn't be too hard to convert the ClassLabel into a pandas Category if we want to go back - I think this would improve on the current behavior without risking changing the behavior of ClassLabel in a backward-incompat way.\r\n\r\nThoughts? I'm not sure if this is overly cautious. Whichever approach you think is better, I'd be happy to take it on!\r\n", "I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.\r\n\r\nIn this scope, I really like the idea of checking for the dictionary type:\r\n\r\n> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nThis looks like a great start.\r\n\r\nThen as you said we'd have to add the conversion from classlabel to the correct arrow dictionary type. Arrow is already able to convert from arrow Dictionary to pandas Categorical so it should be enough.\r\n\r\nI can see two things that we must take case of to make this change backward compatible:\r\n- first we must still be able to load an arrow file with arrow int64 dtype and `datasets` ClassLabel type without crashing. 
This can be fixed by casting the arrow int64 array to an arrow Dictionary array on-the-fly when loading the table in the ArrowReader.\r\n- then we still have to return integers when accessing examples from a ClassLabel column. Currently it would return the strings values since it's based on the pandas behavior for converting from pandas to python/numpy. To do so we just have to adapt the python/numpy extractors in formatting.py (it takes care of converting an arrow table to a dictionary of python objects by doing arrow table -> pandas dataframe -> python dictionary)\r\n\r\nAny help on this matter is very much welcome :)" ]
1,613,677,565,000
1,614,091,130,000
null
CONTRIBUTOR
null
``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table ``` I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`? e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept: ``` index_type = generate_from_arrow_type(pa_type.index_type) value_type = generate_from_arrow_type(pa_type.value_type) ``` and then additional code points to modify: - FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694 - A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719 - I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755 - Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775 I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints.
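A possible workaround sketch until `Categorical` is handled natively (not the library's own solution): convert the categorical column to integer codes and declare an explicit `ClassLabel` feature so the category names are preserved. The column name is illustrative.

```python
import pandas as pd

from datasets import ClassLabel, Dataset, Features

df = pd.DataFrame({"col": pd.Series(["a", "b", "c", "a"], dtype="category")})
names = list(df["col"].cat.categories)              # ['a', 'b', 'c']
df["col"] = df["col"].cat.codes.astype("int64")     # categorical -> integer codes
ds = Dataset.from_pandas(df, features=Features({"col": ClassLabel(names=names)}))
ds.features["col"].int2str(ds[0]["col"])            # 'a'
```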
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1906/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1898/comments
https://api.github.com/repos/huggingface/datasets/issues/1898/events
https://github.com/huggingface/datasets/issues/1898
810,157,251
MDU6SXNzdWU4MTAxNTcyNTE=
1,898
ALT dataset has repeating instances in all splits
{ "login": "10-zin", "id": 33179372, "node_id": "MDQ6VXNlcjMzMTc5Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/10-zin", "html_url": "https://github.com/10-zin", "followers_url": "https://api.github.com/users/10-zin/followers", "following_url": "https://api.github.com/users/10-zin/following{/other_user}", "gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}", "starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/10-zin/subscriptions", "organizations_url": "https://api.github.com/users/10-zin/orgs", "repos_url": "https://api.github.com/users/10-zin/repos", "events_url": "https://api.github.com/users/10-zin/events{/privacy}", "received_events_url": "https://api.github.com/users/10-zin/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting. This looks like a very bad issue. I'm looking into it", "I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch", "Thanks!!! works perfectly in the bleading edge master version", "Closed by #1899" ]
1,613,566,302,000
1,613,715,526,000
1,613,715,526,000
NONE
null
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits. It would be great if this could be fixed :) I've added a snapshot of the contents from the `explore-dataset` feature for quick reference. ![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1898/timeline
null
null
null
false