| Column | Type | Values |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.64B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–5.67k |
| title | string | lengths 1–290 |
| user | string | lengths 870–1.16k |
| labels | string | lengths 2–985 |
| state | string | 2 values |
| locked | string | 1 value |
| assignee | string | lengths 0–1.04k |
| assignees | string | lengths 2–3.92k |
| milestone | string | 9 values |
| comments | sequence | |
| created_at | int64 | 1,587B–1,680B |
| updated_at | int64 | 1,588B–1,680B |
| closed_at | float64 | 1,587B–1,680B (has nulls) |
| author_association | string | 3 values |
| active_lock_reason | string | 1 value |
| body | string | lengths 0–228k |
| reactions | string | lengths 191–196 |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | string | 1 value |
| state_reason | string | 4 values |
| pull_request | string | lengths 0–315 |
| is_pull_request | bool | 1 class |
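Each record below lists its fields one per line, roughly in the column order above, with empty fields apparently omitted. As a rough, non-authoritative sketch of how a dump with this schema could be consumed with the Hugging Face `datasets` library (assuming it is published as a Hub dataset; the repository id `user/github-issues` is a placeholder, not the actual name):

```python
from datasets import load_dataset

# Placeholder repository id; substitute the real location of this dump.
issues = load_dataset("user/github-issues", split="train")

# `state` has two classes in the schema above (the rows show "open"/"closed").
closed = issues.filter(lambda row: row["state"] == "closed")

# `created_at` / `updated_at` are integer millisecond timestamps.
first = closed[0]
print(first["number"], first["title"], first["created_at"])
```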
https://api.github.com/repos/huggingface/datasets/issues/175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/175/comments
https://api.github.com/repos/huggingface/datasets/issues/175/events
https://github.com/huggingface/datasets/issues/175
621,929,428
MDU6SXNzdWU2MjE5Mjk0Mjg=
175
[Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError
{'login': 'sshleifer', 'id': 6045025, 'node_id': 'MDQ6VXNlcjYwNDUwMjU=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6045025?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sshleifer', 'html_url': 'https://github.com/sshleifer', 'followers_url': 'https://api.github.com/users/sshleifer/followers', 'following_url': 'https://api.github.com/users/sshleifer/following{/other_user}', 'gists_url': 'https://api.github.com/users/sshleifer/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sshleifer/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sshleifer/subscriptions', 'organizations_url': 'https://api.github.com/users/sshleifer/orgs', 'repos_url': 'https://api.github.com/users/sshleifer/repos', 'events_url': 'https://api.github.com/users/sshleifer/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sshleifer/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,994,032,000
1,589,998,730,000
1,589,998,730,000
CONTRIBUTOR
v 0.1.0 from pip ```python import nlp xsum = nlp.load_dataset('xsum') ``` Issue is `dl_manager.manual_dir`is `None` ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-42-8a32f066f3bd> in <module> ----> 1 xsum = nlp.load_dataset('xsum') ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 397 split_dict = SplitDict(dataset_name=self.name) 398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 400 # Checksums verification 401 if verify_infos: ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager) 102 with open(dl_path, "r") as json_file: 103 split_ids = json.load(json_file) --> 104 downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads") 105 return [ 106 nlp.SplitGenerator( ~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p) 78 will be discarded. An empty last part will result in a path that 79 ends with a separator.""" ---> 80 a = os.fspath(a) 81 sep = _get_sep(a) 82 path = a TypeError: expected str, bytes or os.PathLike object, not NoneType ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/175/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/175/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/174/comments
https://api.github.com/repos/huggingface/datasets/issues/174/events
https://github.com/huggingface/datasets/issues/174
621,928,403
MDU6SXNzdWU2MjE5Mjg0MDM=
174
nlp.load_dataset('xsum') -> TypeError
{'login': 'sshleifer', 'id': 6045025, 'node_id': 'MDQ6VXNlcjYwNDUwMjU=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6045025?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sshleifer', 'html_url': 'https://github.com/sshleifer', 'followers_url': 'https://api.github.com/users/sshleifer/followers', 'following_url': 'https://api.github.com/users/sshleifer/following{/other_user}', 'gists_url': 'https://api.github.com/users/sshleifer/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sshleifer/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sshleifer/subscriptions', 'organizations_url': 'https://api.github.com/users/sshleifer/orgs', 'repos_url': 'https://api.github.com/users/sshleifer/repos', 'events_url': 'https://api.github.com/users/sshleifer/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sshleifer/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,993,949,000
1,589,996,626,000
1,589,996,626,000
CONTRIBUTOR
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/174/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/174/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/173/comments
https://api.github.com/repos/huggingface/datasets/issues/173/events
https://github.com/huggingface/datasets/pull/173
621,764,932
MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy
173
Rm extracted test dirs
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).", "Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!" ]
1,589,981,448,000
1,590,165,276,000
1,590,165,275,000
MEMBER
All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories Furthermore instead of extracting next to the dummy_data.zip file, we extract in the temp `cached_dir` used for tests, so that all the extracted directories get removed after testing. Finally there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding url arguments. I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the aws tests don't pass for those 2 in this PR, but they will once aws will be synced, as the local ones do) Let me know if it sounds good to you @patrickvonplaten . I'm still not entirely familiar with the mock downloader
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/173/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/173/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/173', 'html_url': 'https://github.com/huggingface/datasets/pull/173', 'diff_url': 'https://github.com/huggingface/datasets/pull/173.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/173.patch', 'merged_at': '2020-05-22T16:34:35Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/172/comments
https://api.github.com/repos/huggingface/datasets/issues/172/events
https://github.com/huggingface/datasets/issues/172
621,377,386
MDU6SXNzdWU2MjEzNzczODY=
172
Clone not working on Windows environment
{'login': 'codehunk628', 'id': 51091425, 'node_id': 'MDQ6VXNlcjUxMDkxNDI1', 'avatar_url': 'https://avatars.githubusercontent.com/u/51091425?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/codehunk628', 'html_url': 'https://github.com/codehunk628', 'followers_url': 'https://api.github.com/users/codehunk628/followers', 'following_url': 'https://api.github.com/users/codehunk628/following{/other_user}', 'gists_url': 'https://api.github.com/users/codehunk628/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/codehunk628/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/codehunk628/subscriptions', 'organizations_url': 'https://api.github.com/users/codehunk628/orgs', 'repos_url': 'https://api.github.com/users/codehunk628/repos', 'events_url': 'https://api.github.com/users/codehunk628/events{/privacy}', 'received_events_url': 'https://api.github.com/users/codehunk628/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}]
[ "Should be fixed on master now :)", "Thanks @lhoestq πŸ‘ Now I can uninstall WSL and get back to work with windows.πŸ™‚" ]
1,589,935,514,000
1,590,238,153,000
1,590,233,272,000
CONTRIBUTOR
Cloning in a windows environment is not working because of use of special character '?' in folder name .. Please consider changing the folder name .... Reference to folder - nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/stories/ error log: fatal: cannot create directory at 'datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/172/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/172/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/171/comments
https://api.github.com/repos/huggingface/datasets/issues/171/events
https://github.com/huggingface/datasets/pull/171
621,199,128
MDExOlB1bGxSZXF1ZXN0NDIwMjk0ODM0
171
fix squad metric format
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)", "This is kinda related to one thing I had in mind which is that we may want to be able to dump our model predictions in a `Dataset` as well so that we don't keep them in memory (and we can export them in a nice format later as well when we will have a serialization formats).\r\n\r\nMaybe this is overkill though, I haven't fully wraped my head around this.", "I'm also perfectly fine with merging this PR in the current state and working on a larger scope later.", "This is the format needed to run the official script directly. The format of the squad dataset is different from the input of the metric. \r\n\r\n> One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n> \r\n> (maybe it's not really possible in general though)\r\n\r\nOk I see. I'll try to use the same format", "Ok with this update I changed the format to fit the squad dataset format.\r\nNow you can do:\r\n```python\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```" ]
1,589,913,456,000
1,590,154,610,000
1,590,154,608,000
MEMBER
The format of the squad metric was wrong. This should fix #143 I tested with ```python3 predictions = [ {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ] references = [ {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ] ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/171/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/171/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/171', 'html_url': 'https://github.com/huggingface/datasets/pull/171', 'diff_url': 'https://github.com/huggingface/datasets/pull/171.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/171.patch', 'merged_at': '2020-05-22T13:36:48Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/170/comments
https://api.github.com/repos/huggingface/datasets/issues/170/events
https://github.com/huggingface/datasets/pull/170
621,119,747
MDExOlB1bGxSZXF1ZXN0NDIwMjMwMDIx
170
Rename anli dataset
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,905,617,000
1,589,977,389,000
1,589,977,388,000
MEMBER
What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge dataset. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)). I renamed the current `anli` dataset by `art`.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/170/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/170/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/170', 'html_url': 'https://github.com/huggingface/datasets/pull/170', 'diff_url': 'https://github.com/huggingface/datasets/pull/170.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/170.patch', 'merged_at': '2020-05-20T12:23:07Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/169/comments
https://api.github.com/repos/huggingface/datasets/issues/169/events
https://github.com/huggingface/datasets/pull/169
621,099,682
MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw
169
Adding Qanta (Quizbowl) Dataset
{'login': 'EntilZha', 'id': 1382460, 'node_id': 'MDQ6VXNlcjEzODI0NjA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1382460?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/EntilZha', 'html_url': 'https://github.com/EntilZha', 'followers_url': 'https://api.github.com/users/EntilZha/followers', 'following_url': 'https://api.github.com/users/EntilZha/following{/other_user}', 'gists_url': 'https://api.github.com/users/EntilZha/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/EntilZha/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/EntilZha/subscriptions', 'organizations_url': 'https://api.github.com/users/EntilZha/orgs', 'repos_url': 'https://api.github.com/users/EntilZha/repos', 'events_url': 'https://api.github.com/users/EntilZha/events{/privacy}', 'received_events_url': 'https://api.github.com/users/EntilZha/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}]
[ "Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is correct following the instructions here: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset ? \r\n\r\nIf the tests described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset pass we can merge the PR :-) ", "I updated to the most recent master and followed the steps, but still having the similar error where it can't find the correct file since the path to the directory is given, rather than the individual files within them. This still something wrong about how I'm inputting the data or how the tests are reading it?", "It's the dummy_data structure. You actually have to call the dummy data file name `dummy_data` (not .json anything). So there should not be a `dummy_data` folder but for each config only a `dummy_data` which contains your json dummy data. Can you maybe try once more - if it doesn't work I do it for you :-). ", "Would that work if there are multiple files? In my case, I'm including something similar to squad 1.0/2.0 where we have the main dataset + an additional challenge set in different files. Would I have the zip decompress to two files in that case?", "This dataset was actually a special case. It helped us improve the dummy data instructions :-), see #195 .Close this PR and merge #194." ]
1,589,904,181,000
1,590,497,551,000
1,590,497,551,000
CONTRIBUTOR
This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold) This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161 I ran the following code to double check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader. ```python import nlp # Default is full question data = nlp.load_dataset('./datasets/qanta') # Four configs # Primarily useful for training data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25') # Primarily used in evaluation data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25') data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25') # Primarily useful in evaluation and "live" play data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25') ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/169/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/169/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/169', 'html_url': 'https://github.com/huggingface/datasets/pull/169', 'diff_url': 'https://github.com/huggingface/datasets/pull/169.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/169.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/168/comments
https://api.github.com/repos/huggingface/datasets/issues/168/events
https://github.com/huggingface/datasets/issues/168
620,959,819
MDU6SXNzdWU2MjA5NTk4MTk=
168
Loading 'wikitext' dataset fails
{'login': 'itay1itzhak', 'id': 25987633, 'node_id': 'MDQ6VXNlcjI1OTg3NjMz', 'avatar_url': 'https://avatars.githubusercontent.com/u/25987633?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/itay1itzhak', 'html_url': 'https://github.com/itay1itzhak', 'followers_url': 'https://api.github.com/users/itay1itzhak/followers', 'following_url': 'https://api.github.com/users/itay1itzhak/following{/other_user}', 'gists_url': 'https://api.github.com/users/itay1itzhak/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/itay1itzhak/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/itay1itzhak/subscriptions', 'organizations_url': 'https://api.github.com/users/itay1itzhak/orgs', 'repos_url': 'https://api.github.com/users/itay1itzhak/repos', 'events_url': 'https://api.github.com/users/itay1itzhak/events{/privacy}', 'received_events_url': 'https://api.github.com/users/itay1itzhak/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128", "Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.", "Closing as it is a duplicate", "Hi,\r\nThe squad bug seems to be fixed, but the loading of the 'wikitext' still suffers from this problem (on Colab with pyarrow=0.17.1).", "When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.", "That was it, thanks!" ]
1,589,893,469,000
1,590,529,612,000
1,590,529,612,000
NONE
Loading the 'wikitext' dataset fails with Attribute error: Code to reproduce (From example notebook): import nlp wikitext_dataset = nlp.load_dataset('wikitext') Error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-17-d5d9df94b13c> in <module>() 11 12 # Load a dataset and print the first examples in the training set ---> 13 wikitext_dataset = nlp.load_dataset('wikitext') 14 print(wikitext_dataset['train'][0]) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 363 verify_infos = not save_infos and not ignore_verifications 364 self._download_and_prepare( --> 365 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 366 ) 367 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 416 try: 417 # Prepare split will record examples associated to the split --> 418 self._prepare_split(split_generator, **prepare_split_kwargs) 419 except OSError: 420 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 594 example = self.info.features.encode_example(record) 595 writer.write(example) --> 596 num_examples, num_bytes = writer.finalize() 597 598 assert num_examples == num_examples, f"Expected to write {split_info.num_examples} but wrote {num_examples}" /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in finalize(self, close_stream) 173 def finalize(self, close_stream=True): 174 if self.pa_writer is not None: --> 175 self.write_on_file() 176 self.pa_writer.close() 177 if close_stream: /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self) 124 else: 125 # All good --> 126 self._write_array_on_file(pa_array) 127 self.current_rows = [] 128 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array) 93 def _write_array_on_file(self, pa_array): 94 """Write a PyArrow Array""" ---> 95 pa_batch = pa.RecordBatch.from_struct_array(pa_array) 96 self._num_bytes += pa_array.nbytes 97 self.pa_writer.write_batch(pa_batch) AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/168/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/168/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/167/comments
https://api.github.com/repos/huggingface/datasets/issues/167/events
https://github.com/huggingface/datasets/pull/167
620,908,786
MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw
167
[Tests] refactor tests
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Nice !" ]
1,589,888,612,000
1,589,905,032,000
1,589,905,030,000
MEMBER
This PR separates AWS and Local tests to remove these ugly statements in the script: ```python if "/" not in dataset_name: logging.info("Skip {} because it is a canonical dataset") return ``` To run a `aws` test, one should now run the following command: ```python pytest -s tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14 ``` The same `local` test, can be run with: ```python pytest -s tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14 ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/167/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/167/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/167', 'html_url': 'https://github.com/huggingface/datasets/pull/167', 'diff_url': 'https://github.com/huggingface/datasets/pull/167.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/167.patch', 'merged_at': '2020-05-19T16:17:10Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/166/comments
https://api.github.com/repos/huggingface/datasets/issues/166/events
https://github.com/huggingface/datasets/issues/166
620,850,218
MDU6SXNzdWU2MjA4NTAyMTg=
166
Add a method to shuffle a dataset
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067400324, 'node_id': 'MDU6TGFiZWwyMDY3NDAwMzI0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion', 'name': 'generic discussion', 'color': 'c5def5', 'default': False, 'description': 'Generic discussion on the library'}]
closed
False
[]
[ "+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)", "+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster than do shuffle in dataset, especially when doing shuffle every epoch.\r\n\r\nAlso +1 for the naming convention.", "As you might already know the issue of dataset shuffling came up in the nlp code [walkthrough](https://youtu.be/G3pOvrKkFuk?t=3204) by Yannic Kilcher\r\n", "We added the `.shuffle` method :)\r\n\r\nClosing this one." ]
1,589,882,926,000
1,592,924,853,000
1,592,924,852,000
MEMBER
Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method. Also, we could maybe have a clear indication of which method modify in-place and which methods return/cache a modified dataset. I kinda like torch conversion of having an underscore suffix for all the methods which modify a dataset in-place. What do you think?
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/166/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/166/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/165/comments
https://api.github.com/repos/huggingface/datasets/issues/165/events
https://github.com/huggingface/datasets/issues/165
620,758,221
MDU6SXNzdWU2MjA3NTgyMjE=
165
ANLI
{'login': 'douwekiela', 'id': 6024930, 'node_id': 'MDQ6VXNlcjYwMjQ5MzA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6024930?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/douwekiela', 'html_url': 'https://github.com/douwekiela', 'followers_url': 'https://api.github.com/users/douwekiela/followers', 'following_url': 'https://api.github.com/users/douwekiela/following{/other_user}', 'gists_url': 'https://api.github.com/users/douwekiela/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/douwekiela/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/douwekiela/subscriptions', 'organizations_url': 'https://api.github.com/users/douwekiela/orgs', 'repos_url': 'https://api.github.com/users/douwekiela/repos', 'events_url': 'https://api.github.com/users/douwekiela/events{/privacy}', 'received_events_url': 'https://api.github.com/users/douwekiela/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,874,657,000
1,589,977,387,000
1,589,977,387,000
NONE
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART". The current naming will confuse people :)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/165/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/165/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/164/comments
https://api.github.com/repos/huggingface/datasets/issues/164/events
https://github.com/huggingface/datasets/issues/164
620,540,250
MDU6SXNzdWU2MjA1NDAyNTA=
164
Add Spanish POR and NER Datasets
{'login': 'mrm8488', 'id': 3653789, 'node_id': 'MDQ6VXNlcjM2NTM3ODk=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3653789?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mrm8488', 'html_url': 'https://github.com/mrm8488', 'followers_url': 'https://api.github.com/users/mrm8488/followers', 'following_url': 'https://api.github.com/users/mrm8488/following{/other_user}', 'gists_url': 'https://api.github.com/users/mrm8488/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mrm8488/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mrm8488/subscriptions', 'organizations_url': 'https://api.github.com/users/mrm8488/orgs', 'repos_url': 'https://api.github.com/users/mrm8488/repos', 'events_url': 'https://api.github.com/users/mrm8488/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mrm8488/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
closed
False
[]
[ "Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?", "What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?" ]
1,589,840,301,000
1,590,424,125,000
1,590,424,125,000
CONTRIBUTOR
Hi guys, In order to cover multilingual support a little step could be adding standard Datasets used for Spanish NER and POS tasks. I can provide it in raw and preprocessed formats.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/164/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/164/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/163/comments
https://api.github.com/repos/huggingface/datasets/issues/163/events
https://github.com/huggingface/datasets/issues/163
620,534,307
MDU6SXNzdWU2MjA1MzQzMDc=
163
[Feature request] Add cos-e v1.0
{'login': 'sarahwie', 'id': 8027676, 'node_id': 'MDQ6VXNlcjgwMjc2NzY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8027676?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sarahwie', 'html_url': 'https://github.com/sarahwie', 'followers_url': 'https://api.github.com/users/sarahwie/followers', 'following_url': 'https://api.github.com/users/sarahwie/following{/other_user}', 'gists_url': 'https://api.github.com/users/sarahwie/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sarahwie/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sarahwie/subscriptions', 'organizations_url': 'https://api.github.com/users/sarahwie/orgs', 'repos_url': 'https://api.github.com/users/sarahwie/repos', 'events_url': 'https://api.github.com/users/sarahwie/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sarahwie/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
closed
False
[]
[ "Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann", "cos_e v1.0 is related to CQA v1.0 but only CQA v1.11 dataset is available on their website. Indeed their is lots of ids in cos_e v1, which are not in CQA v1.11 or the other way around.\r\n@sarahwie, @thomwolf, @nazneenrajani, @bmccann do you know where I can find CQA v1.0\r\n", "@mariamabarham I'm also not sure where to find CQA 1.0. Perhaps it's not possible to include this version of the dataset. I'll close the issue if that's the case.", "I do have a copy of the dataset. I can upload it to our repo.", "Great @nazneenrajani. let me know once done.\r\nThanks", "@mariamabarham @sarahwie I added them to the cos-e repo https://github.com/salesforce/cos-e/tree/master/data/v1.0", "You can now do\r\n```python\r\nfrom nlp import load_dataset\r\ncos_e = load_dataset(\"cos_e\", \"v1.0\")\r\n```\r\nThanks @mariamabarham !", "Thanks!", "@mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended). ", "> @mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended).\r\n\r\nIn the new version of `nlp`, if you try `cos_e = load_dataset(\"cos_e\")` it throws this error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['v1.0', 'v1.11']\r\nExample of usage:\r\n\t`load_dataset('cos_e', 'v1.0')`\r\n```\r\nFor datasets with at least two configurations, we now force the user to pick one (no default)" ]
1,589,839,526,000
1,592,349,325,000
1,592,333,526,000
NONE
I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf).
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/163/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/163/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/162/comments
https://api.github.com/repos/huggingface/datasets/issues/162/events
https://github.com/huggingface/datasets/pull/162
620,513,554
MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky
162
fix prev files hash in map
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Awesome! ", "Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified", "Perfect then :)" ]
1,589,836,851,000
1,589,837,781,000
1,589,837,780,000
MEMBER
Fix the `.map` issue in #160. This makes sure it takes the previous files when computing the hash.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/162/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/162/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/162', 'html_url': 'https://github.com/huggingface/datasets/pull/162', 'diff_url': 'https://github.com/huggingface/datasets/pull/162.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/162.patch', 'merged_at': '2020-05-18T21:36:20Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/161/comments
https://api.github.com/repos/huggingface/datasets/issues/161/events
https://github.com/huggingface/datasets/issues/161
620,487,535
MDU6SXNzdWU2MjA0ODc1MzU=
161
Discussion on version identifier & MockDataLoaderManager for test data
{'login': 'EntilZha', 'id': 1382460, 'node_id': 'MDQ6VXNlcjEzODI0NjA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1382460?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/EntilZha', 'html_url': 'https://github.com/EntilZha', 'followers_url': 'https://api.github.com/users/EntilZha/followers', 'following_url': 'https://api.github.com/users/EntilZha/following{/other_user}', 'gists_url': 'https://api.github.com/users/EntilZha/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/EntilZha/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/EntilZha/subscriptions', 'organizations_url': 'https://api.github.com/users/EntilZha/orgs', 'repos_url': 'https://api.github.com/users/EntilZha/repos', 'events_url': 'https://api.github.com/users/EntilZha/events{/privacy}', 'received_events_url': 'https://api.github.com/users/EntilZha/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067400324, 'node_id': 'MDU6TGFiZWwyMDY3NDAwMzI0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion', 'name': 'generic discussion', 'color': 'c5def5', 'default': False, 'description': 'Generic discussion on the library'}]
open
False
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}]
[ "usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ", "I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more sanity checks/tests (just got tests passing).\r\n\r\nI figured out how to get all tests passing by adding a download command and some finagling with the data zip https://github.com/EntilZha/nlp/blob/master/tests/utils.py#L127\r\n\r\n", "I'm quite positive that you can just replace the `dl_manager.download()` statements here: https://github.com/EntilZha/nlp/blob/4d46443b65f1f756921db8275594e6af008a1de7/datasets/qanta/qanta.py#L194 with `dl_manager.download_and_extract()` even though you don't extract anything. I would prefer to avoid adding more functions to the MockDataLoadManager and keep it as simple as possible (It's already to complex now IMO). \r\n\r\nCould you check if you can replace the `download()` function? ", "I might be doing something wrong, but swapping those two gives this error:\r\n```\r\n> with open(path) as f:\r\nE IsADirectoryError: [Errno 21] Is a directory: 'datasets/qanta/dummy/mode=first,char_skip=25/2018.4.18/dummy_data-zip-extracted/dummy_data'\r\n\r\nsrc/nlp/datasets/qanta/3d965403133687b819905ead4b69af7bcee365865279b2f797c79f809b4490c3/qanta.py:280: IsADirectoryError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nSo it seems like the directory name is getting passed. Is this not functioning as expected, or is there some caching happening maybe? I deleted the dummy files and re-ran the import script with no changes. I'm digging a bit in with a debugger, but no clear reason yet", "From what I can tell here: https://github.com/huggingface/nlp/blob/master/tests/utils.py#L115\r\n\r\n1. `data_url` is the correct http link\r\n2. `path_to_dummy_data` is a directory, which is causing the issue\r\n\r\nThat path comes from `download_dummy_data`, which I think assumes that the data comes from the zip file, but isn't aware of individual files. So it seems like it data manager needs to be aware if the url its getting is for a file or a zip/directory, and pass this information along. This might happen in `download_dummy_data`, but probably better to happen in `download_and_extract`? Maybe a simple check to see if `os.path.basename` returns the dummy data zip filename, if not then join paths with the basename of the url?", "I think the dataset script works correctly. Just the dummy data structure seems to be wrong. I will soon add more commands that should make the create of the dummy data easier.\r\n\r\nI'd recommend that you won't concentrate too much on the dummy data.\r\nIf you manage to load the dataset correctly via:\r\n\r\n```python \r\n# use local path to qanta\r\nnlp.load_dataset(\"./datasets/qanta\")\r\n```\r\n\r\nthen feel free to open a PR and we will look into the dummy data problem together :-) \r\n\r\nAlso please make sure that the Version is in the format 1.0.0 (three numbers separated by two points) - not a date. 
", "The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n\r\nOn version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?", "> The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n> \r\n> On version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?\r\n\r\nIt would cause issues for sure for the tests....not sure if it would also cause issues otherwise.\r\n\r\nI would prefer to keep the same version style as we have for other models. You could for example simply add version 1.0.0 and add a comment with the date you currently use for the versioning.\r\n\r\n What is your opinion regarding the version here @lhoestq @mariamabarham @thomwolf ? ", "Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia", "> Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia\r\n\r\nI'm not sure if this will work because the name should be unique and it seems that he has multiple config name in his data with the same version.\r\nAs @patrickvonplaten suggested, I think you can add a comment about the version in the data description.", "Actually maybe our versioning format (inherited from tfds) is too strong for what we use it for?\r\nWe could allow any string maybe?\r\n\r\nI see it more and more like an identifier for the user that we will back with a serious hashing/versioning system.- so we could let the user quite free on it.", "I'm good with either putting it in description, adding it to the config, or loosening version formatting. I mostly don't have a full conceptual grasp of what each identifier ends up meaning in the datasets code so hard to evaluate the best approach.\r\n\r\nFor background, the multiple formats is a consequence of:\r\n\r\n1. Each example is one multi-sentence trivia question\r\n2. For training, its better to treat each sentence as an example\r\n3. For evaluation, should test on: (1) first sentence, (2) full question, and (3) partial questions (does the model get the question right having seen the first half)\r\n\r\nWe use the date format for version since: (1) we expect some degree of updates since new questions come in every year and (2) the timestamp itself matches the Wikipedia dump that it is dependent on (so similar to the Wikipedia dataset).\r\n\r\nperhaps this is better discussed in https://github.com/huggingface/nlp/pull/169 or update title?" ]
1,589,833,890,000
1,590,343,803,000
null
CONTRIBUTOR
Hi, I'm working on adding a dataset and ran into an error because `download` is not defined on `MockDataLoaderManager`, although it is defined in `nlp/utils/download_manager.py`. The README step that runs `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done.
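A minimal sketch of the workaround discussed in the comments of this issue: the mock download manager used by the slow tests handles `download_and_extract()`, so calling that instead of a bare `download()` avoids the missing method. The class name, URL, and features below are placeholders, not the actual qanta script, and `download_and_extract()` is assumed to behave like a plain download for non-archive files.

```python
import nlp  # the library later renamed to `datasets`


class MyDataset(nlp.GeneratorBasedBuilder):
    """Illustrative builder only, not the real qanta script."""

    def _info(self):
        return nlp.DatasetInfo(
            features=nlp.Features({"text": nlp.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        urls = {"train": "https://example.com/train.jsonl"}  # placeholder URL
        # download_and_extract() is the call the test MockDataLoaderManager supports,
        # whereas a bare download() does not exist on the mock.
        paths = dl_manager.download_and_extract(urls)
        return [
            nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
        ]

    def _generate_examples(self, filepath):
        # one plain-text example per line in the downloaded file
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```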
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/161/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/161/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/160/comments
https://api.github.com/repos/huggingface/datasets/issues/160/events
https://github.com/huggingface/datasets/issues/160
620,448,236
MDU6SXNzdWU2MjA0NDgyMzY=
160
caching in map causes same result to be returned for train, validation and test
{'login': 'dpressel', 'id': 247881, 'node_id': 'MDQ6VXNlcjI0Nzg4MQ==', 'avatar_url': 'https://avatars.githubusercontent.com/u/247881?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/dpressel', 'html_url': 'https://github.com/dpressel', 'followers_url': 'https://api.github.com/users/dpressel/followers', 'following_url': 'https://api.github.com/users/dpressel/following{/other_user}', 'gists_url': 'https://api.github.com/users/dpressel/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/dpressel/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/dpressel/subscriptions', 'organizations_url': 'https://api.github.com/users/dpressel/orgs', 'repos_url': 'https://api.github.com/users/dpressel/repos', 'events_url': 'https://api.github.com/users/dpressel/events{/privacy}', 'received_events_url': 'https://api.github.com/users/dpressel/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067388877, 'node_id': 'MDU6TGFiZWwyMDY3Mzg4ODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug', 'name': 'dataset bug', 'color': '2edb81', 'default': False, 'description': 'A bug in a dataset script provided in the library'}]
closed
False
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}]
[ "Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ", "Hi, the full example was listed in the PR above, but here is the exact link:\r\n\r\nhttps://github.com/dpressel/mead-baseline/blob/3c1aa3ca062cb23f303ca98ac40b6652b37ee971/api-examples/layers-classify-hf-datasets.py\r\n\r\nThe problem is coming from\r\n```\r\n if cache_file_name is None:\r\n # we create a unique hash from the function, current dataset file and the mapping args\r\n cache_kwargs = {\r\n \"with_indices\": with_indices,\r\n \"batched\": batched,\r\n \"batch_size\": batch_size,\r\n \"remove_columns\": remove_columns,\r\n \"keep_in_memory\": keep_in_memory,\r\n \"load_from_cache_file\": load_from_cache_file,\r\n \"cache_file_name\": cache_file_name,\r\n \"writer_batch_size\": writer_batch_size,\r\n \"arrow_schema\": arrow_schema,\r\n \"disable_nullable\": disable_nullable,\r\n }\r\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\r\n```\r\nThe cached value is always the same, but I was able to change that by just renaming the function each time which seems to fix the issue.", "Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq ", "This fixed my issue (I think)\r\n\r\nhttps://github.com/dpressel/mead-baseline/commit/48aa8ecde4b307bd3e7dde5fe71e43a1d4769ee1", "> Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq\r\n\r\nOh, awesome! I see the PR, Ill check it out", "The PR should prevent the cache from losing track of the of the dataset type (based on the location of its data). Not sure about your second problem though (cache off).", "Yes, with caching on, it seems to work without the function renaming hack, I mentioned this also in the PR. Thanks!" ]
1,589,829,723,000
1,589,837,780,000
1,589,837,780,000
NONE
hello, I am working on a program that uses the `nlp` library with the `SST2` dataset. The rough outline of the program is: ``` import nlp as nlp_datasets ... parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+') ... dataset = nlp_datasets.load_dataset(*args.dataset) ... # Create feature vocabs vocabs = create_vocabs(dataset.values(), vectorizers) ... # Create a function to vectorize based on vectorizers and vocabs: print('TS', train_set.num_rows) print('VS', valid_set.num_rows) print('ES', test_set.num_rows) # factory method to create a `convert_to_features` function based on vocabs convert_to_features = create_featurizer(vectorizers, vocabs) train_set = train_set.map(convert_to_features, batched=True) train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz) valid_set = valid_set.map(convert_to_features, batched=True) valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz) test_set = test_set.map(convert_to_features, batched=True) test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz) print('TS', train_set.num_rows) print('VS', valid_set.num_rows) print('ES', test_set.num_rows) ``` Im not sure if Im using it incorrectly, but the results are not what I expect. Namely, the `.map()` seems to grab the datset from the cache and then loses track of what the specific dataset is, instead using my training data for all datasets: ``` TS 67349 VS 872 ES 1821 TS 67349 VS 67349 ES 67349 ``` The behavior changes if I turn off the caching but then the results fail: ``` train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False) ... valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False) ... test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False) ``` Now I get the right set of features back... 
``` TS 67349 VS 872 ES 1821 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 68/68 [00:00<00:00, 92.78it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 75.47it/s] 0%| | 0/2 [00:00<?, ?it/s]TS 67349 VS 872 ES 1821 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 77.19it/s] ``` but I think its losing track of the original training set: ``` Traceback (most recent call last): File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module> for x in train_loader: File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__ output_all_columns=self._output_all_columns, File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem outputs = self._unnest(self._data.slice(key, 1).to_pydict()) File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000) Process finished with exit code 1 ``` The full-example program (minus the print stmts) is here: https://github.com/dpressel/mead-baseline/pull/620/files
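A hedged sketch of the per-split caching workaround this thread converges on: since the cache file name was derived from the mapped function and its kwargs (see the `cache_kwargs` snippet quoted in the comments above) rather than from the split, giving each split an explicit, distinct `cache_file_name` keeps the processed results separate. The featurizer below is a stand-in, not the report's `create_featurizer`.

```python
import nlp

train_set = nlp.load_dataset("glue", "sst2", split="train")
valid_set = nlp.load_dataset("glue", "sst2", split="validation")


def convert_to_features(batch):
    # stand-in featurizer: just records a token count per sentence
    return {"lengths": [len(s.split()) for s in batch["sentence"]]}


# Explicit, distinct cache files per split avoid the collision described above.
train_set = train_set.map(convert_to_features, batched=True, cache_file_name="sst2_train_feats.arrow")
valid_set = valid_set.map(convert_to_features, batched=True, cache_file_name="sst2_valid_feats.arrow")

print("TS", train_set.num_rows)  # expected 67349
print("VS", valid_set.num_rows)  # expected 872
```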
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/160/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/160/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/159/comments
https://api.github.com/repos/huggingface/datasets/issues/159/events
https://github.com/huggingface/datasets/issues/159
620,420,700
MDU6SXNzdWU2MjA0MjA3MDA=
159
How can we add more datasets to nlp library?
{'login': 'Tahsin-Mayeesha', 'id': 17886829, 'node_id': 'MDQ6VXNlcjE3ODg2ODI5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17886829?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Tahsin-Mayeesha', 'html_url': 'https://github.com/Tahsin-Mayeesha', 'followers_url': 'https://api.github.com/users/Tahsin-Mayeesha/followers', 'following_url': 'https://api.github.com/users/Tahsin-Mayeesha/following{/other_user}', 'gists_url': 'https://api.github.com/users/Tahsin-Mayeesha/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Tahsin-Mayeesha/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Tahsin-Mayeesha/subscriptions', 'organizations_url': 'https://api.github.com/users/Tahsin-Mayeesha/orgs', 'repos_url': 'https://api.github.com/users/Tahsin-Mayeesha/repos', 'events_url': 'https://api.github.com/users/Tahsin-Mayeesha/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Tahsin-Mayeesha/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Found it. https://github.com/huggingface/nlp/tree/master/datasets" ]
1,589,826,931,000
1,589,827,028,000
1,589,827,027,000
NONE
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/159/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/159/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/158/comments
https://api.github.com/repos/huggingface/datasets/issues/158/events
https://github.com/huggingface/datasets/pull/158
620,396,658
MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy
158
add Toronto Books Corpus
{'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/mariamabarham/followers', 'following_url': 'https://api.github.com/users/mariamabarham/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariamabarham/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariamabarham/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariamabarham/subscriptions', 'organizations_url': 'https://api.github.com/users/mariamabarham/orgs', 'repos_url': 'https://api.github.com/users/mariamabarham/repos', 'events_url': 'https://api.github.com/users/mariamabarham/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariamabarham/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,824,485,000
1,591,861,755,000
1,589,873,696,000
CONTRIBUTOR
This PR adds the Toronto Books Corpus. It only considers the TMX and plain text (Moses) files defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php).
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/158/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/158/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/158', 'html_url': 'https://github.com/huggingface/datasets/pull/158', 'diff_url': 'https://github.com/huggingface/datasets/pull/158.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/158.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/157/comments
https://api.github.com/repos/huggingface/datasets/issues/157/events
https://github.com/huggingface/datasets/issues/157
620,356,542
MDU6SXNzdWU2MjAzNTY1NDI=
157
nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
{'login': 'saahiluppal', 'id': 47444392, 'node_id': 'MDQ6VXNlcjQ3NDQ0Mzky', 'avatar_url': 'https://avatars.githubusercontent.com/u/47444392?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/saahiluppal', 'html_url': 'https://github.com/saahiluppal', 'followers_url': 'https://api.github.com/users/saahiluppal/followers', 'following_url': 'https://api.github.com/users/saahiluppal/following{/other_user}', 'gists_url': 'https://api.github.com/users/saahiluppal/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/saahiluppal/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/saahiluppal/subscriptions', 'organizations_url': 'https://api.github.com/users/saahiluppal/orgs', 'repos_url': 'https://api.github.com/users/saahiluppal/repos', 'events_url': 'https://api.github.com/users/saahiluppal/events{/privacy}', 'received_events_url': 'https://api.github.com/users/saahiluppal/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}]
[ "You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`", "If you want to load a local dataset, make sure you include a `./` before the folder name. ", "This happens by just doing run all cells on colab ... I assumed the colab example is broken. ", "Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n```\r\n!pip uninstall -y -qq pyarrow\r\n!pip uninstall -y -qq nlp\r\n!pip install -qq git+https://github.com/huggingface/nlp.git\r\n```", "> Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n> \r\n> ```\r\n> !pip uninstall -y -qq pyarrow\r\n> !pip uninstall -y -qq nlp\r\n> !pip install -qq git+https://github.com/huggingface/nlp.git\r\n> ```\r\n\r\nTried, having the same error.", "Can you post a link here of your colab? I'll make a copy of it and see what's wrong", "This should be fixed in the current version of the notebook. You can try it again", "Also see: https://github.com/huggingface/nlp/issues/222", "I am getting this error when running this command\r\n```\r\nval = nlp.load_dataset('squad', split=\"validation\")\r\n```\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/dataset_info.json'\r\n\r\nCan anybody help?", "It seems like your download was corrupted :-/ Can you run the following command: \r\n\r\n```\r\nrm -r /root/.cache/huggingface/datasets\r\n```\r\n\r\nto delete the cache completely and rerun the download? ", "I tried the notebook again today and it worked without barfing. πŸ‘Œ " ]
1,589,820,398,000
1,591,344,538,000
1,591,344,538,000
NONE
I'm trying to load datasets from nlp, but there seems to be an error saying "TypeError: list_() takes exactly one argument (2 given)". The gist can be found here: https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
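For reference, this is the sequence suggested later in the thread, written out as Colab cells; the final `print` is only a sanity check and assumes the standard SQuAD feature names.

```python
# Cell 1: replace the stale pyarrow / nlp pair with the master version, then restart the runtime
!pip uninstall -y -qq pyarrow
!pip uninstall -y -qq nlp
!pip install -qq git+https://github.com/huggingface/nlp.git

# Cell 2 (after the restart): the load should now succeed
import nlp

val = nlp.load_dataset("squad", split="validation")
print(val[0]["question"])
```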
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/157/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/157/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/156/comments
https://api.github.com/repos/huggingface/datasets/issues/156/events
https://github.com/huggingface/datasets/issues/156
620,263,687
MDU6SXNzdWU2MjAyNjM2ODc=
156
SyntaxError with WMT datasets
{'login': 'tomhosking', 'id': 9419158, 'node_id': 'MDQ6VXNlcjk0MTkxNTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/9419158?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/tomhosking', 'html_url': 'https://github.com/tomhosking', 'followers_url': 'https://api.github.com/users/tomhosking/followers', 'following_url': 'https://api.github.com/users/tomhosking/following{/other_user}', 'gists_url': 'https://api.github.com/users/tomhosking/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/tomhosking/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/tomhosking/subscriptions', 'organizations_url': 'https://api.github.com/users/tomhosking/orgs', 'repos_url': 'https://api.github.com/users/tomhosking/repos', 'events_url': 'https://api.github.com/users/tomhosking/events{/privacy}', 'received_events_url': 'https://api.github.com/users/tomhosking/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}]
[ "Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !", "Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-3206959998b9> in <module>\r\n 1 import nlp\r\n 2 \r\n----> 3 dataset = nlp.load_dataset('wmt14')\r\n 4 print(dataset['train'][0])\r\n\r\n~/.local/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 507 # Instantiate the dataset builder\r\n 508 builder_instance = builder_cls(\r\n--> 509 cache_dir=cache_dir, name=name, version=version, data_dir=data_dir, data_files=data_files, **config_kwargs,\r\n 510 )\r\n 511 \r\n\r\nTypeError: Can't instantiate abstract class Wmt with abstract methods _subsets\r\n```\r\n\r\n", "To correct this error I think you need the master branch of `nlp`. Can you try to install `nlp` with. `WMT` was not included at the beta release of the library. \r\n\r\nCan you try:\r\n`pip install git+https://github.com/huggingface/nlp.git`\r\n\r\nand check again? ", "That works, thanks :)\r\n\r\nThe WMT datasets are listed in by `list_datasets()` in the beta release on pypi - it would be good to only show datasets that are actually supported by that version?", "Usually, the idea is that a dataset can be added without releasing a new version. The problem in the case of `WMT` was that some \"core\" code of the library had to be changed as well. \r\n\r\n@thomwolf @lhoestq @julien-c - How should we go about this. If we add a dataset that also requires \"core\" code changes, how do we handle the versioning? The moment a dataset is on AWS it will actually be listed with `list_datasets()` in all earlier versions...\r\n\r\nIs there a way to somehow insert the `pip version` to the HfApi() and get only the datasets that were available for this version (at the date of the release of the version) @julien-c ? ", "We plan to have something like a `requirements.txt` per dataset to prevent user from loading dataset with old version of `nlp` or any other libraries. Right now the solution is just to keep `nlp` up to date when you want to load a dataset that leverages the latests features of `nlp`.\r\n\r\nFor datasets that are on AWS but that use features that are not released yet we should be able to filter those from the `list_dataset` as soon as we have the `requirements.txt` feature on (filter datasets that need a future version of `nlp`).\r\n\r\nShall we rename this issue to be more explicit about the problem ?\r\nSomething like `Specify the minimum version of the nlp library required for each dataset` ?", "Closing this one.\r\nFeel free to re-open if you have other questions :)" ]
1,589,812,698,000
1,595,522,515,000
1,595,522,515,000
NONE
The following snippet produces a syntax error: ``` import nlp dataset = nlp.load_dataset('wmt14') print(dataset['train'][0]) ``` ``` Traceback (most recent call last): File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-8-3206959998b9>", line 3, in <module> dataset = nlp.load_dataset('wmt14') File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset builder_cls = import_main_class(module_path, dataset=True) File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module> from .wmt_utils import Wmt, WmtConfig File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659 <<<<<<< HEAD ^ SyntaxError: invalid syntax ``` Python version: `3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]` Running on Ubuntu 18.04, via a Jupyter notebook
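Once the master version is installed (`pip install git+https://github.com/huggingface/nlp.git`, as suggested in the comments above), the WMT scripts import cleanly. A hedged example of loading a specific language pair follows; the `"fr-en"` config name is illustrative, and the download is large.

```python
import nlp

# "fr-en" is one example language-pair config; adjust to the pair you need.
dataset = nlp.load_dataset("wmt14", "fr-en")
print(dataset["train"][0])
```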
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/156/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/156/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/155/comments
https://api.github.com/repos/huggingface/datasets/issues/155/events
https://github.com/huggingface/datasets/pull/155
620,067,946
MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0
155
Include more links in README, fix typos
{'login': 'Bharat123rox', 'id': 13381361, 'node_id': 'MDQ6VXNlcjEzMzgxMzYx', 'avatar_url': 'https://avatars.githubusercontent.com/u/13381361?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Bharat123rox', 'html_url': 'https://github.com/Bharat123rox', 'followers_url': 'https://api.github.com/users/Bharat123rox/followers', 'following_url': 'https://api.github.com/users/Bharat123rox/following{/other_user}', 'gists_url': 'https://api.github.com/users/Bharat123rox/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Bharat123rox/subscriptions', 'organizations_url': 'https://api.github.com/users/Bharat123rox/orgs', 'repos_url': 'https://api.github.com/users/Bharat123rox/repos', 'events_url': 'https://api.github.com/users/Bharat123rox/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Bharat123rox/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "I fixed a conflict :) thanks !" ]
1,589,795,228,000
1,590,654,717,000
1,590,654,717,000
CONTRIBUTOR
Include more links and fix typos in README
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/155/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/155/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/155', 'html_url': 'https://github.com/huggingface/datasets/pull/155', 'diff_url': 'https://github.com/huggingface/datasets/pull/155.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/155.patch', 'merged_at': '2020-05-28T08:31:57Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/154/comments
https://api.github.com/repos/huggingface/datasets/issues/154/events
https://github.com/huggingface/datasets/pull/154
620,059,066
MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw
154
add Ubuntu Dialogs Corpus datasets
{'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/mariamabarham/followers', 'following_url': 'https://api.github.com/users/mariamabarham/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariamabarham/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariamabarham/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariamabarham/subscriptions', 'organizations_url': 'https://api.github.com/users/mariamabarham/orgs', 'repos_url': 'https://api.github.com/users/mariamabarham/repos', 'events_url': 'https://api.github.com/users/mariamabarham/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariamabarham/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,794,488,000
1,589,796,748,000
1,589,796,747,000
CONTRIBUTOR
This PR adds the Ubuntu Dialogue Corpus datasets, version 2.0.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/154/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/154/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/154', 'html_url': 'https://github.com/huggingface/datasets/pull/154', 'diff_url': 'https://github.com/huggingface/datasets/pull/154.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/154.patch', 'merged_at': '2020-05-18T10:12:27Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/153/comments
https://api.github.com/repos/huggingface/datasets/issues/153/events
https://github.com/huggingface/datasets/issues/153
619,972,246
MDU6SXNzdWU2MTk5NzIyNDY=
153
Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067400324, 'node_id': 'MDU6TGFiZWwyMDY3NDAwMzI0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion', 'name': 'generic discussion', 'color': 'c5def5', 'default': False, 'description': 'Generic discussion on the library'}]
open
False
[]
[ "As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.", "Actually, double checking with @mariamabarham, we already have this feature I think.\r\n\r\nIt's like this currently:\r\n```python\r\n>>> from nlp import load_dataset\r\n>>> \r\n>>> dataset = load_dataset('glue', 'cola', split='train')\r\n>>> print(dataset.info.citation)\r\n@article{warstadt2018neural,\r\n title={Neural Network Acceptability Judgments},\r\n author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},\r\n journal={arXiv preprint arXiv:1805.12471},\r\n year={2018}\r\n}\r\n@inproceedings{wang2019glue,\r\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\r\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\r\n note={In the Proceedings of ICLR.},\r\n year={2019}\r\n}\r\n\r\nNote that each GLUE dataset has its own citation. Please see the source to see\r\nthe correct citation for each contained dataset.\r\n```\r\n\r\nWhat do you think @dseddah?", "Looks good but why would there be a difference between the ref in the source and the one to be printed? ", "Yes, I think we should remove this warning @mariamabarham.\r\n\r\nIt's probably a relic of tfds which didn't have the same way to access citations. " ]
1,589,786,662,000
1,589,836,696,000
null
MEMBER
Meta-datasets are interesting as standardized benchmarks, but they also need special care, in particular around attribution and authorship. It's very important that each specific dataset inside a meta-dataset is properly referenced, and that its citation, specific homepage, etc. are clearly visible and accessible, not only the generic citation of the meta-dataset itself. Let's take GLUE as an example: the configuration already includes the citation for each dataset (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)), but it should also be copied into the dataset info so that, when people access `dataset.info.citation`, they get both the citation for GLUE and the citation for the specific dataset inside GLUE that they have loaded.
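Concretely, the behaviour being requested is that the snippet below (which mirrors the example shown in the comments above) prints both bibtex entries for a loaded GLUE subset:

```python
import nlp

cola = nlp.load_dataset("glue", "cola", split="train")
# Should include both the GLUE citation and the CoLA-specific citation,
# not only the generic meta-dataset one.
print(cola.info.citation)
```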
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/153/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/153/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/152/comments
https://api.github.com/repos/huggingface/datasets/issues/152/events
https://github.com/huggingface/datasets/pull/152
619,971,900
MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2
152
Add GLUE config name check
{'login': 'Bharat123rox', 'id': 13381361, 'node_id': 'MDQ6VXNlcjEzMzgxMzYx', 'avatar_url': 'https://avatars.githubusercontent.com/u/13381361?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Bharat123rox', 'html_url': 'https://github.com/Bharat123rox', 'followers_url': 'https://api.github.com/users/Bharat123rox/followers', 'following_url': 'https://api.github.com/users/Bharat123rox/following{/other_user}', 'gists_url': 'https://api.github.com/users/Bharat123rox/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Bharat123rox/subscriptions', 'organizations_url': 'https://api.github.com/users/Bharat123rox/orgs', 'repos_url': 'https://api.github.com/users/Bharat123rox/repos', 'events_url': 'https://api.github.com/users/Bharat123rox/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Bharat123rox/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review", "Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?", "If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the tests pass locally via: \r\n`pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_glue`", "The test fails with an `AssertionError` because the name is not being passed to kwargs, however I'm not sure how to do that, because only the config file is being passed to the tests of all datasets?\r\n\r\nI'm guessing this is the corresponding code:\r\nhttps://github.com/huggingface/nlp/blob/2b3621bb5c78caf02c5a969b8e67fa0c145da4e6/tests/test_dataset_common.py#L141-L143\r\n\r\nAnd these are the logs:\r\n```\r\n___________________ DatasetTest.test_load_dataset_local_glue ___________________\r\n\r\nself = <tests.test_dataset_common.DatasetTest testMethod=test_load_dataset_local_glue>\r\ndataset_name = 'glue'\r\n\r\n @local\r\n def test_load_dataset_local(self, dataset_name):\r\n # test only first config\r\n if \"/\" in dataset_name:\r\n logging.info(\"Skip {} because it is not a canonical dataset\")\r\n return\r\n\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:200:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_dataset_common.py:74: in check_load_dataset\r\n dataset_builder = dataset_builder_cls(config=config, cache_dir=processed_temp_dir)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <nlp.datasets.glue.fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597.glue.Glue object at 0x135c0ea90>\r\nargs = ()\r\nkwargs = {'cache_dir': '/var/folders/r6/mnw5ntvn5y72j7d4s1fm273m0000gn/T/tmpa9rpq3tl', 'config': GlueConfig(name='cola', versio...linguistic theory. 
Each example is a sequence of words annotated\\nwith whether it is a grammatical English sentence.')}\r\n\r\n def __init__(self, *args, **kwargs):\r\n> assert ('name' in kwargs and kwargs['name'] is not None), \"Glue has to be called with a configuration name\"\r\nE AssertionError: Glue has to be called with a configuration name\r\n\r\n/usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py:139: AssertionError\r\n----------------------------- Captured stderr call -----------------------------\r\nINFO:nlp.load:Checking ./datasets/glue/glue.py for additional imports.\r\nINFO:filelock:Lock 5209998288 acquired on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO:nlp.load:Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO:nlp.load:Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO:filelock:Lock 5209998288 released on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Checking ./datasets/glue/glue.py for additional imports.\r\nINFO:filelock:Lock 5196802640 acquired on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO:nlp.load:Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO:nlp.load:Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO:filelock:Lock 5196802640 released on ./datasets/glue/glue.py.lock\r\n------------------------------ Captured log call -------------------------------\r\nINFO nlp.load:load.py:157 Checking ./datasets/glue/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5209998288 acquired on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset ./datasets/glue/glue.py at 
/usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO filelock:filelock.py:318 Lock 5209998288 released on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:157 Checking ./datasets/glue/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5196802640 acquired on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO filelock:filelock.py:318 Lock 5196802640 released on ./datasets/glue/glue.py.lock\r\n```", "Closing as #130 is fixed !" ]
1,589,786,623,000
1,590,617,352,000
1,590,617,352,000
CONTRIBUTOR
Fixes #130 by adding a name check to the Glue class
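A sketch of the check this PR adds, using the assertion text that appears in the test log quoted in the comments above; the rest of the builder class is abridged here.

```python
import nlp


class Glue(nlp.GeneratorBasedBuilder):
    # Only the added check is shown; _info/_split_generators/_generate_examples are omitted.
    def __init__(self, *args, **kwargs):
        assert "name" in kwargs and kwargs["name"] is not None, \
            "Glue has to be called with a configuration name"
        super().__init__(*args, **kwargs)
```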
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/152/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/152/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/152', 'html_url': 'https://github.com/huggingface/datasets/pull/152', 'diff_url': 'https://github.com/huggingface/datasets/pull/152.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/152.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/151/comments
https://api.github.com/repos/huggingface/datasets/issues/151/events
https://github.com/huggingface/datasets/pull/151
619,968,480
MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz
151
Fix JSON tests.
{'login': 'jplu', 'id': 959590, 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jplu', 'html_url': 'https://github.com/jplu', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'repos_url': 'https://api.github.com/users/jplu/repos', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,786,258,000
1,589,786,512,000
1,589,786,511,000
CONTRIBUTOR
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/151/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/151/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/151', 'html_url': 'https://github.com/huggingface/datasets/pull/151', 'diff_url': 'https://github.com/huggingface/datasets/pull/151.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/151.patch', 'merged_at': '2020-05-18T07:21:51Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/150/comments
https://api.github.com/repos/huggingface/datasets/issues/150/events
https://github.com/huggingface/datasets/pull/150
619,809,645
MDExOlB1bGxSZXF1ZXN0NDE5MTgyODU4
150
Add WNUT 17 NER dataset
{'login': 'stefan-it', 'id': 20651387, 'node_id': 'MDQ6VXNlcjIwNjUxMzg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/20651387?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stefan-it', 'html_url': 'https://github.com/stefan-it', 'followers_url': 'https://api.github.com/users/stefan-it/followers', 'following_url': 'https://api.github.com/users/stefan-it/following{/other_user}', 'gists_url': 'https://api.github.com/users/stefan-it/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stefan-it/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stefan-it/subscriptions', 'organizations_url': 'https://api.github.com/users/stefan-it/orgs', 'repos_url': 'https://api.github.com/users/stefan-it/repos', 'events_url': 'https://api.github.com/users/stefan-it/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stefan-it/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ", "Nice !\r\n\r\nOne thing though: I saw that you copied the `dataset_info.json` (one split info), which is different from the `dataset_infos.json` (split infos of all configs) that we expect.\r\n\r\nCould you generate the `dataset_infos.json` file using this command please ?\r\n```\r\npython nlp-cli test datasets/wnut_17 --save_infos --all_configs\r\n```", "Hi @patrickvonplaten I just rebased onto latest `master` version and executed the commands. All tests passed then :)\r\n\r\n@lhoestq thanks for that hint! I've generated and added the `dataset_infos.json` and deleted `dataset_info.json`.", "Awesome ! I guess it's ready to be merged now :)" ]
1,589,753,944,000
1,590,525,479,000
1,590,525,479,000
CONTRIBUTOR
Hi, this PR adds the WNUT 17 dataset to `nlp`. > Emerging and Rare entity recognition > This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text. > > The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. More information about the dataset can be found on the [shared task page](https://noisy-text.github.io/2017/emerging-rare-entities.html). The dataset is taken from their [GitHub repository](https://github.com/leondz/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format. ## Usage The WNUT 17 dataset can then be used in `nlp` like this: ```python import nlp wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py") print(wnut_17) ``` This outputs: ```txt 'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394) 'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009) 'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287) ``` The numbers are identical to the ones in [this paper](https://www.ijcai.org/Proceedings/2019/0702.pdf) and are the same as using the `dataset` reader in Flair. ## Features The following feature format is used to represent a sentence in the WNUT 17 dataset: | Feature | Example | Description | ---- | ---- | ----------------- | `id` | `0` | Number (id) of current sentence | `tokens` | `["AHFA", "extends", "deadline"]` | List of tokens (strings) for a sentence | `labels` | `["B-group", "O", "O"]` | List of labels (outer span) The following labels are used in WNUT 17: ```txt O B-corporation I-corporation B-location I-location B-product I-product B-person I-person B-group I-group B-creative-work I-creative-work ```
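For readers curious how such a CoNLL-style file turns into the `id`/`tokens`/`labels` features above, here is an illustrative parser sketch; it is not the exact code from this PR and assumes tab-separated `token<TAB>label` lines with blank lines between sentences.

```python
def generate_examples(filepath):
    """Yield (id, example) pairs from a WNUT-style CoNLL file (illustrative only)."""
    with open(filepath, encoding="utf-8") as f:
        guid, tokens, labels = 0, [], []
        for line in f:
            line = line.rstrip("\n")
            if not line:  # a blank line closes the current sentence
                if tokens:
                    yield guid, {"id": str(guid), "tokens": tokens, "labels": labels}
                    guid, tokens, labels = guid + 1, [], []
            else:
                token, _, label = line.partition("\t")
                tokens.append(token)
                labels.append(label)
        if tokens:  # flush a trailing sentence with no final blank line
            yield guid, {"id": str(guid), "tokens": tokens, "labels": labels}
```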
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/150/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 1, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/150/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/150', 'html_url': 'https://github.com/huggingface/datasets/pull/150', 'diff_url': 'https://github.com/huggingface/datasets/pull/150.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/150.patch', 'merged_at': '2020-05-26T20:37:59Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/149/comments
https://api.github.com/repos/huggingface/datasets/issues/149/events
https://github.com/huggingface/datasets/issues/149
619,735,739
MDU6SXNzdWU2MTk3MzU3Mzk=
149
[Feature request] Add Ubuntu Dialogue Corpus dataset
{'login': 'danth', 'id': 28959268, 'node_id': 'MDQ6VXNlcjI4OTU5MjY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/28959268?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/danth', 'html_url': 'https://github.com/danth', 'followers_url': 'https://api.github.com/users/danth/followers', 'following_url': 'https://api.github.com/users/danth/following{/other_user}', 'gists_url': 'https://api.github.com/users/danth/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/danth/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/danth/subscriptions', 'organizations_url': 'https://api.github.com/users/danth/orgs', 'repos_url': 'https://api.github.com/users/danth/repos', 'events_url': 'https://api.github.com/users/danth/events{/privacy}', 'received_events_url': 'https://api.github.com/users/danth/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
closed
False
[]
[ "@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?" ]
1,589,730,159,000
1,589,821,306,000
1,589,821,306,000
NONE
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/149/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/149/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/148/comments
https://api.github.com/repos/huggingface/datasets/issues/148/events
https://github.com/huggingface/datasets/issues/148
619,590,555
MDU6SXNzdWU2MTk1OTA1NTU=
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
{'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richarddwang/followers', 'following_url': 'https://api.github.com/users/richarddwang/following{/other_user}', 'gists_url': 'https://api.github.com/users/richarddwang/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/richarddwang/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/richarddwang/subscriptions', 'organizations_url': 'https://api.github.com/users/richarddwang/orgs', 'repos_url': 'https://api.github.com/users/richarddwang/repos', 'events_url': 'https://api.github.com/users/richarddwang/events{/privacy}', 'received_events_url': 'https://api.github.com/users/richarddwang/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067388877, 'node_id': 'MDU6TGFiZWwyMDY3Mzg4ODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug', 'name': 'dataset bug', 'color': '2edb81', 'default': False, 'description': 'A bug in a dataset script provided in the library'}]
closed
False
[]
[ "Same error for dataset 'wiki40b'", "Should be fixed on master :)" ]
1,589,680,133,000
1,589,787,513,000
1,589,787,513,000
CONTRIBUTOR
# Reproduce In Colab, ``` %pip install -q nlp %pip install -q apache_beam mwparserfromhell dataset = nlp.load_dataset('wikipedia') ``` get ``` Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-52471d2a0088> in <module>() ----> 1 dataset = nlp.load_dataset('wikipedia') 1 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos' ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/148/reactions', 'total_count': 2, '+1': 2, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/148/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/147/comments
https://api.github.com/repos/huggingface/datasets/issues/147/events
https://github.com/huggingface/datasets/issues/147
619,581,907
MDU6SXNzdWU2MTk1ODE5MDc=
147
Error with sklearn train_test_split
{'login': 'ClonedOne', 'id': 6853743, 'node_id': 'MDQ6VXNlcjY4NTM3NDM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6853743?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ClonedOne', 'html_url': 'https://github.com/ClonedOne', 'followers_url': 'https://api.github.com/users/ClonedOne/followers', 'following_url': 'https://api.github.com/users/ClonedOne/following{/other_user}', 'gists_url': 'https://api.github.com/users/ClonedOne/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/ClonedOne/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ClonedOne/subscriptions', 'organizations_url': 'https://api.github.com/users/ClonedOne/orgs', 'repos_url': 'https://api.github.com/users/ClonedOne/repos', 'events_url': 'https://api.github.com/users/ClonedOne/events{/privacy}', 'received_events_url': 'https://api.github.com/users/ClonedOne/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
closed
False
[]
[ "Indeed. Probably we will want to have a similar method directly in the library", "Related: #166 " ]
1,589,675,304,000
1,592,497,403,000
1,592,497,403,000
NONE
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code: ```python data = nlp.load_dataset('imdb', cache_dir=data_cache) f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed) ``` throws: ``` ValueError: Can only get row(s) (int or slice) or columns (string). ``` It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/147/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/147/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/146/comments
https://api.github.com/repos/huggingface/datasets/issues/146/events
https://github.com/huggingface/datasets/pull/146
619,564,653
MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx
146
Add BERTScore to metrics
{'login': 'felixgwu', 'id': 7753366, 'node_id': 'MDQ6VXNlcjc3NTMzNjY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7753366?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/felixgwu', 'html_url': 'https://github.com/felixgwu', 'followers_url': 'https://api.github.com/users/felixgwu/followers', 'following_url': 'https://api.github.com/users/felixgwu/following{/other_user}', 'gists_url': 'https://api.github.com/users/felixgwu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/felixgwu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/felixgwu/subscriptions', 'organizations_url': 'https://api.github.com/users/felixgwu/orgs', 'repos_url': 'https://api.github.com/users/felixgwu/repos', 'events_url': 'https://api.github.com/users/felixgwu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/felixgwu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,666,979,000
1,589,754,130,000
1,589,754,129,000
CONTRIBUTOR
This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it. ```python import nlp bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket predictions = ['example', 'fruit'] references = [['this is an example.', 'this is one example.'], ['apple']] results = bertscore.compute(predictions, references, lang='en') print(results) ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/146/reactions', 'total_count': 3, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 3, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/146/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/146', 'html_url': 'https://github.com/huggingface/datasets/pull/146', 'diff_url': 'https://github.com/huggingface/datasets/pull/146.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/146.patch', 'merged_at': '2020-05-17T22:22:09Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/145/comments
https://api.github.com/repos/huggingface/datasets/issues/145/events
https://github.com/huggingface/datasets/pull/145
619,480,549
MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0
145
[AWS Tests] Follow-up PR from #144
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,637,226,000
1,589,637,263,000
1,589,637,262,000
MEMBER
I forgot to add this line in PR #144.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/145/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/145/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/145', 'html_url': 'https://github.com/huggingface/datasets/pull/145', 'diff_url': 'https://github.com/huggingface/datasets/pull/145.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/145.patch', 'merged_at': '2020-05-16T13:54:22Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/144/comments
https://api.github.com/repos/huggingface/datasets/issues/144/events
https://github.com/huggingface/datasets/pull/144
619,477,367
MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1
144
[AWS tests] AWS test should not run for canonical datasets
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,636,370,000
1,589,636,674,000
1,589,636,673,000
MEMBER
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following: 1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes that dataset in the tests. 2) All datasets that are only present on AWS, such as `webis/tl_dr` at the moment, are tested only on AWS. I think the testing structure might need a bigger refactoring and better documentation very soon. Merging for now to unblock new PRs @thomwolf @mariamabarham.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/144/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/144/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/144', 'html_url': 'https://github.com/huggingface/datasets/pull/144', 'diff_url': 'https://github.com/huggingface/datasets/pull/144.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/144.patch', 'merged_at': '2020-05-16T13:44:33Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/143/comments
https://api.github.com/repos/huggingface/datasets/issues/143/events
https://github.com/huggingface/datasets/issues/143
619,457,641
MDU6SXNzdWU2MTk0NTc2NDE=
143
ArrowTypeError in squad metrics
{'login': 'patil-suraj', 'id': 27137566, 'node_id': 'MDQ6VXNlcjI3MTM3NTY2', 'avatar_url': 'https://avatars.githubusercontent.com/u/27137566?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patil-suraj', 'html_url': 'https://github.com/patil-suraj', 'followers_url': 'https://api.github.com/users/patil-suraj/followers', 'following_url': 'https://api.github.com/users/patil-suraj/following{/other_user}', 'gists_url': 'https://api.github.com/users/patil-suraj/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patil-suraj/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patil-suraj/subscriptions', 'organizations_url': 'https://api.github.com/users/patil-suraj/orgs', 'repos_url': 'https://api.github.com/users/patil-suraj/repos', 'events_url': 'https://api.github.com/users/patil-suraj/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patil-suraj/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067393914, 'node_id': 'MDU6TGFiZWwyMDY3MzkzOTE0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/metric%20bug', 'name': 'metric bug', 'color': '25b21e', 'default': False, 'description': 'A bug in a metric script'}]
closed
False
[]
[ "There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```" ]
1,589,630,797,000
1,590,154,732,000
1,590,154,608,000
MEMBER
`squad_metric.compute` is giving the following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is what my predictions and references look like: ``` predictions[0] # {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ``` ``` references[0] # {'answers': [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ``` These are structured as per the `squad_metric.compute` help string.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/143/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 1}
https://api.github.com/repos/huggingface/datasets/issues/143/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/142/comments
https://api.github.com/repos/huggingface/datasets/issues/142/events
https://github.com/huggingface/datasets/pull/142
619,450,068
MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1
142
[WMT] Add all wmt
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,628,526,000
1,589,717,901,000
1,589,717,900,000
MEMBER
This PR adds all WMT dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en". Overall I think the scripts are very messy and might need a big refactoring at some point. For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/142/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/142/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/142', 'html_url': 'https://github.com/huggingface/datasets/pull/142', 'diff_url': 'https://github.com/huggingface/datasets/pull/142.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/142.patch', 'merged_at': '2020-05-17T12:18:20Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/141/comments
https://api.github.com/repos/huggingface/datasets/issues/141/events
https://github.com/huggingface/datasets/pull/141
619,447,090
MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw
141
[Clean up] remove bogus folder
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Same for the dataset_infos.json at the project root no ?", "Sorry guys, I haven't noticed. Thank you for mentioning it." ]
1,589,627,622,000
1,589,635,467,000
1,589,635,466,000
MEMBER
@mariamabarham - I think you accidentally placed it there.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/141/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/141/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/141', 'html_url': 'https://github.com/huggingface/datasets/pull/141', 'diff_url': 'https://github.com/huggingface/datasets/pull/141.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/141.patch', 'merged_at': '2020-05-16T13:24:25Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/140/comments
https://api.github.com/repos/huggingface/datasets/issues/140/events
https://github.com/huggingface/datasets/pull/140
619,443,613
MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4
140
[Tests] run local tests as default
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "You are right and I think those are usual best practice :) I'm 100% fine with this^^", "Merging this for now to unblock other PRs." ]
1,589,626,566,000
1,589,635,304,000
1,589,635,303,000
MEMBER
This PR also enables local tests by default. I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this. ## Suggestion on how to commit to the repo from now on: Now since the repo is "online", I think we should adopt a couple of best practices: 1) - No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later. 2) - Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/140/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/140/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/140', 'html_url': 'https://github.com/huggingface/datasets/pull/140', 'diff_url': 'https://github.com/huggingface/datasets/pull/140.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/140.patch', 'merged_at': '2020-05-16T13:21:43Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/139/comments
https://api.github.com/repos/huggingface/datasets/issues/139/events
https://github.com/huggingface/datasets/pull/139
619,327,409
MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy
139
Add GermEval 2014 NER dataset
{'login': 'stefan-it', 'id': 20651387, 'node_id': 'MDQ6VXNlcjIwNjUxMzg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/20651387?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stefan-it', 'html_url': 'https://github.com/stefan-it', 'followers_url': 'https://api.github.com/users/stefan-it/followers', 'following_url': 'https://api.github.com/users/stefan-it/following{/other_user}', 'gists_url': 'https://api.github.com/users/stefan-it/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stefan-it/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stefan-it/subscriptions', 'organizations_url': 'https://api.github.com/users/stefan-it/orgs', 'repos_url': 'https://api.github.com/users/stefan-it/repos', 'events_url': 'https://api.github.com/users/stefan-it/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stefan-it/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}]
[ "Had really fun playing around with this new library :heart: ", "That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ", "@patrickvonplaten Rebased it πŸ˜…\r\n\r\nHow can it test πŸ€” I used:\r\n\r\n```bash\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_local_germeval_14\r\n# and\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_local_germeval_14\r\n```\r\n\r\nand the tests still pass :)", "Perfect, if these tests pass that's great - I'll merge the PR then :-) Was it very difficult to create the dummy data structure? " ]
1,589,586,129,000
1,589,637,397,000
1,589,637,382,000
CONTRIBUTOR
Hi, this PR adds the GermEval 2014 NER dataset πŸ˜ƒ > The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties: > - The data was sampled from German Wikipedia and News Corpora as a collection of citations. > - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. > - The NER annotation uses the NoSta-D guidelines, which extend the TΓΌbingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]]. The dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data). ## Dataset format Here's an example of the dataset format from the original dataset: ```tsv # http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17] 1 Aufgrund O O 2 seiner O O 3 Initiative O O 4 fand O O 5 2001/2002 O O 6 in O O 7 Stuttgart B-LOC O 8 , O O 9 Braunschweig B-LOC O 10 und O O 11 Bonn B-LOC O 12 eine O O 13 große O O 14 und O O 15 publizistisch O O 16 vielbeachtete O O 17 Troia-Ausstellung B-LOCpart O 18 statt O O 19 , O O 20 β€ž O O 21 Troia B-OTH B-LOC 22 - I-OTH O 23 Traum I-OTH O 24 und I-OTH O 25 Wirklichkeit I-OTH O 26 β€œ O O 27 . O O ``` The sentence is encoded as one token per line (tab-separated columns). The first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence. The second column contains the token. Columns three and four contain the named entity (in IOB2 scheme). Outer spans are encoded in the third column, embedded/nested spans in the fourth column. ## Features I decided to keep most information from the dataset. That means the so-called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector. For each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned: | Feature | Example | Description | ---- | ---- | ----------------- | `id` | `0` | Number (id) of current sentence | `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string | `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence | `labels` | `["B-PER", "O", "O"]` | List of labels (outer span) | `nested-labels` | `["O", "O", "O"]` | List of labels for nested span ## Example The following command downloads the dataset from the official GermEval 2014 page and pre-processes it: ```bash python nlp-cli test datasets/germeval_14 --all_configs ``` It then outputs the numbers for the training, development and test sets. The training set consists of 24,000 sentences, the development set of 2,200 and the test set of 5,100 sentences. Now it can be imported and used with `nlp`: ```python import nlp germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py") assert len(germeval["train"]) == 24000 # Show first sentence of training set: germeval["train"][0] ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/139/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/139/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/139', 'html_url': 'https://github.com/huggingface/datasets/pull/139', 'diff_url': 'https://github.com/huggingface/datasets/pull/139.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/139.patch', 'merged_at': '2020-05-16T13:56:22Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/138/comments
https://api.github.com/repos/huggingface/datasets/issues/138/events
https://github.com/huggingface/datasets/issues/138
619,225,191
MDU6SXNzdWU2MTkyMjUxOTE=
138
Consider renaming to nld
{'login': 'honnibal', 'id': 8059750, 'node_id': 'MDQ6VXNlcjgwNTk3NTA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8059750?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/honnibal', 'html_url': 'https://github.com/honnibal', 'followers_url': 'https://api.github.com/users/honnibal/followers', 'following_url': 'https://api.github.com/users/honnibal/following{/other_user}', 'gists_url': 'https://api.github.com/users/honnibal/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/honnibal/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/honnibal/subscriptions', 'organizations_url': 'https://api.github.com/users/honnibal/orgs', 'repos_url': 'https://api.github.com/users/honnibal/repos', 'events_url': 'https://api.github.com/users/honnibal/events{/privacy}', 'received_events_url': 'https://api.github.com/users/honnibal/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067400324, 'node_id': 'MDU6TGFiZWwyMDY3NDAwMzI0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion', 'name': 'generic discussion', 'color': 'c5def5', 'default': False, 'description': 'Generic discussion on the library'}]
closed
False
[]
[ "I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n", "Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for \"NLP Datasets\" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python are going to search \"Python NLP\" and end up here, confused that this is a collection of datasets.\r\n\r\nThe names of the other huggingface libraries work because they're the only game in town: there are not very many robust, distinct libraries for `tokenizers` or `transformers` in python, for example. But there are several options for NLP in python, and adding this as a possible search result for \"python nlp\" when datasets are likely not what someone is searching for adds noise and frustrates potential users.", "I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a \"step back\" from the recent changes/renaming of transformers, but may be some middle ground between a complete rebranding, and keeping it identifiable.", "Interesting, thanks for sharing your thoughts.\r\n\r\nAs we’ll move toward a first non-beta release, we will pool the community of contributors/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)\r\n\r\nIn the meantime, using `from nlp import load_dataset, load_metric` should work πŸ˜‰", "I feel like we are conflating two distinct subjects here:\r\n\r\n1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future\r\n2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only datasets and metrics\r\n\r\n(let me know if I mischaracterize your point)\r\n\r\nI'll chime in to say that the first point is a bit silly IMO. As Python developers due to the limitations of the import system we already have to share:\r\n- a single flat namespace for packages\r\n- which also conflicts with local modules i.e. 
local files\r\n\r\nIf we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI also think all Python software developers/ML engineers/scientists are capable of at least a subset of:\r\n- importing only the methods that they need like @thomwolf suggested\r\n- aliasing their import\r\n- renaming a local variable", "By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nI see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental/big to fit in `transformers`.\r\n\r\nSome of the directions we would like to explore are about sharing, traceability and more experimental models, as well as seeing a model as the community-based process of creating a composite entity from data, optimization, and code.\r\n\r\nWe'll see how these ideas end up being implemented and we'll better know how we should define the library when we start to dive into these topics. I'll try to get the `nlp` team to draft a roadmap on these topics at some point.", "> If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the module within the scope of your function.\r\n\r\nFor instance,\r\n\r\n```python\r\n\r\nimport nlp\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n```\r\n\r\nThis is a bug: you've just overwritten the module, so now you can't use it. Or instead:\r\n\r\n```python\r\n\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n# (Later, e.g. in a notebook)\r\nimport nlp\r\n```\r\n\r\nThis is also a bug: you've overwritten your variable with an import.\r\n\r\nIf you have a module named `nlp`, you should avoid using `nlp` as a variable, or you'll have bugs in some contexts and inconsistencies in other contexts. You'll have situations where you need to import differently in one module vs another, or name variables differently in one context vs another, which is bad.\r\n\r\n> importing only the methods that they need like @thomwolf suggested\r\n\r\nOkay but the same logic applies to naming the module *literally anything else*. There's absolutely no point in having a module name that's 3 letters if you always plan to do `import from`! It would be entirely better to name it `nlp_datasets` if you don't want people to do `import nlp`.\r\n\r\nAnd finally:\r\n\r\n> By the way, nlp will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nSo...it isn't a datasets library? https://twitter.com/Thom_Wolf/status/1261282491622731781\r\n\r\nI'm confused πŸ˜• ", "Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) ", "I guess indeed", "I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). 
If this package contained every dataset in the world (NLP / vision / etc) then it would make sense =/", "I can't speak for the HF team @jramapuram, but as member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:\r\n\r\nRemember\r\nhttps://github.com/huggingface/pytorch-openai-transformer-lm\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT\r\nhttps://github.com/huggingface/pytorch-transformers\r\nand now\r\nhttps://github.com/huggingface/transformers\r\n\r\n;) \r\n\r\nJokes aside, seems that the library is growing in a multi-modal direction (https://github.com/huggingface/datasets/pull/363) so the current name is not that implausible. Possibly HF ambition is really to grow its community and bring here a large chunk of datasets of the world (including tabular / vision / audio?).", "Yea I see your point. However, wouldn't scoping solve the entire problem? \r\n\r\n```python\r\nimport huggingface.datasets as D\r\nimport huggingface.transformers as T\r\n```\r\n\r\nCalling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` ", "Sorry to reply to an old thread, but the name issue really makes troubles recently in my project.\r\n\r\nI'd never known in advance there's a package called \"datasets\". My first thought is that such a general term may be safe to arbitrarily use. Avoiding such a common name because of its ambiguity is quite weird.\r\n\r\nAs we know in python it's not easy to differentiate system-wide and project-wide import like in C and C++.\r\n\r\nOn the contrary I fully understand the challenge to rename a popular library. So it seems to provide a \"huggingface\" wrapper library as suggested above by @jramapuram may be a happy medium for both developers and users.\r\n\r\nBest Regards." ]
1,589,574,207,000
1,663,305,502,000
1,601,251,690,000
NONE
Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme. If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere. If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order. I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider. I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/138/reactions', 'total_count': 33, '+1': 33, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/138/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/136/comments
https://api.github.com/repos/huggingface/datasets/issues/136/events
https://github.com/huggingface/datasets/pull/136
619,211,018
MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4
136
Update README.md
{'login': 'renaud', 'id': 75369, 'node_id': 'MDQ6VXNlcjc1MzY5', 'avatar_url': 'https://avatars.githubusercontent.com/u/75369?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/renaud', 'html_url': 'https://github.com/renaud', 'followers_url': 'https://api.github.com/users/renaud/followers', 'following_url': 'https://api.github.com/users/renaud/following{/other_user}', 'gists_url': 'https://api.github.com/users/renaud/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/renaud/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/renaud/subscriptions', 'organizations_url': 'https://api.github.com/users/renaud/orgs', 'repos_url': 'https://api.github.com/users/renaud/repos', 'events_url': 'https://api.github.com/users/renaud/events{/privacy}', 'received_events_url': 'https://api.github.com/users/renaud/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Thanks, this was fixed with #135 :)" ]
1,589,572,867,000
1,589,717,848,000
1,589,717,848,000
NONE
small typo
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/136/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/136/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/136', 'html_url': 'https://github.com/huggingface/datasets/pull/136', 'diff_url': 'https://github.com/huggingface/datasets/pull/136.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/136.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/135/comments
https://api.github.com/repos/huggingface/datasets/issues/135/events
https://github.com/huggingface/datasets/pull/135
619,206,708
MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw
135
Fix print statement in READ.md
{'login': 'codehunk628', 'id': 51091425, 'node_id': 'MDQ6VXNlcjUxMDkxNDI1', 'avatar_url': 'https://avatars.githubusercontent.com/u/51091425?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/codehunk628', 'html_url': 'https://github.com/codehunk628', 'followers_url': 'https://api.github.com/users/codehunk628/followers', 'following_url': 'https://api.github.com/users/codehunk628/following{/other_user}', 'gists_url': 'https://api.github.com/users/codehunk628/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/codehunk628/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/codehunk628/subscriptions', 'organizations_url': 'https://api.github.com/users/codehunk628/orgs', 'repos_url': 'https://api.github.com/users/codehunk628/repos', 'events_url': 'https://api.github.com/users/codehunk628/events{/privacy}', 'received_events_url': 'https://api.github.com/users/codehunk628/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Indeed, thanks!" ]
1,589,572,343,000
1,589,717,646,000
1,589,717,645,000
CONTRIBUTOR
The print statement was printing a generator object instead of the names of the available datasets/metrics.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/135/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/135/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/135', 'html_url': 'https://github.com/huggingface/datasets/pull/135', 'diff_url': 'https://github.com/huggingface/datasets/pull/135.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/135.patch', 'merged_at': '2020-05-17T12:14:05Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/134/comments
https://api.github.com/repos/huggingface/datasets/issues/134/events
https://github.com/huggingface/datasets/pull/134
619,112,641
MDExOlB1bGxSZXF1ZXN0NDE4Njk5OTYz
134
Update README.md
{'login': 'pranv', 'id': 8753078, 'node_id': 'MDQ6VXNlcjg3NTMwNzg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8753078?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/pranv', 'html_url': 'https://github.com/pranv', 'followers_url': 'https://api.github.com/users/pranv/followers', 'following_url': 'https://api.github.com/users/pranv/following{/other_user}', 'gists_url': 'https://api.github.com/users/pranv/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/pranv/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/pranv/subscriptions', 'organizations_url': 'https://api.github.com/users/pranv/orgs', 'repos_url': 'https://api.github.com/users/pranv/repos', 'events_url': 'https://api.github.com/users/pranv/events{/privacy}', 'received_events_url': 'https://api.github.com/users/pranv/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "the readme got removed, closing this one" ]
1,589,561,774,000
1,590,654,109,000
1,590,654,109,000
NONE
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/134/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/134/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/134', 'html_url': 'https://github.com/huggingface/datasets/pull/134', 'diff_url': 'https://github.com/huggingface/datasets/pull/134.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/134.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/133/comments
https://api.github.com/repos/huggingface/datasets/issues/133/events
https://github.com/huggingface/datasets/issues/133
619,094,954
MDU6SXNzdWU2MTkwOTQ5NTQ=
133
[Question] Using/adding a local dataset
{'login': 'zphang', 'id': 1668462, 'node_id': 'MDQ6VXNlcjE2Njg0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1668462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/zphang', 'html_url': 'https://github.com/zphang', 'followers_url': 'https://api.github.com/users/zphang/followers', 'following_url': 'https://api.github.com/users/zphang/following{/other_user}', 'gists_url': 'https://api.github.com/users/zphang/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/zphang/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/zphang/subscriptions', 'organizations_url': 'https://api.github.com/users/zphang/orgs', 'repos_url': 'https://api.github.com/users/zphang/repos', 'events_url': 'https://api.github.com/users/zphang/events{/privacy}', 'received_events_url': 'https://api.github.com/users/zphang/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\r\nDoes it make sense?", "Could you give a more concrete example, please? \r\n\r\nI looked up wikitext dataset script from the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103 etc.)?\r\n\r\nOr maybe we can use DownloadManager to specify local dataset location? In that case, where do we use DownloadManager instance?\r\n\r\nThanks", "Hi @MaveriQ , although what I am doing is to commit a new dataset, but I think looking at imdb script might help.\r\nYou may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample.", "The download manager supports local directories. You can specify a local directory instead of a url and it should work.", "Closing this one.\r\nFeel free to re-open if you have other questions :)" ]
1,589,559,966,000
1,595,522,649,000
1,595,522,649,000
NONE
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. A notebook/example script demonstrating this would be very helpful.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/133/reactions', 'total_count': 6, '+1': 6, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/133/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/132/comments
https://api.github.com/repos/huggingface/datasets/issues/132/events
https://github.com/huggingface/datasets/issues/132
619,077,851
MDU6SXNzdWU2MTkwNzc4NTE=
132
[Feature Request] Add the OpenWebText dataset
{'login': 'LysandreJik', 'id': 30755778, 'node_id': 'MDQ6VXNlcjMwNzU1Nzc4', 'avatar_url': 'https://avatars.githubusercontent.com/u/30755778?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/LysandreJik', 'html_url': 'https://github.com/LysandreJik', 'followers_url': 'https://api.github.com/users/LysandreJik/followers', 'following_url': 'https://api.github.com/users/LysandreJik/following{/other_user}', 'gists_url': 'https://api.github.com/users/LysandreJik/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/LysandreJik/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/LysandreJik/subscriptions', 'organizations_url': 'https://api.github.com/users/LysandreJik/orgs', 'repos_url': 'https://api.github.com/users/LysandreJik/repos', 'events_url': 'https://api.github.com/users/LysandreJik/events{/privacy}', 'received_events_url': 'https://api.github.com/users/LysandreJik/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
closed
False
[]
[ "We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8", "Closing since it's been added in #660 " ]
1,589,558,249,000
1,602,080,568,000
1,602,080,568,000
MEMBER
The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/132/reactions', 'total_count': 2, '+1': 2, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/132/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/131/comments
https://api.github.com/repos/huggingface/datasets/issues/131/events
https://github.com/huggingface/datasets/issues/131
619,073,731
MDU6SXNzdWU2MTkwNzM3MzE=
131
[Feature request] Add Toronto BookCorpus dataset
{'login': 'jarednielsen', 'id': 4564897, 'node_id': 'MDQ6VXNlcjQ1NjQ4OTc=', 'avatar_url': 'https://avatars.githubusercontent.com/u/4564897?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jarednielsen', 'html_url': 'https://github.com/jarednielsen', 'followers_url': 'https://api.github.com/users/jarednielsen/followers', 'following_url': 'https://api.github.com/users/jarednielsen/following{/other_user}', 'gists_url': 'https://api.github.com/users/jarednielsen/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jarednielsen/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jarednielsen/subscriptions', 'organizations_url': 'https://api.github.com/users/jarednielsen/orgs', 'repos_url': 'https://api.github.com/users/jarednielsen/repos', 'events_url': 'https://api.github.com/users/jarednielsen/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jarednielsen/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
closed
False
[]
[ "As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it involves copyright problem...", "Hi, @lhoestq, just a reminder that this is solved by #248 .πŸ˜‰ " ]
1,589,557,844,000
1,593,379,651,000
1,593,379,651,000
CONTRIBUTOR
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/131/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/131/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/130/comments
https://api.github.com/repos/huggingface/datasets/issues/130/events
https://github.com/huggingface/datasets/issues/130
619,035,440
MDU6SXNzdWU2MTkwMzU0NDA=
130
Loading GLUE dataset loads CoLA by default
{'login': 'zphang', 'id': 1668462, 'node_id': 'MDQ6VXNlcjE2Njg0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1668462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/zphang', 'html_url': 'https://github.com/zphang', 'followers_url': 'https://api.github.com/users/zphang/followers', 'following_url': 'https://api.github.com/users/zphang/following{/other_user}', 'gists_url': 'https://api.github.com/users/zphang/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/zphang/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/zphang/subscriptions', 'organizations_url': 'https://api.github.com/users/zphang/orgs', 'repos_url': 'https://api.github.com/users/zphang/repos', 'events_url': 'https://api.github.com/users/zphang/events{/privacy}', 'received_events_url': 'https://api.github.com/users/zphang/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067388877, 'node_id': 'MDU6TGFiZWwyMDY3Mzg4ODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug', 'name': 'dataset bug', 'color': '2edb81', 'default': False, 'description': 'A bug in a dataset script provided in the library'}]
closed
False
[]
[ "As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info under `Glue.BUILDER_CONFIGS`", "Yes so the first config is loaded by default when no `name` is supplied but for GLUE this should probably throw an error indeed.\r\n\r\nWe can probably just add an `__init__` at the top of the `class Glue(nlp.GeneratorBasedBuilder)` in the `glue.py` script which does this check:\r\n```\r\nclass Glue(nlp.GeneratorBasedBuilder):\r\n def __init__(self, *args, **kwargs):\r\n assert 'name' in kwargs and kwargs[name] is not None, \"Glue has to be called with a configuration name\"\r\n super(Glue, self).__init__(*args, **kwargs)\r\n```", "An error is raised if the sub-dataset is not specified :)\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']\r\nExample of usage:\r\n\t`load_dataset('glue', 'cola')`\r\n```" ]
1,589,554,550,000
1,590,617,295,000
1,590,617,295,000
NONE
If I run: ```python dataset = nlp.load_dataset('glue') ``` The resultant dataset seems to be CoLA be default, without throwing any error. This is in contrast to calling: ```python metric = nlp.load_metric("glue") ``` which throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets?
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/130/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/130/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/129/comments
https://api.github.com/repos/huggingface/datasets/issues/129/events
https://github.com/huggingface/datasets/issues/129
618,997,725
MDU6SXNzdWU2MTg5OTc3MjU=
129
[Feature request] Add Google Natural Question dataset
{'login': 'elyase', 'id': 1175888, 'node_id': 'MDQ6VXNlcjExNzU4ODg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1175888?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/elyase', 'html_url': 'https://github.com/elyase', 'followers_url': 'https://api.github.com/users/elyase/followers', 'following_url': 'https://api.github.com/users/elyase/following{/other_user}', 'gists_url': 'https://api.github.com/users/elyase/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/elyase/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/elyase/subscriptions', 'organizations_url': 'https://api.github.com/users/elyase/orgs', 'repos_url': 'https://api.github.com/users/elyase/repos', 'events_url': 'https://api.github.com/users/elyase/events{/privacy}', 'received_events_url': 'https://api.github.com/users/elyase/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
closed
False
[]
[ "Indeed, I think this one is almost ready cc @lhoestq ", "I'm doing the latest adjustments to make the processing of the dataset run on Dataflow", "Is there an update to this? It will be very beneficial for the QA community!", "Still work in progress :)\r\nThe idea is to have the dataset already processed somewhere so that the user only have to download the processed files. I'm also doing it for wikipedia.", "Super appreciate your hard work !!\r\nI'll cross my fingers and hope easily loadable wikipedia dataset will come soon. ", "Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in python to convert it to our Apache Arrow format yet.\r\nHowever we had planned to change this conversion step anyways so we'll make just sure that it enables to process and convert the NQ dataset to arrow.", "NQ was added in #427 πŸŽ‰" ]
1,589,552,060,000
1,595,510,489,000
1,595,510,489,000
NONE
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/129/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/129/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/128/comments
https://api.github.com/repos/huggingface/datasets/issues/128/events
https://github.com/huggingface/datasets/issues/128
618,951,117
MDU6SXNzdWU2MTg5NTExMTc=
128
Some error inside nlp.load_dataset()
{'login': 'polkaYK', 'id': 18486287, 'node_id': 'MDQ6VXNlcjE4NDg2Mjg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/18486287?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polkaYK', 'html_url': 'https://github.com/polkaYK', 'followers_url': 'https://api.github.com/users/polkaYK/followers', 'following_url': 'https://api.github.com/users/polkaYK/following{/other_user}', 'gists_url': 'https://api.github.com/users/polkaYK/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polkaYK/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polkaYK/subscriptions', 'organizations_url': 'https://api.github.com/users/polkaYK/orgs', 'repos_url': 'https://api.github.com/users/polkaYK/repos', 'events_url': 'https://api.github.com/users/polkaYK/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polkaYK/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.", "Thanks for reply, worked fine!\r\n" ]
1,589,547,689,000
1,589,548,240,000
1,589,548,240,000
NONE
First of all, nice work! I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb) In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-d848d3a99b8c> in <module>() 1 # Downloading and loading a dataset 2 ----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]') 8 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 414 try: 415 # Prepare split will record examples associated to the split --> 416 self._prepare_split(split_generator, **prepare_split_kwargs) 417 except OSError: 418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 585 fname = "{}-{}.arrow".format(self.name, split_generator.name) 586 fpath = os.path.join(self._cache_dir, fname) --> 587 examples_type = self.info.features.type 588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size) 589 /usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self) 460 @property 461 def type(self): --> 462 return get_nested_type(self) 463 464 @classmethod /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 /usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0) 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 TypeError: list_() takes exactly one argument (2 given) ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/128/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/128/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/127/comments
https://api.github.com/repos/huggingface/datasets/issues/127/events
https://github.com/huggingface/datasets/pull/127
618,909,042
MDExOlB1bGxSZXF1ZXN0NDE4NTQ1MDcy
127
Update Overview.ipynb
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,543,208,000
1,589,543,247,000
1,589,543,245,000
MEMBER
update notebook
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/127/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/127/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/127', 'html_url': 'https://github.com/huggingface/datasets/pull/127', 'diff_url': 'https://github.com/huggingface/datasets/pull/127.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/127.patch', 'merged_at': '2020-05-15T11:47:25Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/126/comments
https://api.github.com/repos/huggingface/datasets/issues/126/events
https://github.com/huggingface/datasets/pull/126
618,897,499
MDExOlB1bGxSZXF1ZXN0NDE4NTM1Mzc5
126
remove webis
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,541,920,000
1,589,542,284,000
1,589,542,226,000
MEMBER
Remove webis from dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/126/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/126/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/126', 'html_url': 'https://github.com/huggingface/datasets/pull/126', 'diff_url': 'https://github.com/huggingface/datasets/pull/126.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/126.patch', 'merged_at': '2020-05-15T11:30:26Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/125/comments
https://api.github.com/repos/huggingface/datasets/issues/125/events
https://github.com/huggingface/datasets/pull/125
618,869,048
MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0
125
[Newsroom] add newsroom
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,538,874,000
1,589,539,027,000
1,589,539,022,000
MEMBER
I checked it with the data link of the mail you forwarded @thomwolf => works well!
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/125/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/125/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/125', 'html_url': 'https://github.com/huggingface/datasets/pull/125', 'diff_url': 'https://github.com/huggingface/datasets/pull/125.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/125.patch', 'merged_at': '2020-05-15T10:37:02Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/124/comments
https://api.github.com/repos/huggingface/datasets/issues/124/events
https://github.com/huggingface/datasets/pull/124
618,864,284
MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx
124
Xsum, require manual download of some files
{'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/mariamabarham/followers', 'following_url': 'https://api.github.com/users/mariamabarham/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariamabarham/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariamabarham/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariamabarham/subscriptions', 'organizations_url': 'https://api.github.com/users/mariamabarham/orgs', 'repos_url': 'https://api.github.com/users/mariamabarham/repos', 'events_url': 'https://api.github.com/users/mariamabarham/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariamabarham/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,538,373,000
1,589,540,688,000
1,589,540,686,000
CONTRIBUTOR
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/124/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/124/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/124', 'html_url': 'https://github.com/huggingface/datasets/pull/124', 'diff_url': 'https://github.com/huggingface/datasets/pull/124.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/124.patch', 'merged_at': '2020-05-15T11:04:46Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/123/comments
https://api.github.com/repos/huggingface/datasets/issues/123/events
https://github.com/huggingface/datasets/pull/123
618,820,140
MDExOlB1bGxSZXF1ZXN0NDE4NDcxODU5
123
[Tests] Local => aws
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n\r\nNote: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.", "> For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> \r\n> Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n\r\nDoes it have to download the whole data to check if the checksums are correct? I guess so no? ", "> > For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> > Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n> \r\n> Does it have to download the whole data to check if the checksums are correct? I guess so no?\r\n\r\nYes it has to download them all (unless they were already downloaded in which case it just uses the cached downloaded files)." ]
1,589,533,945,000
1,589,537,172,000
1,589,537,006,000
MEMBER
## Change default Test from local => aws As a default we set` aws=True`, `Local=False`, `slow=False` ### 1. RUN_AWS=1 (default) This runs 4 tests per dataset script. a) Does the dataset script have a valid etag / Can it be reached on AWS? b) Can we load its `builder_class`? c) Can we load **all** dataset configs? d) _Most importantly_: Can we load the dataset? Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s. ### 2. RUN_LOCAL=1 RUN_AWS=0 ***This should be done when debugging dataset scripts of the ./datasets folder*** This only runs 1 test per dataset test, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory? ### 3. RUN_SLOW=1 We should set up to run these tests maybe 1 time per week ? @thomwolf The `slow` tests include two more important tests. e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work. f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script?
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/123/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/123/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/123', 'html_url': 'https://github.com/huggingface/datasets/pull/123', 'diff_url': 'https://github.com/huggingface/datasets/pull/123.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/123.patch', 'merged_at': '2020-05-15T10:03:26Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/122/comments
https://api.github.com/repos/huggingface/datasets/issues/122/events
https://github.com/huggingface/datasets/pull/122
618,813,182
MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3
122
Final cleanup of readme and metrics
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,533,252,000
1,630,698,009,000
1,589,533,342,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/122/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/122/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/122', 'html_url': 'https://github.com/huggingface/datasets/pull/122', 'diff_url': 'https://github.com/huggingface/datasets/pull/122.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/122.patch', 'merged_at': '2020-05-15T09:02:22Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/121/comments
https://api.github.com/repos/huggingface/datasets/issues/121/events
https://github.com/huggingface/datasets/pull/121
618,790,040
MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx
121
make style
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,531,016,000
1,589,531,139,000
1,589,531,138,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/121/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/121/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/121', 'html_url': 'https://github.com/huggingface/datasets/pull/121', 'diff_url': 'https://github.com/huggingface/datasets/pull/121.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/121.patch', 'merged_at': '2020-05-15T08:25:38Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/120/comments
https://api.github.com/repos/huggingface/datasets/issues/120/events
https://github.com/huggingface/datasets/issues/120
618,737,783
MDU6SXNzdWU2MTg3Mzc3ODM=
120
🐛 `map` not working
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "I didn't assign the output πŸ€¦β€β™‚οΈ\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```" ]
1,589,524,988,000
1,589,526,158,000
1,589,526,158,000
NONE
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing) ```python import nlp dataset = nlp.load_dataset('squad', split='validation[:10%]') def test(sample): sample['title'] = "test prefix @@@ " + sample["title"] return sample print(dataset[0]['title']) dataset.map(test) print(dataset[0]['title']) ``` Output : > Super_Bowl_50 Super_Bowl_50 Expected output : > Super_Bowl_50 test prefix @@@ Super_Bowl_50
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/120/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/120/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/119/comments
https://api.github.com/repos/huggingface/datasets/issues/119/events
https://github.com/huggingface/datasets/issues/119
618,652,145
MDU6SXNzdWU2MTg2NTIxNDU=
119
🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: None\r\nAuthor-email: None\r\nLicense: Apache License, Version 2.0\r\nLocation: /usr/local/lib/python3.6/dist-packages\r\nRequires: numpy\r\nRequired-by: nlp, feather-format\r\n> \r\n> version = 0.14.1", "Ok I just had to restart the runtime after installing `nlp`. After restarting, the version of `pyarrow` is fine." ]
1,589,509,646,000
1,589,519,482,000
1,589,510,728,000
NONE
I'm trying to load CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I meet this error : > AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/119/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/119/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/118/comments
https://api.github.com/repos/huggingface/datasets/issues/118/events
https://github.com/huggingface/datasets/issues/118
618,643,088
MDU6SXNzdWU2MTg2NDMwODg=
118
❓ How to apply a map to all subsets ?
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "That's the way!" ]
1,589,507,932,000
1,589,526,349,000
1,589,526,265,000
NONE
I'm working with CNN/DM dataset, where I have 3 subsets : `train`, `test`, `validation`. Should I apply my map function on the subsets one by one ? ```python import nlp cnn_dm = nlp.load_dataset('cnn_dailymail') for corpus in ['train', 'test', 'validation']: cnn_dm[corpus] = cnn_dm[corpus].map(my_func) ``` Or is there a better way to do this ?
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/118/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/118/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/117/comments
https://api.github.com/repos/huggingface/datasets/issues/117/events
https://github.com/huggingface/datasets/issues/117
618,632,573
MDU6SXNzdWU2MTg2MzI1NzM=
117
❓ How to remove specific rows of a dataset ?
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Hi, you can't do that at the moment.", "Can you do it by now? Coz it would be awfully helpful!", "you can convert dataset object to pandas and remove a feature and convert back to dataset .", "That's what I ended up doing too. but it feels like a workaround to a feature that should be added to the datasets class." ]
1,589,505,906,000
1,657,874,204,000
1,589,526,272,000
NONE
I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column : ```python dataset.drop('id') ``` But I didn't find how to remove a specific row. **For example, how can I remove all sample with `id` < 10 ?**
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/117/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/117/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/116/comments
https://api.github.com/repos/huggingface/datasets/issues/116/events
https://github.com/huggingface/datasets/issues/116
618,628,264
MDU6SXNzdWU2MTg2MjgyNjQ=
116
🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067393914, 'node_id': 'MDU6TGFiZWwyMDY3MzkzOTE0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/metric%20bug', 'name': 'metric bug', 'color': '25b21e', 'default': False, 'description': 'A bug in a metric script'}]
closed
False
[]
[ "Can you share your data files or a minimally reproducible example?", "Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56", "This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction/reference and `add_batch` for a batch of predictions/references. This would make it more coherent with the way we use Arrow.\r\n\r\nLet me do this change", "Thanks for noticing though. I was mainly used to do `.compute` directly ^^", "Thanks @lhoestq it works :)" ]
1,589,505,126,000
1,590,709,387,000
1,590,709,387,000
NONE
I'm trying to use rouge metric. I have to files : `test.pred.tokenized` and `test.gold.tokenized` with each line containing a sentence. I tried : ```python import nlp rouge = nlp.load_metric('rouge') with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g: for lp, lg in zip(p, g): rouge.add(lp, lg) ``` But I meet following error : > pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 --- Full stack-trace : ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add self.writer.write_batch(batch) File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 ``` (`nlp` installed from source)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/116/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/116/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/115/comments
https://api.github.com/repos/huggingface/datasets/issues/115/events
https://github.com/huggingface/datasets/issues/115
618,615,855
MDU6SXNzdWU2MTg2MTU4NTU=
115
AttributeError: 'dict' object has no attribute 'info'
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}]
[ "I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\"].info == cnn_dm[\"validation\"].info)\r\n# True\r\n```\r\n\r\nIs it expected ?", "Good point @Colanim ! What happens under the hood when running:\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\n```\r\n\r\nis that for every split in `cnn_dailymail`, a different dataset object (which all holds the same info) is created. This has the advantages that the datasets are easily separable in a training setup. \r\nAlso note that you can load e.g. only the `train` split of the dataset via:\r\n\r\n```python\r\ncnn_dm_train = nlp.load_dataset('cnn_dailymail', split=\"train\")\r\nprint(cnn_dm_train.info)\r\n```\r\n\r\nI think we should make the `info` object slightly different when creating the dataset for each split - at the moment it contains for example the variable `splits` which should maybe be renamed to `split` and contain only one `SplitInfo` object ...\r\n" ]
1,589,502,587,000
1,589,721,060,000
1,589,721,060,000
NONE
I'm trying to access the information of CNN/DM dataset : ```python cnn_dm = nlp.load_dataset('cnn_dailymail') print(cnn_dm.info) ``` returns : > AttributeError: 'dict' object has no attribute 'info'
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/115/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/115/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/114/comments
https://api.github.com/repos/huggingface/datasets/issues/114/events
https://github.com/huggingface/datasets/issues/114
618,611,310
MDU6SXNzdWU2MTg2MTEzMTA=
114
Couldn't reach CNN/DM dataset
{'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers', 'following_url': 'https://api.github.com/users/astariul/following{/other_user}', 'gists_url': 'https://api.github.com/users/astariul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/astariul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/astariul/subscriptions', 'organizations_url': 'https://api.github.com/users/astariul/orgs', 'repos_url': 'https://api.github.com/users/astariul/repos', 'events_url': 'https://api.github.com/users/astariul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/astariul/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Installing from source (instead of Pypi package) solved the problem." ]
1,589,501,777,000
1,589,501,992,000
1,589,501,991,000
NONE
I can't get CNN / DailyMail dataset. ```python import nlp assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()] cnn_dm = nlp.load_dataset('cnn_dailymail') ``` [Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives following error : ``` ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/114/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/114/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/113/comments
https://api.github.com/repos/huggingface/datasets/issues/113/events
https://github.com/huggingface/datasets/pull/113
618,590,562
MDExOlB1bGxSZXF1ZXN0NDE4MjkxNjIx
113
Adding docstrings and some doc
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,498,081,000
1,589,498,565,000
1,589,498,564,000
MEMBER
Some doc
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/113/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/113/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/113', 'html_url': 'https://github.com/huggingface/datasets/pull/113', 'diff_url': 'https://github.com/huggingface/datasets/pull/113.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/113.patch', 'merged_at': '2020-05-14T23:22:44Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/112/comments
https://api.github.com/repos/huggingface/datasets/issues/112/events
https://github.com/huggingface/datasets/pull/112
618,569,195
MDExOlB1bGxSZXF1ZXN0NDE4Mjc0MTU4
112
Qa4mre - add dataset
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,494,671,000
1,589,534,203,000
1,589,534,202,000
MEMBER
Added dummy data test only for the first config. Will do the rest later. I had to add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look @thomwolf ?
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/112/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/112/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/112', 'html_url': 'https://github.com/huggingface/datasets/pull/112', 'diff_url': 'https://github.com/huggingface/datasets/pull/112.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/112.patch', 'merged_at': '2020-05-15T09:16:42Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/111/comments
https://api.github.com/repos/huggingface/datasets/issues/111/events
https://github.com/huggingface/datasets/pull/111
618,528,060
MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy
111
[Clean-up] remove under construction datasets
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,489,533,000
1,589,489,543,000
1,589,489,542,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/111/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/111/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/111', 'html_url': 'https://github.com/huggingface/datasets/pull/111', 'diff_url': 'https://github.com/huggingface/datasets/pull/111.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/111.patch', 'merged_at': '2020-05-14T20:52:22Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/110/comments
https://api.github.com/repos/huggingface/datasets/issues/110/events
https://github.com/huggingface/datasets/pull/110
618,520,325
MDExOlB1bGxSZXF1ZXN0NDE4MjMzODIy
110
fix reddit tifu dummy data
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,488,657,000
1,589,488,814,000
1,589,488,813,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/110/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/110/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/110', 'html_url': 'https://github.com/huggingface/datasets/pull/110', 'diff_url': 'https://github.com/huggingface/datasets/pull/110.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/110.patch', 'merged_at': '2020-05-14T20:40:13Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/109/comments
https://api.github.com/repos/huggingface/datasets/issues/109/events
https://github.com/huggingface/datasets/pull/109
618,508,359
MDExOlB1bGxSZXF1ZXN0NDE4MjI0MDYw
109
[Reclor] fix reclor
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,487,386,000
1,589,487,549,000
1,589,487,548,000
MEMBER
- That's probably on me. Could have made the manual data test more flexible. @mariamabarham
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/109/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/109/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/109', 'html_url': 'https://github.com/huggingface/datasets/pull/109', 'diff_url': 'https://github.com/huggingface/datasets/pull/109.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/109.patch', 'merged_at': '2020-05-14T20:19:08Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/108/comments
https://api.github.com/repos/huggingface/datasets/issues/108/events
https://github.com/huggingface/datasets/pull/108
618,386,394
MDExOlB1bGxSZXF1ZXN0NDE4MTIzMzc3
108
convert can use manual dir as second argument
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,475,152,000
1,589,475,163,000
1,589,475,162,000
MEMBER
@mariamabarham
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/108/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/108/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/108', 'html_url': 'https://github.com/huggingface/datasets/pull/108', 'diff_url': 'https://github.com/huggingface/datasets/pull/108.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/108.patch', 'merged_at': '2020-05-14T16:52:42Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/107/comments
https://api.github.com/repos/huggingface/datasets/issues/107/events
https://github.com/huggingface/datasets/pull/107
618,373,045
MDExOlB1bGxSZXF1ZXN0NDE4MTEyNzcx
107
add writer_batch_size to GeneratorBasedBuilder
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Awesome that's great!" ]
1,589,474,139,000
1,589,475,030,000
1,589,475,029,000
MEMBER
You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`
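A minimal sketch of how this option might be used, going only by the description above; the dataset name and the batch size value are placeholders, not taken from the PR:

```python
# Sketch only: assumes a version of `nlp` that includes this PR.
import nlp

# Per the description, `writer_batch_size` can be passed directly to
# `load_dataset`, which forwards it to the builder. "squad" and 1000 are
# arbitrary illustration values.
dataset = nlp.load_dataset("squad", writer_batch_size=1000)
```

A smaller value presumably trades write speed for lower memory usage while the Arrow files are being generated.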
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/107/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/107/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/107', 'html_url': 'https://github.com/huggingface/datasets/pull/107', 'diff_url': 'https://github.com/huggingface/datasets/pull/107.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/107.patch', 'merged_at': '2020-05-14T16:50:29Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/106/comments
https://api.github.com/repos/huggingface/datasets/issues/106/events
https://github.com/huggingface/datasets/pull/106
618,361,418
MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3
106
Add data dir test command
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Nice - I think we can merge this. I will update the checksums for `wikihow` then as well" ]
1,589,473,119,000
1,589,474,951,000
1,589,474,950,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/106/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/106/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/106', 'html_url': 'https://github.com/huggingface/datasets/pull/106', 'diff_url': 'https://github.com/huggingface/datasets/pull/106.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/106.patch', 'merged_at': '2020-05-14T16:49:10Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/105/comments
https://api.github.com/repos/huggingface/datasets/issues/105/events
https://github.com/huggingface/datasets/pull/105
618,345,191
MDExOlB1bGxSZXF1ZXN0NDE4MDg5Njgz
105
[New structure on AWS] Adapt paths
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,471,757,000
1,589,471,788,000
1,589,471,787,000
MEMBER
Some small changes so that we have the correct paths. @julien-c
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/105/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/105/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/105', 'html_url': 'https://github.com/huggingface/datasets/pull/105', 'diff_url': 'https://github.com/huggingface/datasets/pull/105.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/105.patch', 'merged_at': '2020-05-14T15:56:27Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/104/comments
https://api.github.com/repos/huggingface/datasets/issues/104/events
https://github.com/huggingface/datasets/pull/104
618,277,081
MDExOlB1bGxSZXF1ZXN0NDE4MDMzOTY0
104
Add trivia_q
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,466,439,000
1,594,532,060,000
1,589,487,812,000
MEMBER
Currently tested only for one config to pass tests. More dummy data needs to be added later.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/104/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/104/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/104', 'html_url': 'https://github.com/huggingface/datasets/pull/104', 'diff_url': 'https://github.com/huggingface/datasets/pull/104.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/104.patch', 'merged_at': '2020-05-14T20:23:32Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/103/comments
https://api.github.com/repos/huggingface/datasets/issues/103/events
https://github.com/huggingface/datasets/pull/103
618,233,637
MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy
103
[Manual downloads] add logic proposal for manual downloads and add wikihow
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> \r\n> The dataset can then be loaded via:\r\n> \r\n> ```python\r\n> import nlp\r\n> nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> ```\r\n> \r\n> I added/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\nwouldn't be nicer if we can have `manual_dir/wikihow`? ", "> > Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> > The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> > The dataset can then be loaded via:\r\n> > ```python\r\n> > import nlp\r\n> > nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> > ```\r\n> > \r\n> > \r\n> > I added/changed so that there are explicit error messages when using manually downloaded files.\r\n> \r\n> wouldn't be nicer if we can have `manual_dir/wikihow`?\r\n\r\nSure, I mean the user can decide whatever he likes best :-) The path one puts in `data_dir` will be used as the path to the manual dir. `nlp.load_dataset(\"wikihow\", data_dir=\"~/manual_dir/wikihow\")` would work as well as any other path ;-) ", "Perfect! You can merge!" ]
1,589,463,036,000
1,589,466,461,000
1,589,466,460,000
MEMBER
Wikihow is an example that requires manually downloading two files, as stated in https://github.com/mahnazkoupaee/WikiHow-Dataset.

The user can then store these files under hard-coded names (`wikihowAll.csv` and `wikihowSep.csv` in this case) in a directory of their choice, e.g. `~/wikihow/manual_dir`.

The dataset can then be loaded via:

```python
import nlp

nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir")
```

I added/changed the code so that there are explicit error messages when using manually downloaded files.
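A minimal sketch of the kind of explicit check described above; the helper name and the error wording are assumptions for illustration, not the actual code added in this PR:

```python
import os

def check_manual_files(manual_dir):
    """Hypothetical helper: verify the manually downloaded wikihow files exist."""
    # `wikihowAll.csv` and `wikihowSep.csv` are the hard-coded names from the
    # PR description; the error text below is illustrative only.
    for fname in ("wikihowAll.csv", "wikihowSep.csv"):
        path = os.path.join(os.path.expanduser(manual_dir), fname)
        if not os.path.isfile(path):
            raise FileNotFoundError(
                f"{path} not found. Please download it manually from "
                "https://github.com/mahnazkoupaee/WikiHow-Dataset and pass the "
                "directory that contains it via `data_dir`."
            )
```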
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/103/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/103/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/103', 'html_url': 'https://github.com/huggingface/datasets/pull/103', 'diff_url': 'https://github.com/huggingface/datasets/pull/103.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/103.patch', 'merged_at': '2020-05-14T14:27:40Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/102/comments
https://api.github.com/repos/huggingface/datasets/issues/102/events
https://github.com/huggingface/datasets/pull/102
618,231,216
MDExOlB1bGxSZXF1ZXN0NDE3OTk3MDQz
102
Run save infos
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Haha that cornell dialogue dataset - that ran for 3h on my computer as well. The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ", "Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples/s]```" ]
1,589,462,846,000
1,589,470,984,000
1,589,470,983,000
MEMBER
I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the Cornell dialog one.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/102/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/102/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/102', 'html_url': 'https://github.com/huggingface/datasets/pull/102', 'diff_url': 'https://github.com/huggingface/datasets/pull/102.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/102.patch', 'merged_at': '2020-05-14T15:43:03Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/101/comments
https://api.github.com/repos/huggingface/datasets/issues/101/events
https://github.com/huggingface/datasets/pull/101
618,111,651
MDExOlB1bGxSZXF1ZXN0NDE3ODk5OTQ2
101
[Reddit] add reddit
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,451,902,000
1,589,452,045,000
1,589,452,044,000
MEMBER
- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/101/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/101/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/101', 'html_url': 'https://github.com/huggingface/datasets/pull/101', 'diff_url': 'https://github.com/huggingface/datasets/pull/101.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/101.patch', 'merged_at': '2020-05-14T10:27:24Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/100/comments
https://api.github.com/repos/huggingface/datasets/issues/100/events
https://github.com/huggingface/datasets/pull/100
618,081,602
MDExOlB1bGxSZXF1ZXN0NDE3ODc1MjE2
100
Add per type scores in seqeval metric
{'login': 'jplu', 'id': 959590, 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jplu', 'html_url': 'https://github.com/jplu', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'repos_url': 'https://api.github.com/users/jplu/repos', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "LGTM :-) Some small suggestions to shorten the code a bit :-) ", "Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)", "@thom Is-it what you meant?", "Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION" ]
1,589,449,072,000
1,589,498,495,000
1,589,498,494,000
CONTRIBUTOR
This PR adds a bit more detail to the seqeval metric. Now the usage and output are:

```python
import nlp

met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
met.compute(predictions, references)
# Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
```

It is also possible to compute scores for non-IOB notations; POS tagging, for example, does not use this kind of notation. Add the `suffix` parameter:

```python
import nlp

met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
met.compute(predictions, references, metrics_kwargs={"suffix": True})
# Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9}
```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/100/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/100/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/100', 'html_url': 'https://github.com/huggingface/datasets/pull/100', 'diff_url': 'https://github.com/huggingface/datasets/pull/100.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/100.patch', 'merged_at': '2020-05-14T23:21:34Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/99
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/99/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/99/comments
https://api.github.com/repos/huggingface/datasets/issues/99/events
https://github.com/huggingface/datasets/pull/99
618,026,700
MDExOlB1bGxSZXF1ZXN0NDE3ODMxNjky
99
[Cmrc 2018] fix cmrc2018
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,444,523,000
1,589,446,182,000
1,589,446,181,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/99/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/99/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/99', 'html_url': 'https://github.com/huggingface/datasets/pull/99', 'diff_url': 'https://github.com/huggingface/datasets/pull/99.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/99.patch', 'merged_at': '2020-05-14T08:49:41Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/98
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/98/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/98/comments
https://api.github.com/repos/huggingface/datasets/issues/98/events
https://github.com/huggingface/datasets/pull/98
617,957,739
MDExOlB1bGxSZXF1ZXN0NDE3Nzc3NDcy
98
Webis tl-dr
{'login': 'jplu', 'id': 959590, 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jplu', 'html_url': 'https://github.com/jplu', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'repos_url': 'https://api.github.com/users/jplu/repos', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?", "> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!", "@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ", "There is dummy_data here, no ?", "Yeah I think naming it webis/tl_dr would be best @jplu if that works for you", "No problem at all!! On it^^", "> There is dummy_data here, no ?\r\n\r\nSome paths were wrong - the structure is really confusing and the error messages don't really help either - I have to think about how to make this easier to understand!\r\n\r\nHope it was ok that I fiddled with your PR !", "> Some paths were wrong - the structure is really confusing and the error message don't really help either - I have to think about how to make this easier to understand!\r\n\r\nOh ok! I haven't noticed that sorry :(\r\n\r\n> Hope it was ok that I fiddled with your PR !\r\n\r\nOf course it was ok :)", "@julien-c Looks like what you have in mind?\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"datasets/webis\", \"tl_dr\")\r\n\r\n#Output: Downloading and preparing dataset webis/tl_dr (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/webis/tl_dr/1.0.0...\r\n```", "Merging this for now. Maybe we can see whether to rename it in a different PR @julien-c ? \r\n", "Hi, \r\nAuthor here of the webis-tldr corpus. Any plans on integrating this dataset into the hub? I remember we could access it in the previous versions of the library. If there is a particular issue that I can help with, do let me know.\r\n\r\nThanks!", "Hi @shahbazsyed, this dataset _is_ inside the hub but it's namespaced by the organization name `webis`.\r\n\r\nYou can load it following the steps described in https://huggingface.co/datasets/webis/tl_dr\r\n\r\nHere's a Colab showcasing that it works: https://colab.research.google.com/drive/11IrzRVpnMLJZ8_UFFHLR8FhiajjAHRUU?usp=sharing\r\n\r\nThe reason the code is in S3 and not in this repo is that the dataset is namespaced under the `webis` organization. We don't have a lot of namespaced datasets yet but this should become the main way we add more datasets in the future.\r\nLet us know if that's an issue for you. Thank you!" ]
1,589,437,338,000
1,599,127,221,000
1,589,489,656,000
CONTRIBUTOR
Add the Webis TL;DR dataset.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/98/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/98/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/98', 'html_url': 'https://github.com/huggingface/datasets/pull/98', 'diff_url': 'https://github.com/huggingface/datasets/pull/98.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/98.patch', 'merged_at': '2020-05-14T20:54:15Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/97
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/97/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/97/comments
https://api.github.com/repos/huggingface/datasets/issues/97/events
https://github.com/huggingface/datasets/pull/97
617,809,431
MDExOlB1bGxSZXF1ZXN0NDE3NjU4MDcy
97
[Csv] add tests for csv dataset script
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "@thomwolf - can you check and merge if ok? " ]
1,589,411,171,000
1,589,412,196,000
1,589,412,195,000
MEMBER
Adds dummy data tests for csv.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/97/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/97/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/97', 'html_url': 'https://github.com/huggingface/datasets/pull/97', 'diff_url': 'https://github.com/huggingface/datasets/pull/97.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/97.patch', 'merged_at': '2020-05-13T23:23:15Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/96
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/96/comments
https://api.github.com/repos/huggingface/datasets/issues/96/events
https://github.com/huggingface/datasets/pull/96
617,739,521
MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4
96
lm1b
{'login': 'jplu', 'id': 959590, 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jplu', 'html_url': 'https://github.com/jplu', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'repos_url': 'https://api.github.com/users/jplu/repos', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..." ]
1,589,402,324,000
1,589,465,610,000
1,589,465,609,000
CONTRIBUTOR
Add lm1b dataset.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/96/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/96/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/96', 'html_url': 'https://github.com/huggingface/datasets/pull/96', 'diff_url': 'https://github.com/huggingface/datasets/pull/96.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/96.patch', 'merged_at': '2020-05-14T14:13:29Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/95
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/95/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/95/comments
https://api.github.com/repos/huggingface/datasets/issues/95/events
https://github.com/huggingface/datasets/pull/95
617,703,037
MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4
95
Replace checksums files by Dataset infos json
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Great! LGTM :-) ", "> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? " ]
1,589,398,576,000
1,589,446,723,000
1,589,446,722,000
MEMBER
### Better verifications when loading a dataset

I replaced the `urls_checksums` directory, which used to contain `checksums.txt` and `cached_sizes.txt`, with a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`.

It simplifies and improves how verifications of checksums and split sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, having access to `DatasetInfo` up front makes it possible to check disk space before running `download_and_prepare` for a given config.

The dataset infos JSON file is human readable; you can take a look at the squad one that I generated in this PR.

### Renaming

In line with these changes, I did some renaming:

`save_checksums` -> `save_infos`
`ignore_checksums` -> `ignore_verifications`

For example, when you are creating a dataset you have to run

```nlp-cli test path/to/my/dataset --save_infos --all_configs```

instead of

```nlp-cli test path/to/my/dataset --save_checksums --all_configs```

### And now, the fun part

We'll have to rerun `nlp-cli test ... --save_infos --all_configs` for all the datasets.

-----------------

Feedback appreciated!
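A minimal sketch of inspecting such a `dataset_infos.json` file, assuming it is plain JSON mapping config names to serialized `DatasetInfo` dicts; the path and the field names below are assumptions based on the description above, not confirmed by the PR:

```python
import json

# Hypothetical path: the file is expected to sit next to the dataset script.
with open("datasets/squad/dataset_infos.json") as f:
    infos = json.load(f)

for config_name, info in infos.items():
    # "download_size" and "splits" are assumed field names of the serialized
    # DatasetInfo; adjust to whatever the generated file actually contains.
    print(config_name, info.get("download_size"), list(info.get("splits", {})))
```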
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/95/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/95/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/95', 'html_url': 'https://github.com/huggingface/datasets/pull/95', 'diff_url': 'https://github.com/huggingface/datasets/pull/95.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/95.patch', 'merged_at': '2020-05-14T08:58:42Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/94
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/94/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/94/comments
https://api.github.com/repos/huggingface/datasets/issues/94/events
https://github.com/huggingface/datasets/pull/94
617,571,340
MDExOlB1bGxSZXF1ZXN0NDE3NDYyMTIw
94
Librispeech
{'login': 'jplu', 'id': 959590, 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jplu', 'html_url': 'https://github.com/jplu', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'repos_url': 'https://api.github.com/users/jplu/repos', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "@jplu - I changed this weird archieve - iter method to something simpler. It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D " ]
1,589,385,854,000
1,589,405,343,000
1,589,405,342,000
CONTRIBUTOR
Add librispeech dataset and remove some useless content.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/94/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/94/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/94', 'html_url': 'https://github.com/huggingface/datasets/pull/94', 'diff_url': 'https://github.com/huggingface/datasets/pull/94.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/94.patch', 'merged_at': '2020-05-13T21:29:02Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/93
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/93/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/93/comments
https://api.github.com/repos/huggingface/datasets/issues/93/events
https://github.com/huggingface/datasets/pull/93
617,522,029
MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy
93
Cleanup notebooks and various fixes
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,381,938,000
1,589,382,108,000
1,589,382,107,000
MEMBER
Fixes on datasets (more flexible), metrics (fix), and general clean-ups
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/93/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/93/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/93', 'html_url': 'https://github.com/huggingface/datasets/pull/93', 'diff_url': 'https://github.com/huggingface/datasets/pull/93.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/93.patch', 'merged_at': '2020-05-13T15:01:47Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/92
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/92/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/92/comments
https://api.github.com/repos/huggingface/datasets/issues/92/events
https://github.com/huggingface/datasets/pull/92
617,341,505
MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky
92
[WIP] add wmt14
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,366,523,000
1,589,627,858,000
1,589,627,857,000
MEMBER
WMT14 takes forever to download :-/ - WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/92/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/92/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/92', 'html_url': 'https://github.com/huggingface/datasets/pull/92', 'diff_url': 'https://github.com/huggingface/datasets/pull/92.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/92.patch', 'merged_at': '2020-05-16T11:17:37Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/91
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/91/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/91/comments
https://api.github.com/repos/huggingface/datasets/issues/91/events
https://github.com/huggingface/datasets/pull/91
617,339,484
MDExOlB1bGxSZXF1ZXN0NDE3Mjc0MjA0
91
[Paracrawl] add paracrawl
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,366,340,000
1,589,366,415,000
1,589,366,414,000
MEMBER
- Huge dataset - took ~1h to download
- Also, this PR reformats all dataset scripts and adds `datasets` to `make style`
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/91/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/91/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/91', 'html_url': 'https://github.com/huggingface/datasets/pull/91', 'diff_url': 'https://github.com/huggingface/datasets/pull/91.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/91.patch', 'merged_at': '2020-05-13T10:40:14Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/90
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/90/comments
https://api.github.com/repos/huggingface/datasets/issues/90/events
https://github.com/huggingface/datasets/pull/90
617,311,877
MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0
90
Add download gg drive
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "awesome - so no manual downloaded needed here? ", "Yes exactly. It works like a standard download" ]
1,589,363,762,000
1,589,373,988,000
1,589,364,331,000
MEMBER
We can now add datasets that download from google drive
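For context, a minimal sketch of the usual confirm-token trick for downloading large files from Google Drive. This is an illustration under assumptions: the URL, cookie prefix, and helper function below are not taken from the library's download manager.

```python
import requests


def download_from_google_drive(file_id: str, destination: str) -> None:
    # Illustrative sketch only; the library's actual download manager may differ.
    url = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(url, params={"id": file_id}, stream=True)

    # Large files trigger a confirmation page; the token is returned as a cookie.
    token = next(
        (v for k, v in response.cookies.items() if k.startswith("download_warning")),
        None,
    )
    if token:
        response = session.get(
            url, params={"id": file_id, "confirm": token}, stream=True
        )

    # Stream the payload to disk in chunks to keep memory usage low.
    with open(destination, "wb") as f:
        for chunk in response.iter_content(chunk_size=32768):
            if chunk:
                f.write(chunk)
```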
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/90/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/90/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/90', 'html_url': 'https://github.com/huggingface/datasets/pull/90', 'diff_url': 'https://github.com/huggingface/datasets/pull/90.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/90.patch', 'merged_at': '2020-05-13T10:05:31Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/89
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/89/comments
https://api.github.com/repos/huggingface/datasets/issues/89/events
https://github.com/huggingface/datasets/pull/89
617,295,069
MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4
89
Add list and inspect methods - cleanup hf_api
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,362,215,000
1,589,378,700,000
1,589,362,390,000
MEMBER
Add a bunch of methods to easily list and inspect the processing scripts uploaded on S3:
```python
nlp.list_datasets()
nlp.list_metrics()

# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)

# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_metric(path, local_path)
```
Also clean up the `HfAPI` to use `dataclasses` for a better user experience.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/89/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/89/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/89', 'html_url': 'https://github.com/huggingface/datasets/pull/89', 'diff_url': 'https://github.com/huggingface/datasets/pull/89.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/89.patch', 'merged_at': '2020-05-13T09:33:10Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/88
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/88/comments
https://api.github.com/repos/huggingface/datasets/issues/88/events
https://github.com/huggingface/datasets/pull/88
617,284,664
MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw
88
Add wiki40b
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) " ]
1,589,361,361,000
1,589,373,115,000
1,589,373,114,000
MEMBER
This one is a Beam dataset that downloads files using TensorFlow. I tested it on a small config and it works fine.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/88/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/88/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/88', 'html_url': 'https://github.com/huggingface/datasets/pull/88', 'diff_url': 'https://github.com/huggingface/datasets/pull/88.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/88.patch', 'merged_at': '2020-05-13T12:31:54Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/87
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/87/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/87/comments
https://api.github.com/repos/huggingface/datasets/issues/87/events
https://github.com/huggingface/datasets/pull/87
617,267,118
MDExOlB1bGxSZXF1ZXN0NDE3MjE1NzA0
87
Add Flores
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,359,889,000
1,589,361,814,000
1,589,361,813,000
MEMBER
Beautiful language for sure!
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/87/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/87/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/87', 'html_url': 'https://github.com/huggingface/datasets/pull/87', 'diff_url': 'https://github.com/huggingface/datasets/pull/87.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/87.patch', 'merged_at': '2020-05-13T09:23:33Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/86
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/86/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/86/comments
https://api.github.com/repos/huggingface/datasets/issues/86/events
https://github.com/huggingface/datasets/pull/86
617,260,972
MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2
86
[Load => load_dataset] change naming
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,359,380,000
1,589,359,858,000
1,589,359,857,000
MEMBER
Rename leftovers @thomwolf
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/86/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/86/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/86', 'html_url': 'https://github.com/huggingface/datasets/pull/86', 'diff_url': 'https://github.com/huggingface/datasets/pull/86.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/86.patch', 'merged_at': '2020-05-13T08:50:57Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/85
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/85/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/85/comments
https://api.github.com/repos/huggingface/datasets/issues/85/events
https://github.com/huggingface/datasets/pull/85
617,253,428
MDExOlB1bGxSZXF1ZXN0NDE3MjA0ODA4
85
Add boolq
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Awesome :-) Thanks for adding the function to the Mock DL Manager" ]
1,589,358,747,000
1,589,360,979,000
1,589,360,978,000
MEMBER
I just added the dummy data for this dataset. This one uses `tf.io.gfile.copy` to download the data, but I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for tests.
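As an illustration of the download mechanism mentioned above, a minimal sketch using `tf.io.gfile.copy`. The `gs://boolq/...` path and the helper function are assumptions for this example, not code from the PR or the dataset script.

```python
import os

import tensorflow as tf


def fetch_boolq_split(split_file: str, local_dir: str) -> str:
    """Copy one BoolQ split from a GCS bucket to a local directory.

    The bucket path is an assumed example; the dataset script and the
    mock download manager handle this differently in practice.
    """
    os.makedirs(local_dir, exist_ok=True)
    local_path = os.path.join(local_dir, split_file)
    # tf.io.gfile understands gs:// URLs, so copy() behaves like a download here.
    tf.io.gfile.copy(f"gs://boolq/{split_file}", local_path, overwrite=True)
    return local_path


# Example usage (hypothetical):
# local_train = fetch_boolq_split("train.jsonl", "./boolq_data")
```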
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/85/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/85/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/85', 'html_url': 'https://github.com/huggingface/datasets/pull/85', 'diff_url': 'https://github.com/huggingface/datasets/pull/85.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/85.patch', 'merged_at': '2020-05-13T09:09:38Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/84
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/84/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/84/comments
https://api.github.com/repos/huggingface/datasets/issues/84/events
https://github.com/huggingface/datasets/pull/84
617,249,815
MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz
84
[TedHrLr] add left dummy data
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,358,440,000
1,589,358,562,000
1,589,358,561,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/84/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/84/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/84', 'html_url': 'https://github.com/huggingface/datasets/pull/84', 'diff_url': 'https://github.com/huggingface/datasets/pull/84.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/84.patch', 'merged_at': '2020-05-13T08:29:21Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/83
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/83/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/83/comments
https://api.github.com/repos/huggingface/datasets/issues/83/events
https://github.com/huggingface/datasets/pull/83
616,863,601
MDExOlB1bGxSZXF1ZXN0NDE2ODkyOTUz
83
New datasets
{'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/mariamabarham/followers', 'following_url': 'https://api.github.com/users/mariamabarham/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariamabarham/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariamabarham/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariamabarham/subscriptions', 'organizations_url': 'https://api.github.com/users/mariamabarham/orgs', 'repos_url': 'https://api.github.com/users/mariamabarham/repos', 'events_url': 'https://api.github.com/users/mariamabarham/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariamabarham/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,307,747,000
1,589,307,767,000
1,589,307,765,000
CONTRIBUTOR
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/83/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/83/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/83', 'html_url': 'https://github.com/huggingface/datasets/pull/83', 'diff_url': 'https://github.com/huggingface/datasets/pull/83.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/83.patch', 'merged_at': '2020-05-12T18:22:45Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/82
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/82/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/82/comments
https://api.github.com/repos/huggingface/datasets/issues/82/events
https://github.com/huggingface/datasets/pull/82
616,805,194
MDExOlB1bGxSZXF1ZXN0NDE2ODQ1Njc5
82
[Datasets] add ted_hrlr
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,302,010,000
1,589,356,374,000
1,589,356,373,000
MEMBER
@thomwolf - After looking at `xnli`, I think it's better to keep the translation features and add a `translation` key to make them work in our framework. The result looks like this:

![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png)

You can see that each split has a `translation` key whose value is the nlp.features.Translation object. That's a simple change. If that's OK with you, I will add dummy data for the other configs and treat the other translation scripts in the same way.
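For readers unfamiliar with the feature type, a minimal sketch of what such a `translation` feature declaration might look like. The language codes, the example dict, and the assumption that `nlp.features.Translation` takes a `languages` argument are all illustrative, not the exact `ted_hrlr` schema.

```python
import nlp

# Hypothetical feature spec: each example carries a `translation` dict
# mapping language codes to strings, typed as nlp.features.Translation.
features = nlp.Features(
    {
        "translation": nlp.features.Translation(languages=["pt", "en"]),
    }
)

# An example matching this assumed schema would look like:
example = {"translation": {"pt": "ola mundo", "en": "hello world"}}
```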
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/82/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/82/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/82', 'html_url': 'https://github.com/huggingface/datasets/pull/82', 'diff_url': 'https://github.com/huggingface/datasets/pull/82.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/82.patch', 'merged_at': '2020-05-13T07:52:52Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/81
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/81/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/81/comments
https://api.github.com/repos/huggingface/datasets/issues/81/events
https://github.com/huggingface/datasets/pull/81
616,793,010
MDExOlB1bGxSZXF1ZXN0NDE2ODM1NzE1
81
add tests
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,300,899,000
1,589,355,837,000
1,589,355,836,000
MEMBER
Tests for py_utils functions and for the BaseReader used to read from arrow and parquet. I also removed unused utils functions.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/81/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/81/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/81', 'html_url': 'https://github.com/huggingface/datasets/pull/81', 'diff_url': 'https://github.com/huggingface/datasets/pull/81.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/81.patch', 'merged_at': '2020-05-13T07:43:56Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/80
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/80/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/80/comments
https://api.github.com/repos/huggingface/datasets/issues/80/events
https://github.com/huggingface/datasets/pull/80
616,786,803
MDExOlB1bGxSZXF1ZXN0NDE2ODMwNjk3
80
Add nbytes + nexamples check
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "Looks good to me! Should we hard code those numbers in the config classes and make sure that when loading a dataset that the numbers match? " ]
1,589,300,323,000
1,589,356,354,000
1,589,356,353,000
MEMBER
### Save size and number of examples
Now when you do `save_checksums`, it also creates `cached_sizes.txt` right next to the checksum file. This new file stores the byte size and the number of examples of each split that has been prepared and stored in the cache. Example:
```
# Cached sizes: <full_config_name> <num_bytes> <num_examples>
hansards/house/1.0.0/test 22906629 122290
hansards/house/1.0.0/train 191459584 947969
hansards/senate/1.0.0/test 5711686 25553
hansards/senate/1.0.0/train 40324278 182135
```

### Check processing output
If there is a `cached_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen.

### Fill Dataset Info
All the split infos and the checksums are now stored correctly in `DatasetInfo` after `download_and_prepare`.

### Check space on disk before running `download_and_prepare`
Check whether the available disk space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_sizes.txt`. This is not ideal though, as it considers the files for all configs.

TODO: A better way to do it would be to save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It could also be the occasion to factorize all the `download_and_prepare` verifications. Maybe in the next PR?
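To make the verification step concrete, here is a small illustrative parser/checker for a `cached_sizes.txt`-style file, written against the format shown above. It is a sketch, not the library's actual verification code; the function names are made up.

```python
from typing import Dict, Tuple


def parse_cached_sizes(path: str) -> Dict[str, Tuple[int, int]]:
    """Return {full_config_name: (num_bytes, num_examples)} from a cached_sizes.txt file."""
    sizes: Dict[str, Tuple[int, int]] = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and the header comment.
            if not line or line.startswith("#"):
                continue
            name, num_bytes, num_examples = line.split()
            sizes[name] = (int(num_bytes), int(num_examples))
    return sizes


def check_split(
    sizes: Dict[str, Tuple[int, int]],
    name: str,
    num_bytes: int,
    num_examples: int,
) -> None:
    """Raise if a freshly prepared split does not match the recorded sizes."""
    expected = sizes.get(name)
    if expected is not None and expected != (num_bytes, num_examples):
        raise ValueError(
            f"Size mismatch for {name}: expected {expected}, "
            f"got {(num_bytes, num_examples)}"
        )
```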
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/80/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/80/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/80', 'html_url': 'https://github.com/huggingface/datasets/pull/80', 'diff_url': 'https://github.com/huggingface/datasets/pull/80.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/80.patch', 'merged_at': '2020-05-13T07:52:33Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/79
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/79/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/79/comments
https://api.github.com/repos/huggingface/datasets/issues/79/events
https://github.com/huggingface/datasets/pull/79
616,785,613
MDExOlB1bGxSZXF1ZXN0NDE2ODI5NzMy
79
[Convert] add new pattern
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,300,211,000
1,589,300,230,000
1,589,300,229,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/79/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/79/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/79', 'html_url': 'https://github.com/huggingface/datasets/pull/79', 'diff_url': 'https://github.com/huggingface/datasets/pull/79.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/79.patch', 'merged_at': '2020-05-12T16:17:09Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/78
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/78/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/78/comments
https://api.github.com/repos/huggingface/datasets/issues/78/events
https://github.com/huggingface/datasets/pull/78
616,774,275
MDExOlB1bGxSZXF1ZXN0NDE2ODIwNzU5
78
[Tests] skip beam dataset tests for now
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "@lhoestq - I moved the wkipedia file to the \"correct\" folder. ", "Nice thanks !" ]
1,589,299,258,000
1,589,300,184,000
1,589,300,182,000
MEMBER
For now we will skip tests for Beam Datasets
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/78/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/78/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/78', 'html_url': 'https://github.com/huggingface/datasets/pull/78', 'diff_url': 'https://github.com/huggingface/datasets/pull/78.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/78.patch', 'merged_at': '2020-05-12T16:16:22Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/77
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/77/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/77/comments
https://api.github.com/repos/huggingface/datasets/issues/77/events
https://github.com/huggingface/datasets/pull/77
616,674,601
MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz
77
New datasets
{'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/mariamabarham/followers', 'following_url': 'https://api.github.com/users/mariamabarham/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariamabarham/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariamabarham/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariamabarham/subscriptions', 'organizations_url': 'https://api.github.com/users/mariamabarham/orgs', 'repos_url': 'https://api.github.com/users/mariamabarham/repos', 'events_url': 'https://api.github.com/users/mariamabarham/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariamabarham/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,291,519,000
1,589,292,136,000
1,589,292,135,000
CONTRIBUTOR
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/77/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/77/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/77', 'html_url': 'https://github.com/huggingface/datasets/pull/77', 'diff_url': 'https://github.com/huggingface/datasets/pull/77.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/77.patch', 'merged_at': '2020-05-12T14:02:15Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/76
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/76/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/76/comments
https://api.github.com/repos/huggingface/datasets/issues/76/events
https://github.com/huggingface/datasets/pull/76
616,579,228
MDExOlB1bGxSZXF1ZXN0NDE2NjYyMTk2
76
pin flake 8
{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/users/patrickvonplaten/followers', 'following_url': 'https://api.github.com/users/patrickvonplaten/following{/other_user}', 'gists_url': 'https://api.github.com/users/patrickvonplaten/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/patrickvonplaten/subscriptions', 'organizations_url': 'https://api.github.com/users/patrickvonplaten/orgs', 'repos_url': 'https://api.github.com/users/patrickvonplaten/repos', 'events_url': 'https://api.github.com/users/patrickvonplaten/events{/privacy}', 'received_events_url': 'https://api.github.com/users/patrickvonplaten/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[]
1,589,282,729,000
1,589,282,855,000
1,589,282,854,000
MEMBER
Flake 8's new version does not like our format. Pinning the version for now.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/76/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/76/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/76', 'html_url': 'https://github.com/huggingface/datasets/pull/76', 'diff_url': 'https://github.com/huggingface/datasets/pull/76.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/76.patch', 'merged_at': '2020-05-12T11:27:34Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/75
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/75/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/75/comments
https://api.github.com/repos/huggingface/datasets/issues/75/events
https://github.com/huggingface/datasets/pull/75
616,520,163
MDExOlB1bGxSZXF1ZXN0NDE2NjE0MzU1
75
WIP adding metrics
{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'type': 'User', 'site_admin': False}
[]
closed
False
[]
[ "It's all about my metric stuff so I'll probably merge it unless you want to have a look.\r\n\r\nTook the occasion to remove the old doc and requirements.txt" ]
1,589,277,120,000
1,589,355,852,000
1,589,355,850,000
MEMBER
Adding the following metrics as identified by @mariamabarham:

1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual)
2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu
3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (PyPI package), https://github.com/mjpost/sacrebleu (GitHub implementation)
4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual)
5. Seqeval: https://github.com/chakki-works/seqeval (GitHub implementation), https://pypi.org/project/seqeval/0.0.12/ (PyPI package)
6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets: https://github.com/ns-moosavi/coval
7. SQuAD v1 evaluation script
8. SQuAD v2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/
9. GLUE
10. XNLI

Not now:

1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py
2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py
3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py
4. Pearson_correlation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py
5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py
6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py
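As a concrete reference for one of the listed metrics, a minimal sacrebleu usage sketch on toy strings. This is not the wrapper code added in this PR; the sentences are made up for illustration.

```python
import sacrebleu

# Toy system outputs and their references (one reference per hypothesis).
hypotheses = ["the cat sat on the mat", "hello there"]
references = ["the cat is sitting on the mat", "hello there general"]

# sacrebleu expects a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```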
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/75/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/75/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/75', 'html_url': 'https://github.com/huggingface/datasets/pull/75', 'diff_url': 'https://github.com/huggingface/datasets/pull/75.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/75.patch', 'merged_at': '2020-05-13T07:44:10Z'}
true